Search Results for author: Kunze Wang

Found 9 papers, 6 papers with code

Re-Temp: Relation-Aware Temporal Representation Learning for Temporal Knowledge Graph Completion

no code implementations · 24 Oct 2023 · Kunze Wang, Soyeon Caren Han, Josiah Poon

Temporal Knowledge Graph Completion (TKGC) under the extrapolation setting aims to predict the missing entity from a fact in the future, posing a challenge that aligns more closely with real-world prediction problems.

Tasks: Knowledge Graph Completion, Relation +2

Graph Neural Networks for Text Classification: A Survey

no code implementations · 23 Apr 2023 · Kunze Wang, Yihao Ding, Soyeon Caren Han

Text Classification is one of the most essential and fundamental problems in Natural Language Processing.

Tasks: graph construction, text-classification +1

InducT-GCN: Inductive Graph Convolutional Networks for Text Classification

1 code implementation · 1 Jun 2022 · Kunze Wang, Soyeon Caren Han, Josiah Poon

Under the extreme settings with no extra resource and limited amount of training set, can we still learn an inductive graph-based text classification model?

Tasks: Text Classification

Understanding Graph Convolutional Networks for Text Classification

1 code implementation · 30 Mar 2022 · Soyeon Caren Han, Zihan Yuan, Kunze Wang, Siqu Long, Josiah Poon

Graph Convolutional Networks (GCN) have been effective at tasks that have rich relational structure and can preserve global structure information of a dataset in graph embeddings.

Tasks: graph construction, text-classification +1
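The propagation rule these GCN text-classification papers build on can be sketched as follows. This is a minimal toy illustration of one GCN layer, H' = ReLU(Â H W) with Â the symmetrically normalized adjacency with self-loops, not the authors' implementation; the graph, features, and weights here are made up.

```python
import numpy as np

# Toy 3-node graph (a path: 0-1-2); adjacency is illustrative only.
A = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])

A_hat = A + np.eye(3)                        # add self-loops
D_inv_sqrt = np.diag(A_hat.sum(1) ** -0.5)   # D^{-1/2} degree normalization
A_norm = D_inv_sqrt @ A_hat @ D_inv_sqrt     # symmetric normalized adjacency

H = np.eye(3)                                # one-hot node features
rng = np.random.default_rng(0)
W = rng.standard_normal((3, 4))              # random layer weights (hypothetical)

H_next = np.maximum(0.0, A_norm @ H @ W)     # ReLU(Â H W): one propagation step
print(H_next.shape)                          # (3, 4)
```

Stacking two such layers gives each node a receptive field of its 2-hop neighborhood, which is how a document-word graph lets label information flow from labeled documents to shared words and back.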

VICTR: Visual Information Captured Text Representation for Text-to-Vision Multimodal Tasks

1 code implementation · COLING 2020 · Caren Han, Siqu Long, Siwen Luo, Kunze Wang, Josiah Poon

We propose a new visual contextual text representation for text-to-image multimodal tasks, VICTR, which captures rich visual semantic information of objects from the text input.

Tasks: Dependency Parsing, Sentence

VICTR: Visual Information Captured Text Representation for Text-to-Image Multimodal Tasks

1 code implementation · 7 Oct 2020 · Soyeon Caren Han, Siqu Long, Siwen Luo, Kunze Wang, Josiah Poon

We propose a new visual contextual text representation for text-to-image multimodal tasks, VICTR, which captures rich visual semantic information of objects from the text input.

Ranked #24 on Text-to-Image Generation on MS COCO (Inception score metric)

Tasks: Dependency Parsing, Sentence +1
