Search Results for author: Yuan-Fang Li

Found 31 papers, 14 papers with code

Simple or Complex? Complexity-Controllable Question Generation with Soft Templates and Deep Mixture of Experts Model

no code implementations 13 Oct 2021 Sheng Bi, Xiya Cheng, Yuan-Fang Li, Lizhen Qu, Shirong Shen, Guilin Qi, Lu Pan, Yinlin Jiang

The ability to generate natural-language questions with controlled complexity levels is highly desirable as it further expands the applicability of question generation.

Question Generation Question Similarity

Semi-supervised Network Embedding with Differentiable Deep Quantisation

no code implementations 20 Aug 2021 Tao He, Lianli Gao, Jingkuan Song, Yuan-Fang Li

Learning accurate low-dimensional embeddings for a network is a crucial task as it facilitates many downstream network analytics tasks.

Link Prediction Network Embedding +1

Unsupervised Domain-adaptive Hash for Networks

no code implementations 20 Aug 2021 Tao He, Lianli Gao, Jingkuan Song, Yuan-Fang Li

Abundant real-world data can be naturally represented by large-scale networks, which demands efficient and effective learning algorithms.

Link Prediction Node Classification

Exploiting Scene Graphs for Human-Object Interaction Detection

1 code implementation ICCV 2021 Tao He, Lianli Gao, Jingkuan Song, Yuan-Fang Li

Human-Object Interaction (HOI) detection is a fundamental visual task aiming at localizing and recognizing interactions between humans and objects.

Human-Object Interaction Detection

XL-Sum: Large-Scale Multilingual Abstractive Summarization for 44 Languages

1 code implementation 25 Jun 2021 Tahmid Hasan, Abhik Bhattacharjee, Md Saiful Islam, Kazi Samin, Yuan-Fang Li, Yong-Bin Kang, M. Sohel Rahman, Rifat Shahriyar

XL-Sum yields results competitive with those obtained on comparable monolingual datasets: with multilingual training we achieve ROUGE-2 scores above 11 on the 10 languages we benchmark, with several exceeding 15.

Abstractive Text Summarization

Adaptive Knowledge-Enhanced Bayesian Meta-Learning for Few-shot Event Detection

no code implementations 20 May 2021 Shirong Shen, Tongtong Wu, Guilin Qi, Yuan-Fang Li, Gholamreza Haffari, Sheng Bi

Event detection (ED) aims at detecting event trigger words in sentences and classifying them into specific event types.

Event Detection Few-Shot Learning

Multi-Scale Contrastive Siamese Networks for Self-Supervised Graph Representation Learning

1 code implementation 12 May 2021 Ming Jin, Yizhen Zheng, Yuan-Fang Li, Chen Gong, Chuan Zhou, Shirui Pan

To overcome this problem, inspired by the recent success of graph contrastive learning and Siamese networks in visual representation learning, we propose a novel self-supervised approach in this paper to learn node representations by enhancing Siamese self-distillation with multi-scale contrastive learning.

Contrastive Learning Graph Representation Learning
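
As a rough illustration of the contrastive component described above (not the authors' implementation; the function and tensor names below are hypothetical), an InfoNCE-style node-level loss between two augmented graph views can be sketched as follows, where each node is pulled towards its counterpart in the other view and pushed away from all other nodes:

```python
# Minimal sketch of a cross-view, node-level contrastive (InfoNCE-style) loss.
# Illustrative only; in practice z1 and z2 would come from a GNN encoder applied
# to two augmented views of the same graph.
import torch
import torch.nn.functional as F

def cross_view_contrastive_loss(z1, z2, tau=0.5):
    z1 = F.normalize(z1, dim=1)            # (N, d) node embeddings, view 1
    z2 = F.normalize(z2, dim=1)            # (N, d) node embeddings, view 2
    logits = z1 @ z2.t() / tau             # (N, N) cross-view similarities
    targets = torch.arange(z1.size(0))     # node i matches node i across views
    return F.cross_entropy(logits, targets)

loss = cross_view_contrastive_loss(torch.randn(32, 64), torch.randn(32, 64))
```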

Temporal Cascade and Structural Modelling of EHRs for Granular Readmission Prediction

no code implementations 4 Feb 2021 Bhagya Hettige, Weiqing Wang, Yuan-Fang Li, Suong Le, Wray Buntine

Although a point process (e.g., Hawkes process) is able to model a cascade temporal relationship, it strongly relies on a prior generative process assumption.

Decision Making Point Processes +1
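
For reference, the Hawkes process mentioned above is a self-exciting point process whose conditional intensity, with the commonly used exponential kernel, is $\lambda(t) = \mu + \sum_{t_i < t} \alpha\, e^{-\beta (t - t_i)}$, where $\mu$ is the base rate, each past event at time $t_i$ raises the intensity by $\alpha$, and $\beta$ controls how quickly that excitation decays. This fixed parametric form is the kind of prior generative assumption the abstract refers to.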

Curriculum-Meta Learning for Order-Robust Continual Relation Extraction

2 code implementations 6 Jan 2021 Tongtong Wu, Xuekai Li, Yuan-Fang Li, Reza Haffari, Guilin Qi, Yujin Zhu, Guoqiang Xu

We propose a novel curriculum-meta learning method to tackle the above two challenges in continual relation extraction.

Curriculum Learning Meta-Learning +2

Retrieve, Program, Repeat: Complex Knowledge Base Question Answering via Alternate Meta-learning

1 code implementation 29 Oct 2020 Yuncheng Hua, Yuan-Fang Li, Gholamreza Haffari, Guilin Qi, Wei Wu

However, this comes at the cost of manually labeling similar questions to learn a retrieval model, which is tedious and expensive.

Knowledge Base Question Answering Meta-Learning

Few-Shot Complex Knowledge Base Question Answering via Meta Reinforcement Learning

1 code implementation EMNLP 2020 Yuncheng Hua, Yuan-Fang Li, Gholamreza Haffari, Guilin Qi, Tongtong Wu

Our method achieves state-of-the-art performance on the CQA dataset (Saha et al., 2018) while using only five trial trajectories for the top-5 retrieved questions in each support set, and meta-training on tasks constructed from only 1% of the training set.

Knowledge Base Question Answering Meta Reinforcement Learning +1

Less is More: Data-Efficient Complex Question Answering over Knowledge Bases

1 code implementation 29 Oct 2020 Yuncheng Hua, Yuan-Fang Li, Guilin Qi, Wei Wu, Jingyao Zhang, Daiqing Qi

Our framework consists of a neural generator and a symbolic executor that, respectively, transforms a natural-language question into a sequence of primitive actions, and executes them over the knowledge base to compute the answer.

Multi-hop Question Answering Question Answering
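
To make the generator/executor split above concrete, here is a toy sketch of a symbolic executor running a short sequence of primitive actions over a triple-store knowledge base (the action names and the three-triple KB are invented for illustration and are not the authors' action set):

```python
# Toy symbolic executor over a triple-store knowledge base. Illustrative only:
# the primitive actions and the KB contents are hypothetical.
kb = {("Einstein", "bornIn", "Ulm"),
      ("Ulm", "locatedIn", "Germany"),
      ("Einstein", "field", "Physics")}

def select(subj, rel):        # entities o such that (subj, rel, o) is in the KB
    return {o for s, r, o in kb if s == subj and r == rel}

def hop(entities, rel):       # follow rel from a set of entities
    return {o for s, r, o in kb if s in entities and r == rel}

# "Which country was Einstein born in?" as a sequence of primitive actions:
answer = hop(select("Einstein", "bornIn"), "locatedIn")
print(answer)                 # {'Germany'}
```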

Understanding Unnatural Questions Improves Reasoning over Text

no code implementations COLING 2020 Xiao-Yu Guo, Yuan-Fang Li, Gholamreza Haffari

A prominent approach to this task is based on the programmer-interpreter framework, where the programmer maps the question into a sequence of reasoning actions which is then executed on the raw text by the interpreter.

Question Answering

Knowledge-enriched, Type-constrained and Grammar-guided Question Generation over Knowledge Bases

1 code implementation COLING 2020 Sheng Bi, Xiya Cheng, Yuan-Fang Li, Yongzhen Wang, Guilin Qi

Question generation over knowledge bases (KBQG) aims at generating natural-language questions about a subgraph, i.e., a set of (connected) triples.

Question Generation

Boosting House Price Predictions using Geo-Spatial Network Embedding

1 code implementation 1 Sep 2020 Sarkar Snigdha Sarathi Das, Mohammed Eunus Ali, Yuan-Fang Li, Yong-Bin Kang, Timos Sellis

Extensive experiments with a large number of regression techniques show that the embeddings produced by our proposed GSNE technique consistently and significantly improve the performance of the house price prediction task regardless of the downstream regression model.

Network Embedding

Learning from the Scene and Borrowing from the Rich: Tackling the Long Tail in Scene Graph Generation

no code implementations 13 Jun 2020 Tao He, Lianli Gao, Jingkuan Song, Jianfei Cai, Yuan-Fang Li

Despite the huge progress in scene graph generation in recent years, its long-tail distribution in object relationships remains a challenging and pestering issue.

Graph Generation Scene Graph Generation +1

Generating Question Titles for Stack Overflow from Mined Code Snippets

1 code implementation 20 May 2020 Zhipeng Gao, Xin Xia, John Grundy, David Lo, Yuan-Fang Li

Stack Overflow has been heavily used by software developers as a popular way to seek programming-related information from peers via the internet.

Software Engineering

$\mathtt{MedGraph}$: Structural and Temporal Representation Learning of Electronic Medical Records

1 code implementation 8 Dec 2019 Bhagya Hettige, Yuan-Fang Li, Weiqing Wang, Suong Le, Wray Buntine

To address these limitations, we present $\mathtt{MedGraph}$, a supervised EMR embedding method that captures two types of information: (1) the visit-code associations in an attributed bipartite graph, and (2) the temporal sequencing of visits through a point process.

Point Processes Representation Learning
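
As a small illustration of component (1), a visit-code bipartite graph can be represented as below (a generic sketch using networkx; the node names and attributes are hypothetical and this is not the authors' implementation). Component (2), the temporal sequencing of visits, would be modelled on top of this with a point process of the kind discussed in the granular readmission prediction entry above.

```python
# Hypothetical sketch of an attributed visit-code bipartite graph (component (1)).
import networkx as nx

G = nx.Graph()
# Visit nodes carry temporal attributes; code nodes represent medical codes.
G.add_node("visit_1", bipartite="visit", timestamp="2019-01-03")
G.add_node("visit_2", bipartite="visit", timestamp="2019-02-17")
G.add_node("ICD_E11", bipartite="code")   # e.g. type 2 diabetes
G.add_node("ICD_I10", bipartite="code")   # e.g. hypertension
# Edges record which codes were assigned during which visit.
G.add_edges_from([("visit_1", "ICD_E11"), ("visit_1", "ICD_I10"),
                  ("visit_2", "ICD_E11")])
```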

Gaussian Embedding of Large-scale Attributed Graphs

1 code implementation 2 Dec 2019 Bhagya Hettige, Yuan-Fang Li, Weiqing Wang, Wray Buntine

Graph embedding methods transform high-dimensional and complex graph contents into low-dimensional representations.

Graph Embedding Link Prediction +1

Question Generation from Paragraphs: A Tale of Two Hierarchical Models

no code implementations 8 Nov 2019 Vishwajeet Kumar, Raktim Chaki, Sai Teja Talluri, Ganesh Ramakrishnan, Yuan-Fang Li, Gholamreza Haffari

Specifically, we propose (a) a novel hierarchical BiLSTM model with selective attention and (b) a novel hierarchical Transformer architecture, both of which learn hierarchical representations of paragraphs.

Question Generation
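
A generic hierarchical encoder of the kind described in (a) can be sketched as follows (a simplified illustration that omits the paper's selective attention; the class and parameter names are hypothetical):

```python
# Simplified hierarchical (word-level then sentence-level) BiLSTM paragraph encoder.
# Illustrative only; it omits the selective-attention mechanism described above.
import torch
import torch.nn as nn

class HierarchicalEncoder(nn.Module):
    def __init__(self, vocab_size, d=128):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, d)
        self.word_rnn = nn.LSTM(d, d, batch_first=True, bidirectional=True)
        self.sent_rnn = nn.LSTM(2 * d, d, batch_first=True, bidirectional=True)

    def forward(self, paragraph):              # (num_sents, max_words) token ids
        words, _ = self.word_rnn(self.emb(paragraph))
        sent_vecs = words.mean(dim=1)          # pool word states into sentence vectors
        sents, _ = self.sent_rnn(sent_vecs.unsqueeze(0))
        return sents.squeeze(0)                # contextualised sentence states

enc = HierarchicalEncoder(vocab_size=10000)
out = enc(torch.randint(0, 10000, (5, 20)))    # a paragraph of 5 sentences, 20 tokens each
```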

Putting the Horse before the Cart: A Generator-Evaluator Framework for Question Generation from Text

no code implementations CoNLL 2019 Vishwajeet Kumar, Ganesh Ramakrishnan, Yuan-Fang Li

The generator is a sequence-to-sequence model that incorporates the structure and semantics of the question being generated.

Question Generation

One Network for Multi-Domains: Domain Adaptive Hashing with Intersectant Generative Adversarial Network

1 code implementation 1 Jul 2019 Tao He, Yuan-Fang Li, Lianli Gao, Dongxiang Zhang, Jingkuan Song

We evaluate our framework on four public benchmark datasets, on all of which our method outperforms other state-of-the-art methods on the tasks of object recognition and image retrieval.

Image Retrieval Object Recognition

Vector and Line Quantization for Billion-scale Similarity Search on GPUs

1 code implementation 2 Jan 2019 Wei Chen, Jincai Chen, Fuhao Zou, Yuan-Fang Li, Ping Lu, Qiang Wang, Wei Zhao

The inverted index structure is amenable to GPU-based implementations, and state-of-the-art systems such as Faiss are able to exploit the massive parallelism offered by GPUs.

Quantization
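
For orientation, the baseline setting the abstract refers to, an inverted-file (IVF) index searched on GPU with Faiss, looks roughly as follows. This uses the public Faiss API and is not the paper's vector-and-line quantization method; the sizes are arbitrary.

```python
# Baseline IVF (inverted-file) search on GPU with Faiss, shown for context only.
# This is not the paper's vector-and-line quantization method.
import numpy as np
import faiss  # requires the faiss-gpu package

d = 128
xb = np.random.random((100_000, d)).astype("float32")     # database vectors
xq = np.random.random((5, d)).astype("float32")           # query vectors

quantizer = faiss.IndexFlatL2(d)
index = faiss.IndexIVFFlat(quantizer, d, 1024)             # 1024 inverted lists
index.train(xb)
index.add(xb)

gpu_index = faiss.index_cpu_to_gpu(faiss.StandardGpuResources(), 0, index)
gpu_index.nprobe = 32                                       # lists probed per query
distances, ids = gpu_index.search(xq, 10)                   # top-10 neighbours
```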

Putting the Horse Before the Cart: A Generator-Evaluator Framework for Question Generation from Text

no code implementations 15 Aug 2018 Vishwajeet Kumar, Ganesh Ramakrishnan, Yuan-Fang Li

The generator is a sequence-to-sequence model that incorporates the structure and semantics of the question being generated.

Question Generation

CATERPILLAR: Coarse Grain Reconfigurable Architecture for Accelerating the Training of Deep Neural Networks

no code implementations 1 Jun 2017 Yuan-Fang Li, Ardavan Pedram

Our results suggest that smaller networks favor non-batched techniques while performance for larger networks is higher using batched operations.
