no code implementations • 9 Nov 2022 • Wannita Takerngsaksiri, Chakkrit Tantithamthavorn, Yuan-Fang Li
However, existing syntax-aware code completion approaches are not on-the-fly: we found that for two-thirds of the characters developers type, an AST cannot be extracted, since AST parsing requires syntactically correct source code, limiting their practicality in real-world scenarios.
no code implementations • 7 Nov 2022 • Xiao-Yu Guo, Yuan-Fang Li, Gholamreza Haffari
Multi-hop reading comprehension requires not only the ability to reason over raw text but also the ability to combine multiple pieces of evidence.
1 code implementation • 17 Oct 2022 • Tongtong Wu, Guitao Wang, Jinming Zhao, Zhaoran Liu, Guilin Qi, Yuan-Fang Li, Gholamreza Haffari
We explore speech relation extraction via two approaches: a pipeline approach that performs text-based extraction on the output of a pretrained ASR module, and an end-to-end approach via a newly proposed encoder-decoder model, which we call SpeechRE.
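The pipeline approach above composes two independent stages: transcribe first, then extract relations from the transcript. A minimal structural sketch in plain Python, where both `asr_transcribe` and `extract_relations` are hypothetical stubs standing in for the pretrained ASR module and a text-based relation extractor (not the authors' actual models):

```python
def asr_transcribe(audio):
    # Stand-in for a pretrained ASR module; a real system would decode
    # the waveform. Here we just return a reference transcript.
    return audio["reference_transcript"]

def extract_relations(text):
    # Stand-in for a text-based relation extractor: naive pattern match
    # against a tiny, illustrative relation inventory.
    relations = []
    for subj, pred, obj in [("Monash", "located_in", "Melbourne")]:
        if subj in text and obj in text:
            relations.append((subj, pred, obj))
    return relations

def pipeline_speech_re(audio):
    # The pipeline: ASR output feeds directly into text-based extraction,
    # so ASR transcription errors propagate into the extraction stage.
    return extract_relations(asr_transcribe(audio))

sample = {"reference_transcript": "Monash University is in Melbourne."}
print(pipeline_speech_re(sample))  # [('Monash', 'located_in', 'Melbourne')]
```

The error-propagation visible in this composition is precisely what motivates the end-to-end alternative.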
no code implementations • COLING 2022 • Jiayi Chen, Xiao-Yu Guo, Yuan-Fang Li, Gholamreza Haffari
Answering complex questions that require multi-step multi-type reasoning over raw text is challenging, especially when conducting numerical reasoning.
no code implementations • 17 Aug 2022 • Tao He, Lianli Gao, Jingkuan Song, Yuan-Fang Li
In this paper, we introduce open-vocabulary scene graph generation, a novel, realistic and challenging setting in which a model is trained on a set of base object classes but is required to infer relations for unseen target object classes.
no code implementations • 2 Jun 2022 • Lianli Gao, Pengpeng Zeng, Jingkuan Song, Yuan-Fang Li, Wu Liu, Tao Mei, Heng Tao Shen
To date, visual question answering (VQA) (i.e., image QA and video QA) is still a holy grail in vision and language understanding, especially for video QA.
no code implementations • 21 Mar 2022 • Fatemeh Shiri, Terry Yue Zhuo, Zhuang Li, Van Nguyen, Shirui Pan, Weiqing Wang, Reza Haffari, Yuan-Fang Li
In this paper, we investigate how to exploit paraphrasing methods for the automated generation of large-scale training datasets (in the form of paraphrased utterances and their corresponding logical forms in SQL format) and present our experimental results using real-world data in the maritime domain.
no code implementations • 12 Mar 2022 • Kang Xu, Xiaoqiu Lu, Yuan-Fang Li, Tongtong Wu, Guilin Qi, Ning Ye, Dong Wang, Zheng Zhou
NTM-DMIE is a neural network method for topic learning that maximizes the mutual information between input documents and their latent topic representations.
1 code implementation • 17 Feb 2022 • Ming Jin, Yu Zheng, Yuan-Fang Li, Siheng Chen, Bin Yang, Shirui Pan
Multivariate time series forecasting has long received significant attention in real-world applications, such as energy consumption and traffic prediction.
1 code implementation • 16 Dec 2021 • Abhik Bhattacharjee, Tahmid Hasan, Wasi Uddin Ahmad, Yuan-Fang Li, Yong-Bin Kang, Rifat Shahriyar
We present CrossSum, a large-scale cross-lingual abstractive summarization dataset comprising 1.7 million article-summary samples in 1500+ language pairs.
Abstractive Text Summarization
Cross-Lingual Abstractive Summarization
no code implementations • 20 Nov 2021 • Yizhen Zheng, Ming Jin, Shirui Pan, Yuan-Fang Li, Hao Peng, Ming Li, Zhao Li
To overcome the aforementioned problems, we introduce G-Zoom, a novel self-supervised graph representation learning algorithm based on graph contrastive adjusted zooming, which learns node representations by leveraging the proposed adjusted zooming scheme.
no code implementations • 9 Nov 2021 • Yong-Bin Kang, Abdur Rahim Mohammad Forkan, Prem Prakash Jayaraman, Natalie Wieland, Elizabeth Kollias, Hung Du, Steven Thomson, Yuan-Fang Li
There has been a recent and rapid shift to digital learning, hastened by the pandemic and also driven by the now-ubiquitous availability of digital tools and platforms, making digital learning ever more accessible.
no code implementations • Findings (EMNLP) 2021 • Sheng Bi, Xiya Cheng, Yuan-Fang Li, Lizhen Qu, Shirong Shen, Guilin Qi, Lu Pan, Yinlin Jiang
The ability to generate natural-language questions with controlled complexity levels is highly desirable as it further expands the applicability of question generation.
no code implementations • ICLR 2022 • Tongtong Wu, Massimo Caccia, Zhuang Li, Yuan-Fang Li, Guilin Qi, Gholamreza Haffari
In this paper, we thoroughly compare continual learning performance over combinations of 5 PLMs and 4 families of CL methods on 3 benchmarks in 2 typical incremental settings.
no code implementations • 29 Sep 2021 • Ming Jin, Yuan-Fang Li, Yu Zheng, Bin Yang, Shirui Pan
Spatiotemporal representation learning on multivariate time series has received tremendous attention in forecasting traffic and energy data.
no code implementations • Findings (EMNLP) 2021 • Xiao-Yu Guo, Yuan-Fang Li, Gholamreza Haffari
Numerical reasoning skills are essential for complex question answering (CQA) over text.
no code implementations • 20 Aug 2021 • Tao He, Lianli Gao, Jingkuan Song, Yuan-Fang Li
Learning accurate low-dimensional embeddings for a network is a crucial task as it facilitates many downstream network analytics tasks.
no code implementations • 20 Aug 2021 • Tao He, Lianli Gao, Jingkuan Song, Yuan-Fang Li
Abundant real-world data can be naturally represented by large-scale networks, which demands efficient and effective learning algorithms.
1 code implementation • ICCV 2021 • Tao He, Lianli Gao, Jingkuan Song, Yuan-Fang Li
Human-Object Interaction (HOI) detection is a fundamental visual task aiming at localizing and recognizing interactions between humans and objects.
no code implementations • 19 Aug 2021 • Tao He, Lianli Gao, Jingkuan Song, Jianfei Cai, Yuan-Fang Li
Scene graphs provide valuable information to many downstream tasks.
1 code implementation • Findings (ACL) 2021 • Tahmid Hasan, Abhik Bhattacharjee, Md Saiful Islam, Kazi Samin, Yuan-Fang Li, Yong-Bin Kang, M. Sohel Rahman, Rifat Shahriyar
XL-Sum yields competitive results compared to those obtained using similar monolingual datasets: we show ROUGE-2 scores higher than 11 on the 10 languages we benchmark, with some exceeding 15, as obtained by multilingual training.
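ROUGE-2 measures bigram overlap between a candidate summary and a reference. A minimal sketch of the F1 variant in plain Python, omitting the stemming and tokenization details of the official implementation:

```python
from collections import Counter

def bigrams(tokens):
    # All consecutive token pairs in the sequence.
    return [tuple(tokens[i:i + 2]) for i in range(len(tokens) - 1)]

def rouge2_f1(reference, candidate):
    # Count clipped bigram matches, then combine precision and recall.
    ref = Counter(bigrams(reference.split()))
    cand = Counter(bigrams(candidate.split()))
    overlap = sum((ref & cand).values())
    if overlap == 0:
        return 0.0
    recall = overlap / sum(ref.values())
    precision = overlap / sum(cand.values())
    return 2 * precision * recall / (precision + recall)

print(rouge2_f1("the cat sat on the mat", "the cat sat on a mat"))  # 0.6
```

A score of 11 in the excerpt above corresponds to 0.11 on this 0–1 scale (scores are conventionally reported ×100).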
no code implementations • Findings (ACL) 2021 • Shirong Shen, Tongtong Wu, Guilin Qi, Yuan-Fang Li, Gholamreza Haffari, Sheng Bi
Event detection (ED) aims at detecting event trigger words in sentences and classifying them into specific event types.
1 code implementation • 12 May 2021 • Ming Jin, Yizhen Zheng, Yuan-Fang Li, Chen Gong, Chuan Zhou, Shirui Pan
To overcome this problem, inspired by the recent success of graph contrastive learning and Siamese networks in visual representation learning, we propose a novel self-supervised approach in this paper to learn node representations by enhancing Siamese self-distillation with multi-scale contrastive learning.
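The contrastive objective underlying approaches like this pulls two augmented views of the same node together while pushing views of other nodes apart. A minimal InfoNCE-style sketch in plain Python; this is an illustrative single-scale loss, not the paper's exact multi-scale formulation, and the temperature value is an assumption:

```python
import math

def cosine(u, v):
    # Cosine similarity between two embedding vectors.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def contrastive_loss(anchor, positive, negatives, tau=0.5):
    # InfoNCE: the anchor's similarity to its positive view should dominate
    # its similarity to negative samples; tau sharpens the distribution.
    pos = math.exp(cosine(anchor, positive) / tau)
    neg = sum(math.exp(cosine(anchor, n) / tau) for n in negatives)
    return -math.log(pos / (pos + neg))
```

With an aligned positive the loss is small; with a misaligned positive it grows, which is exactly the gradient signal that shapes the node embeddings.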
no code implementations • 4 Feb 2021 • Bhagya Hettige, Weiqing Wang, Yuan-Fang Li, Suong Le, Wray Buntine
Although a point process (e.g., Hawkes process) is able to model a cascade temporal relationship, it strongly relies on a prior generative process assumption.
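The Hawkes process mentioned above models the cascade relationship through self-excitation: each past event momentarily raises the probability of the next, with exponentially decaying influence. A minimal sketch of the conditional intensity, with illustrative parameter values:

```python
import math

def hawkes_intensity(t, history, mu=0.2, alpha=0.8, beta=1.0):
    # Univariate Hawkes conditional intensity:
    #   lambda(t) = mu + alpha * sum_{t_i < t} exp(-beta * (t - t_i))
    # mu is the baseline rate; each past event t_i adds an excitation of
    # size alpha that decays at rate beta -- the "cascade" structure.
    return mu + alpha * sum(math.exp(-beta * (t - ti)) for ti in history if ti < t)

print(hawkes_intensity(1.0, []))     # baseline only: 0.2
print(hawkes_intensity(1.0, [0.0]))  # baseline + one decayed excitation
```

The "prior generative process assumption" the excerpt criticizes is visible here: the exponential kernel and its parameters are fixed in advance rather than learned from data.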
2 code implementations • 6 Jan 2021 • Tongtong Wu, Xuekai Li, Yuan-Fang Li, Reza Haffari, Guilin Qi, Yujin Zhu, Guoqiang Xu
We propose a novel curriculum-meta learning method to tackle the above two challenges in continual relation extraction.
no code implementations • Asian Chapter of the Association for Computational Linguistics 2020 • Vishwajeet Kumar, Manish Joshi, Ganesh Ramakrishnan, Yuan-Fang Li
Question generation (QG) has recently attracted considerable attention.
1 code implementation • EMNLP 2020 • Yuncheng Hua, Yuan-Fang Li, Gholamreza Haffari, Guilin Qi, Tongtong Wu
Our method achieves state-of-the-art performance on the CQA dataset (Saha et al., 2018) while using only five trial trajectories for the top-5 retrieved questions in each support set, and meta-training on tasks constructed from only 1% of the training set.
Knowledge Base Question Answering
Meta Reinforcement Learning
1 code implementation • 29 Oct 2020 • Yuncheng Hua, Yuan-Fang Li, Gholamreza Haffari, Guilin Qi, Wei Wu
However, this comes at the cost of manually labeling similar questions to learn a retrieval model, which is tedious and expensive.
1 code implementation • 29 Oct 2020 • Yuncheng Hua, Yuan-Fang Li, Guilin Qi, Wei Wu, Jingyao Zhang, Daiqing Qi
Our framework consists of a neural generator and a symbolic executor that, respectively, transform a natural-language question into a sequence of primitive actions and execute them over the knowledge base to compute the answer.
no code implementations • COLING 2020 • Xiao-Yu Guo, Yuan-Fang Li, Gholamreza Haffari
A prominent approach to this task is based on the programmer-interpreter framework, where the programmer maps the question into a sequence of reasoning actions which is then executed on the raw text by the interpreter.
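The programmer-interpreter framework can be sketched concretely: the programmer (a learned model in the paper; a hard-coded action sequence here) emits discrete reasoning actions, and a symbolic interpreter executes them. The action names and toy passage below are illustrative assumptions, not the paper's actual action vocabulary:

```python
# Toy "passage" with numbers already extracted into fields.
PASSAGE = {"touchdowns": [3, 2], "field_goals": [1, 4]}

def interpreter(actions):
    # Executes a program over the passage using a small operand stack.
    stack = []
    for op, arg in actions:
        if op == "SELECT":   # fetch a list of numbers from the passage
            stack.append(PASSAGE[arg])
        elif op == "SUM":    # reduce the top list to its sum
            stack.append(sum(stack.pop()))
        elif op == "DIFF":   # subtract the top scalar from the one below it
            b, a = stack.pop(), stack.pop()
            stack.append(a - b)
    return stack.pop()

# "How many more touchdowns than field goals were scored in total?"
program = [("SELECT", "touchdowns"), ("SUM", None),
           ("SELECT", "field_goals"), ("SUM", None),
           ("DIFF", None)]
print(interpreter(program))  # 0  (5 touchdowns - 5 field goals)
```

The hard part the neural programmer solves is predicting this action sequence from the question and raw text; the interpreter itself stays simple and exact.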
no code implementations • COLING 2020 • Sheng Bi, Xiya Cheng, Yuan-Fang Li, Yongzhen Wang, Guilin Qi
Question generation over knowledge bases (KBQG) aims at generating natural-language questions about a subgraph, i.e., a set of (connected) triples.
1 code implementation • 1 Sep 2020 • Sarkar Snigdha Sarathi Das, Mohammed Eunus Ali, Yuan-Fang Li, Yong-Bin Kang, Timos Sellis
Extensive experiments with a large number of regression techniques show that the embeddings produced by our proposed GSNE technique consistently and significantly improve the performance of the house price prediction task regardless of the downstream regression model.
no code implementations • 13 Jun 2020 • Tao He, Lianli Gao, Jingkuan Song, Jianfei Cai, Yuan-Fang Li
Despite the huge progress in scene graph generation in recent years, its long-tail distribution in object relationships remains a challenging and pestering issue.
1 code implementation • 20 May 2020 • Zhipeng Gao, Xin Xia, John Grundy, David Lo, Yuan-Fang Li
Stack Overflow has been heavily used by software developers as a popular way to seek programming-related information from peers via the internet.
Software Engineering
1 code implementation • 8 Dec 2019 • Bhagya Hettige, Yuan-Fang Li, Weiqing Wang, Suong Le, Wray Buntine
To address these limitations, we present $\mathtt{MedGraph}$, a supervised EMR embedding method that captures two types of information: (1) the visit-code associations in an attributed bipartite graph, and (2) the temporal sequencing of visits through a point process.
1 code implementation • 2 Dec 2019 • Bhagya Hettige, Yuan-Fang Li, Weiqing Wang, Wray Buntine
Graph embedding methods transform high-dimensional and complex graph contents into low-dimensional representations.
Ranked #1 on Link Prediction on Cora (nonstandard variant)
no code implementations • 8 Nov 2019 • Vishwajeet Kumar, Raktim Chaki, Sai Teja Talluri, Ganesh Ramakrishnan, Yuan-Fang Li, Gholamreza Haffari
Specifically, we propose (a) a novel hierarchical BiLSTM model with selective attention and (b) a novel hierarchical Transformer architecture, both of which learn hierarchical representations of paragraphs.
no code implementations • CONLL 2019 • Vishwajeet Kumar, Ganesh Ramakrishnan, Yuan-Fang Li
The generator is a sequence-to-sequence model that incorporates the structure and semantics of the question being generated.
no code implementations • IJCNLP 2019 • Vishwajeet Kumar, Sivaanandh Muneeswaran, Ganesh Ramakrishnan, Yuan-Fang Li
Generating syntactically and semantically valid and relevant questions from paragraphs is useful with many applications.
no code implementations • 2 Aug 2019 • Ying Yang, Michael Wybrow, Yuan-Fang Li, Tobias Czauderna, Yongqun He
Ontologies are formal representations of concepts and complex relationships among them.
1 code implementation • 1 Jul 2019 • Tao He, Yuan-Fang Li, Lianli Gao, Dongxiang Zhang, Jingkuan Song
We evaluate our framework on four public benchmark datasets, all of which show that our method is superior to other state-of-the-art methods on the tasks of object recognition and image retrieval.
1 code implementation • 2 Jan 2019 • Wei Chen, Jincai Chen, Fuhao Zou, Yuan-Fang Li, Ping Lu, Qiang Wang, Wei Zhao
The inverted index structure is amenable to GPU-based implementations, and the state-of-the-art systems such as Faiss are able to exploit the massive parallelism offered by GPUs.
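The inverted index structure the excerpt refers to partitions vectors into lists keyed by their nearest coarse centroid, so a query probes only a few lists instead of scanning the whole collection. A minimal pure-Python sketch of this IVF idea; the hand-picked centroids are illustrative, where a real system such as Faiss learns them with k-means and parallelizes the scans:

```python
def sq_dist(u, v):
    # Squared Euclidean distance between two vectors.
    return sum((a - b) ** 2 for a, b in zip(u, v))

class InvertedIndex:
    def __init__(self, centroids):
        self.centroids = centroids
        self.lists = {i: [] for i in range(len(centroids))}

    def add(self, vec_id, vec):
        # Assign the vector to the inverted list of its nearest centroid.
        nearest = min(range(len(self.centroids)),
                      key=lambda i: sq_dist(vec, self.centroids[i]))
        self.lists[nearest].append((vec_id, vec))

    def search(self, query, nprobe=1):
        # Probe only the nprobe closest lists, then scan their contents.
        probed = sorted(range(len(self.centroids)),
                        key=lambda i: sq_dist(query, self.centroids[i]))[:nprobe]
        candidates = [item for i in probed for item in self.lists[i]]
        return min(candidates, key=lambda item: sq_dist(query, item[1]))[0]

index = InvertedIndex(centroids=[(0.0, 0.0), (10.0, 10.0)])
index.add("a", (1.0, 1.0))
index.add("b", (9.0, 9.0))
print(index.search((8.0, 8.5)))  # b
```

The per-list scans are independent and embarrassingly parallel, which is what makes the structure amenable to GPU implementations.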
no code implementations • 15 Aug 2018 • Vishwajeet Kumar, Ganesh Ramakrishnan, Yuan-Fang Li
The generator is a sequence-to-sequence model that incorporates the structure and semantics of the question being generated.
no code implementations • 7 Mar 2018 • Vishwajeet Kumar, Kireeti Boorla, Yogesh Meena, Ganesh Ramakrishnan, Yuan-Fang Li
Neural network-based methods represent the state-of-the-art in question generation from text.
no code implementations • 1 Jun 2017 • Yuan-Fang Li, Ardavan Pedram
Our results suggest that smaller networks favor non-batched techniques while performance for larger networks is higher using batched operations.