no code implementations • EMNLP 2020 • Wenqiang Lei, Weixin Wang, Zhixin Ma, Tian Gan, Wei Lu, Min-Yen Kan, Tat-Seng Chua
By providing a schema linking corpus based on the Spider text-to-SQL dataset, we systematically study the role of schema linking.
1 code implementation • EMNLP 2020 • Liying Cheng, Lidong Bing, Qian Yu, Wei Lu, Luo Si
Peer review and rebuttal, with rich interactions and argumentative discussions in between, naturally form a good resource for mining arguments.
Ranked #3 on Argument Pair Extraction (APE) on RR
no code implementations • WOSP 2020 • Chenrui Guo, Haoran Cui, Li Zhang, Jiamin Wang, Wei Lu, Jian Wu
The tool is built on a Support Vector Machine (SVM) model trained on a set of 7,058 manually annotated citation context sentences, curated from 34,000 papers from the ACL Anthology.
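As a rough illustration of this kind of pipeline, the sketch below trains a linear SVM on TF-IDF features of citation context sentences. The features, labels, and toy data are illustrative assumptions, not the tool's actual configuration.

```python
# Hypothetical sketch of an SVM citation-context classifier (scikit-learn).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Toy stand-ins for manually annotated citation context sentences.
sentences = [
    "Smith et al. (2010) proposed a seminal parsing model.",   # citation context
    "Table 2 reports accuracy on the development set.",        # not a citation context
]
labels = [1, 0]

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())
clf.fit(sentences, labels)
print(clf.predict(["As shown by Kim (2018), attention helps."]))
```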
1 code implementation • 27 Nov 2023 • Sicong Leng, Yang Zhou, Mohammed Haroon Dupty, Wee Sun Lee, Sam Conrad Joyce, Wei Lu
We make multiple contributions to initiate research on this task.
no code implementations • 15 Nov 2023 • Zhikai Xue, Guoxiu He, Zhuoren Jiang, Yangyang Kang, Star Zhao, Wei Lu
In this study, we propose a novel graph neural network to Disentangle the Potential impacts of Papers into Diffusion, Conformity, and Contribution values (called DPPDCC).
no code implementations • 26 Oct 2023 • Ding Zou, Wei Lu, Zhibo Zhu, Xingyu Lu, Jun Zhou, Xiaojin Wang, KangYu Liu, Haiqing Wang, Kefan Wang, Renen Sun
The reactive module provides a self-tuning estimator of CPU utilization to the optimization model.
1 code implementation • 25 Oct 2023 • Xiaobing Sun, Jiaxi Li, Wei Lu
The underlying mechanism of neural networks in capturing precise knowledge has been the subject of consistent research efforts.
1 code implementation • 20 Oct 2023 • Haoran Li, Yiran Liu, Xingxing Zhang, Wei Lu, Furu Wei
Furthermore, we apply probabilistic ranking and contextual ranking sequentially to the instruction-tuned LLM.
no code implementations • 19 Oct 2023 • Xiang Shi, Jiawei Liu, Yinpeng Liu, Qikai Cheng, Wei Lu
The advent of Large Language Models (LLMs) has shown the potential to improve relevance and provide direct answers in web searches.
1 code implementation • 16 Oct 2023 • Yao Xiao, Lu Xu, Jiaxi Li, Wei Lu, XiaoLi Li
While prompt tuning approaches have achieved competitive performance with high efficiency, we observe that they invariably employ the same initialization process, wherein the soft prompt is either randomly initialized or derived from an existing embedding vocabulary.
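The two initialization strategies observed above can be sketched as follows; the model dimensions and the token-sampling choice are assumptions for illustration.

```python
# Minimal sketch of two soft-prompt initializations (PyTorch).
import torch

vocab_size, hidden, prompt_len = 30522, 768, 20
word_embeddings = torch.nn.Embedding(vocab_size, hidden)  # stand-in for a PLM's embedding table

# 1) Random initialization of the soft prompt.
random_prompt = torch.nn.Parameter(torch.randn(prompt_len, hidden) * 0.02)

# 2) Initialization from the existing embedding vocabulary: copy rows of sampled real tokens.
token_ids = torch.randint(0, vocab_size, (prompt_len,))
vocab_prompt = torch.nn.Parameter(word_embeddings.weight[token_ids].detach().clone())
```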
1 code implementation • 20 Sep 2023 • Wenhang Shi, Yiren Chen, Zhe Zhao, Wei Lu, Kimmo Yan, Xiaoyong Du
Therefore, we shift the attention to the current task learning stage, presenting a novel framework, C&F (Create and Find Flatness), which builds a flat training space for each task in advance.
1 code implementation • 2 Jun 2023 • Tianduo Wang, Wei Lu
Mathematical reasoning is regarded as a necessary ability for Language Models (LMs).
Ranked #1 on Math Word Problem Solving on MAWPS
1 code implementation • IEEE Transactions on Geoscience and Remote Sensing 2023 • Wei Lu, Si-Bao Chen, Jin Tang, Chris H. Q. Ding, Bin Luo
To address this problem, we propose a new and universal downsampling module named robust feature downsampling (RFD).
1 code implementation • 1 Jun 2023 • Jiaxi Li, Wei Lu
To leverage this knowledge, we propose a novel chart-based method for extracting parse trees from masked language models (LMs) without the need to train separate parsers.
1 code implementation • 29 May 2023 • Zhanming Jie, Wei Lu
To address these issues, we investigate two approaches to leverage the training data in a few-shot prompting scenario: dynamic program prompting and program distillation.
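A hedged sketch of the dynamic-prompting idea: select the training examples most similar to the test question as few-shot exemplars. The TF-IDF similarity and toy data are illustrative assumptions, not the paper's exact retrieval method.

```python
# Retrieve the most similar training problem as a few-shot exemplar.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

train = [("Tom has 3 apples and buys 2 more.", "3 + 2"),
         ("A car travels 60 km in 2 hours.", "60 / 2")]
test_q = "Sue has 5 pens and buys 4 more."

vec = TfidfVectorizer().fit([q for q, _ in train] + [test_q])
sims = cosine_similarity(vec.transform([test_q]),
                         vec.transform([q for q, _ in train]))[0]
best = train[int(sims.argmax())]
prompt = f"Q: {best[0]}\nProgram: {best[1]}\nQ: {test_q}\nProgram:"
print(prompt)
```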
1 code implementation • 28 May 2023 • Ziqi Jin, Wei Lu
The chain-of-thought (CoT) prompting methods were successful in various natural language processing (NLP) tasks thanks to their ability to unveil the underlying complex reasoning processes.
1 code implementation • 28 May 2023 • Guangtao Zeng, Peiyuan Zhang, Wei Lu
Fine-tuning pre-trained language models for multiple tasks tends to be expensive in terms of storage.
1 code implementation • 22 May 2023 • Lu Xu, Lidong Bing, Wei Lu
Distantly supervised named entity recognition (DS-NER) has been proposed to exploit the automatically labeled training data instead of human annotations.
no code implementations • 7 May 2023 • Wei Lu, Hua Ma, Tien-Ping Tan
Emotion recognition using Electroencephalogram (EEG) signals has emerged as a significant research challenge in affective computing and intelligent interaction.
no code implementations • 5 May 2023 • Jiawei Liu, Zi Xiong, Yi Jiang, Yongqiang Ma, Wei Lu, Yong Huang, Qikai Cheng
Inspired by recent advances in prompt learning, in this paper we propose Mix Prompt Tuning (MPT), a semi-supervised method that alleviates the dependence on annotated data and improves the performance of multi-granularity academic function recognition tasks with a small number of labeled examples.
1 code implementation • 16 Apr 2023 • Guoxiu He, Zhikai Xue, Zhuoren Jiang, Yangyang Kang, Star Zhao, Wei Lu
Then, a novel graph neural network, Hierarchical and Heterogeneous Contrastive Graph Learning Model (H2CGL), is proposed to incorporate heterogeneity and dynamics of the citation network.
no code implementations • 11 Apr 2023 • Wei Lu, Nic A. Lee, Markus J. Buehler
Spider webs are incredible biological structures, comprising thin but strong silk filaments arranged into complex hierarchical architectures with striking mechanical properties (e.g., lightweight but high strength, achieving diverse mechanical responses).
no code implementations • 3 Apr 2023 • Antoine Godichon-Baggioni, Wei Lu
In the context of large samples, a small number of atypical individuals can corrupt basic statistical indicators such as the mean.
1 code implementation • CVPR 2023 • ZiCheng Zhang, Wei Wu, Wei Sun, Dangyang Tu, Wei Lu, Xiongkuo Min, Ying Chen, Guangtao Zhai
User-generated content (UGC) live videos often suffer from various distortions during capture and thus exhibit diverse visual qualities.
no code implementations • 21 Feb 2023 • Fred X. Han, Keith G. Mills, Fabian Chudak, Parsa Riahi, Mohammad Salameh, Jialin Zhang, Wei Lu, Shangling Jui, Di Niu
In this paper, we propose a general-purpose neural predictor for NAS that can transfer across search spaces, by representing any given candidate Convolutional Neural Network (CNN) with a Computation Graph (CG) that consists of primitive operators.
no code implementations • 17 Feb 2023 • ZiCheng Zhang, Wei Sun, Yingjie Zhou, Wei Lu, Yucheng Zhu, Xiongkuo Min, Guangtao Zhai
Considerable effort has been put into improving the effectiveness of 3D model quality assessment (3DQA) methods.
no code implementations • 24 Jan 2023 • Yongqiang Ma, Jiawei Liu, Fan Yi, Qikai Cheng, Yong Huang, Wei Lu, Xiaozhong Liu
We find that there exists a "writing style" gap between AI-generated scientific text and human-written scientific text.
1 code implementation • 24 Dec 2022 • ZiCheng Zhang, Yingjie Zhou, Wei Sun, Wei Lu, Xiongkuo Min, Yu Wang, Guangtao Zhai
In recent years, considerable effort has been devoted to advancing the real-world application of dynamic digital humans (DDH).
1 code implementation • 3 Dec 2022 • Xinyu Wang, Jiong Cai, Yong Jiang, Pengjun Xie, Kewei Tu, Wei Lu
MoRe contains a text retrieval module and an image-based retrieval module, which retrieve related knowledge of the input text and image in the knowledge corpus respectively.
Ranked #1 on Multi-modal Named Entity Recognition on SNAP (MNER)
Named Entity Recognition
1 code implementation • 30 Nov 2022 • Keith G. Mills, Di Niu, Mohammad Salameh, Weichen Qiu, Fred X. Han, Puyuan Liu, Jialin Zhang, Wei Lu, Shangling Jui
Evaluating neural network performance is critical to deep neural network design but a costly procedure.
1 code implementation • 30 Nov 2022 • Keith G. Mills, Fred X. Han, Jialin Zhang, Fabian Chudak, Ali Safari Mamaghani, Mohammad Salameh, Wei Lu, Shangling Jui, Di Niu
In this paper, we propose GENNAPE, a Generalized Neural Architecture Performance Estimator, which is pretrained on open neural architecture benchmarks, and aims to generalize to completely unseen architectures through combined innovations in network representation, contrastive pretraining, and fuzzy clustering-based predictor ensemble.
1 code implementation • 29 Oct 2022 • Tianduo Wang, Wei Lu
Fine-tuning a pre-trained language model via the contrastive learning framework with a large amount of unlabeled sentences or labeled sentence pairs is a common way to obtain high-quality sentence representations.
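For reference, a minimal sketch of the contrastive objective commonly used in this setting (an InfoNCE-style loss over in-batch negatives); the shapes and temperature are assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def contrastive_loss(z1, z2, temperature=0.05):
    """z1, z2: (batch, dim) embeddings of two views of the same sentences."""
    z1, z2 = F.normalize(z1, dim=-1), F.normalize(z2, dim=-1)
    logits = z1 @ z2.t() / temperature      # pairwise similarities
    targets = torch.arange(z1.size(0))      # positive pairs sit on the diagonal
    return F.cross_entropy(logits, targets)

loss = contrastive_loss(torch.randn(8, 128), torch.randn(8, 128))
```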
1 code implementation • 25 Oct 2022 • Peiyuan Zhang, Wei Lu
Our experiments show that our approach leads to improved class representations, yielding significantly better results on the few-shot relation extraction task.
1 code implementation • 23 Oct 2022 • Guangtao Zeng, Wei Lu
Training a good deep learning model requires substantial data and computing resources, which makes the resulting neural model a valuable intellectual property.
1 code implementation • 22 Oct 2022 • Jiale Han, Shuai Zhao, Bo Cheng, Shengkun Ma, Wei Lu
Current prompt tuning methods mostly convert the downstream tasks to masked language modeling problems by adding cloze-style phrases and mapping all labels to verbalizations with fixed length, which has proven effective for tasks with simple label spaces.
Ranked #2 on Relation Extraction on Re-TACRED
1 code implementation • 14 Sep 2022 • Jiawei Liu, Yangyang Kang, Di Tang, Kaisong Song, Changlong Sun, XiaoFeng Wang, Wei Lu, Xiaozhong Liu
In this study, we propose an imitation adversarial attack on black-box neural passage ranking models.
no code implementations • 10 Jun 2022 • Tao Wang, ZiCheng Zhang, Wei Sun, Xiongkuo Min, Wei Lu, Guangtao Zhai
However, limited work has addressed the problem of quality assessment for computer-graphics-generated images (CG-IQA).
no code implementations • 9 Jun 2022 • Yu Fan, ZiCheng Zhang, Wei Sun, Xiongkuo Min, Wei Lu, Tao Wang, Ning Liu, Guangtao Zhai
Point cloud is one of the most widely used digital formats of 3D models, the visual quality of which is quite sensitive to distortions such as downsampling, noise, and compression.
no code implementations • 9 Jun 2022 • Wei Lu, Wei Sun, Xiongkuo Min, Wenhan Zhu, Quan Zhou, Jun He, Qiyuan Wang, ZiCheng Zhang, Tao Wang, Guangtao Zhai
In this paper, we propose a deep learning-based BIQA model for 4K content, which on one hand can recognize true and pseudo 4K content and on the other hand can evaluate their perceptual visual quality.
no code implementations • 9 Jun 2022 • ZiCheng Zhang, Wei Sun, Xiongkuo Min, Wenhan Zhu, Tao Wang, Wei Lu, Guangtao Zhai
Therefore, in this paper, we propose a no-reference deep-learning image quality assessment method based on frequency maps because the artifacts caused by SISR algorithms are quite sensitive to frequency information.
no code implementations • 9 Jun 2022 • Wei Lu, Wei Sun, Wenhan Zhu, Xiongkuo Min, ZiCheng Zhang, Tao Wang, Guangtao Zhai
In this paper, we first conduct an example experiment (i.e., the face detection task) to demonstrate that the quality of the SIs has a crucial impact on the performance of the IVSS, and then propose a saliency-based deep neural network for the blind quality assessment of the SIs, which helps the IVSS to filter out low-quality SIs and improve detection and recognition performance.
no code implementations • 28 May 2022 • Kai Hu, Yu Liu, Renhe Liu, Wei Lu, Gang Yu, Bin Fu
In the asymmetric codec, we adopt a mixed multi-path residual block (MMRB) to gradually extract weak texture features of input images, which better preserves the original facial features and avoids excessive hallucination.
no code implementations • 21 May 2022 • Zechen Liang, Yuan-Gen Wang, Wei Lu, Xiaochun Cao
Semi-Supervised Learning (SSL) has advanced classification tasks by using both labeled and unlabeled data to train a model jointly.
1 code implementation • NAACL 2022 • Xiaobing Sun, Wei Lu
Although self-attention based models such as Transformers have achieved remarkable successes on natural language processing (NLP) tasks, recent studies reveal that they have limitations on modeling sequential transformations (Hahn, 2020), which may prompt re-examinations of recurrent neural networks (RNNs) that demonstrated impressive results on handling sequential data.
1 code implementation • 29 Apr 2022 • Wei Sun, Xiongkuo Min, Wei Lu, Guangtao Zhai
The proposed model utilizes very sparse frames to extract spatial features and dense frames (i.e., the video chunk) at a very low spatial resolution to extract motion features, and thereby has low computational complexity.
Ranked #4 on Video Quality Assessment on YouTube-UGC
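The sampling idea can be sketched as follows; the frame counts, chunk length, and resolutions are illustrative assumptions.

```python
# Sparse full-resolution frames for spatial features, plus one dense
# low-resolution chunk for motion features.
import numpy as np

n_frames = 240
spatial_idx = np.linspace(0, n_frames - 1, num=8, dtype=int)  # very sparse frames
chunk_start = 100
motion_idx = np.arange(chunk_start, chunk_start + 32)         # dense contiguous chunk
# e.g., keep spatial frames at 448x448 and downscale the motion chunk to 112x112
print(spatial_idx, motion_idx[:5])
```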
1 code implementation • ACL 2022 • Zhanming Jie, Jierui Li, Wei Lu
Solving math word problems requires deductive reasoning over the quantities in the text.
Ranked #4 on Math Word Problem Solving on MathQA
1 code implementation • SemEval (NAACL) 2022 • Xinyu Wang, Yongliang Shen, Jiong Cai, Tao Wang, Xiaobin Wang, Pengjun Xie, Fei Huang, Weiming Lu, Yueting Zhuang, Kewei Tu, Wei Lu, Yong Jiang
Our system wins 10 out of 13 tracks in the MultiCoNER shared task.
Multilingual Named Entity Recognition
Named Entity Recognition
no code implementations • ICLR 2022 • Wei Lu
However, very few provable guarantees have been available for the performance of graph neural network models.
no code implementations • 29 Sep 2021 • Fred X. Han, Fabian Chudak, Keith G Mills, Mohammad Salameh, Parsa Riahi, Jialin Zhang, Wei Lu, Shangling Jui, Di Niu
Understanding and modelling the performance of neural architectures is key to Neural Architecture Search (NAS).
1 code implementation • 25 Sep 2021 • Keith G. Mills, Fred X. Han, Jialin Zhang, Seyed Saeed Changiz Rezaei, Fabian Chudak, Wei Lu, Shuo Lian, Shangling Jui, Di Niu
Neural architecture search automates neural network design and has achieved state-of-the-art results in many deep learning applications.
no code implementations • 25 Sep 2021 • Keith G. Mills, Fred X. Han, Mohammad Salameh, Seyed Saeed Changiz Rezaei, Linglong Kong, Wei Lu, Shuo Lian, Shangling Jui, Di Niu
In this paper, we propose L²NAS, which learns to intelligently optimize and update architecture hyperparameters via an actor neural network based on the distribution of high-performing architectures in the search history.
1 code implementation • EMNLP 2021 • Jiawei Liu, Kaisong Song, Yangyang Kang, Guoxiu He, Zhuoren Jiang, Changlong Sun, Wei Lu, Xiaozhong Liu
Chatbots are increasingly thriving in different domains; however, because of unexpected discourse complexity and training-data sparseness, potential distrust of them raises serious concerns.
1 code implementation • EMNLP 2021 • Yuxiang Zhou, Lejian Liao, Yang Gao, Zhanming Jie, Wei Lu
Dependency parse trees are helpful for discovering the opinion words in aspect-based sentiment analysis (ABSA).
1 code implementation • EMNLP 2021 • Jiale Han, Bo Cheng, Wei Lu
Few-shot relation extraction (FSRE) focuses on recognizing novel relations by learning with merely a handful of annotated instances.
1 code implementation • 11 Sep 2021 • Guoshun Nan, Guoqing Luo, Sicong Leng, Yao Xiao, Wei Lu
Dialogue-based relation extraction (DiaRE) aims to detect the structural information from unstructured utterances in dialogues.
1 code implementation • EMNLP 2021 • Guoshun Nan, Jiaqi Zeng, Rui Qiao, Zhijiang Guo, Wei Lu
Information Extraction (IE) aims to extract structural information from unstructured texts.
2 code implementations • 5 Jul 2021 • ZiCheng Zhang, Wei Sun, Xiongkuo Min, Tao Wang, Wei Lu, Guangtao Zhai
Therefore, many related studies such as point cloud quality assessment (PCQA) and mesh quality assessment (MQA) have been carried out to measure the visual quality degradations of 3D models.
Ranked #3 on Point Cloud Quality Assessment on WPC
1 code implementation • 30 Jun 2021 • Haoran Li, Wei Lu
In neural machine translation, cross entropy (CE) is the standard loss function in two training methods of auto-regressive models, i.e., teacher forcing and scheduled sampling.
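As a reminder of what this loss looks like under teacher forcing, the sketch below feeds gold prefixes to the decoder and averages token-level CE over gold next tokens; the dimensions are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

batch, seq_len, vocab = 4, 7, 1000
logits = torch.randn(batch, seq_len, vocab)       # decoder outputs given gold prefixes
gold = torch.randint(0, vocab, (batch, seq_len))  # gold target tokens

ce = F.cross_entropy(logits.reshape(-1, vocab), gold.reshape(-1))
```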
no code implementations • 24 Jun 2021 • Wei Lu, Lingyi Liu, Junwei Luo, Xianfeng Zhao, Yicong Zhou, Jiwu Huang
A spatial-temporal model is proposed with two components that capture spatial and temporal forgery traces, respectively, from a global perspective.
1 code implementation • CVPR 2021 • Guoshun Nan, Rui Qiao, Yao Xiao, Jun Liu, Sicong Leng, Hao Zhang, Wei Lu
Meanwhile, we introduce a dual contrastive learning approach (DCL) to better align the text and video by maximizing the mutual information (MI) between query and video clips, and the MI between start/end frames of a target moment and the others within a video, to learn more informative visual representations.
no code implementations • 19 May 2021 • Seyed Saeed Changiz Rezaei, Fred X. Han, Di Niu, Mohammad Salameh, Keith Mills, Shuo Lian, Wei Lu, Shangling Jui
Despite the empirical success of neural architecture search (NAS) in deep learning applications, the optimality, reproducibility and cost of NAS schemes remain hard to assess.
no code implementations • 8 May 2021 • Zhishun Wang, Wei Lu, Kaixin Zhang, TianHao Li, Zixi Zhao
Continuously making a profit in the stock market is a difficult task for both professional investors and individual traders.
1 code implementation • NAACL 2021 • Lu Xu, Zhanming Jie, Wei Lu, Lidong Bing
We believe this is because the two types of features, the contextual information captured by the linear sequences and the structured information captured by the dependency trees, may complement each other.
no code implementations • 1 Jan 2021 • Xiaobing Sun, Wei Lu
Based on the closed-form approximations of the hidden states, we argue that the effectiveness of the cells may be attributed to a type of sequence-level representations brought in by the gating mechanism, which enables the cells to encode sequence-level features along with token-level features.
no code implementations • 1 Jan 2021 • Guoshun Nan, Jiaqi Zeng, Rui Qiao, Wei Lu
However, in practice, the long-tailed and imbalanced data may lead to severe bias issues for deep learning models, due to very few training instances available for the tail classes.
no code implementations • 1 Jan 2021 • Thilini Cooray, Ngai-Man Cheung, Wei Lu
Our work is the first to study disentanglement learning for graph-level representations.
2 code implementations • 14 Dec 2020 • Jiawei Liu, Zhe Gao, Yangyang Kang, Zhuoren Jiang, Guoxiu He, Changlong Sun, Xiaozhong Liu, Wei Lu
Is a chatbot able to completely replace the human agent?
1 code implementation • EMNLP 2020 • Yan Zhang, Zhijiang Guo, Zhiyang Teng, Wei Lu, Shay B. Cohen, Zuozhu Liu, Lidong Bing
With the help of these strategies, we are able to train a model with fewer parameters while maintaining the model capacity.
2 code implementations • EMNLP 2020 • Jue Wang, Wei Lu
In this work, we propose novel table-sequence encoders, where two different encoders, a table encoder and a sequence encoder, are designed to help each other in the representation learning process.
Ranked #2 on Zero-shot Relation Triplet Extraction on Wiki-ZSL
Joint Entity and Relation Extraction
Named Entity Recognition
1 code implementation • EMNLP 2020 • Lu Xu, Lidong Bing, Wei Lu, Fei Huang
Such a design allows the model to extract aspect-specific opinion spans and then evaluate sentiment polarity by exploiting the extracted opinion features.
4 code implementations • EMNLP 2020 • Lu Xu, Hao Li, Wei Lu, Lidong Bing
Our observation is that the three elements within a triplet are highly related to each other, and this motivates us to build a joint model to extract such triplets using a sequence tagging approach.
Ranked #3 on Aspect Sentiment Triplet Extraction on SemEval
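As a purely illustrative example of tagging-style triplet encoding (not the paper's exact scheme), a sentence can carry per-token tags that mark an aspect span with its sentiment and point to the paired opinion span:

```python
tokens = ["The", "battery", "life", "is", "amazing"]
# aspect = "battery life" (tokens 1-2), opinion = "amazing" (token 4), polarity = POS
tags = ["O", "B-POS[4,4]", "I", "O", "O"]   # hypothetical position-aware tags
triplet = (("battery", "life"), ("amazing",), "POS")
print(list(zip(tokens, tags)), triplet)
```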
no code implementations • 8 Aug 2020 • Ziang Ma, Linyuan Wang, HaiTao Zhang, Wei Lu, Jun Yin
While remarkable progress has been made in robust visual tracking, accurate target state estimation still remains a highly challenging problem.
Ranked #11 on Semi-Supervised Video Object Segmentation on VOT2020
no code implementations • ACL 2020 • Xiaobing Sun, Wei Lu
Attention has been proven successful in many natural language processing (NLP) tasks.
2 code implementations • ACL 2020 • Guoshun Nan, Zhijiang Guo, Ivan Sekulić, Wei Lu
Document-level relation extraction requires integrating information within and across multiple sentences of a document and capturing complex interactions between inter-sentence entities.
Ranked #9 on Relation Extraction on GDA
no code implementations • LREC 2020 • Chuan Wu, Evangelos Kanoulas, Maarten de Rijke, Wei Lu
To support research on entity salience, we present a new dataset, the WikiNews Salience dataset (WN-Salience), which can be used to benchmark tasks such as entity salience detection and salient entity linking.
1 code implementation • EMNLP 2020 • Liying Cheng, Dekun Wu, Lidong Bing, Yan Zhang, Zhanming Jie, Wei Lu, Luo Si
Previous works on knowledge-to-text generation take as input a few RDF triples or key-value pairs conveying the knowledge of some entities to generate a natural language description.
Ranked #1 on KG-to-Text Generation on ENT-DESC
no code implementations • EMNLP 2020 • Yanyan Zou, Xingxing Zhang, Wei Lu, Furu Wei, Ming Zhou
The main idea is that, given an input text artificially constructed from a document, a model is pre-trained to reinstate the original document.
2 code implementations • 18 Feb 2020 • Xiao Shen, Quanyu Dai, Fu-Lai Chung, Wei Lu, Kup-Sze Choi
This motivates us to propose an adversarial cross-network deep network embedding (ACDNE) model to integrate adversarial domain adaptation with deep network embedding so as to learn network-invariant node representations that can also well preserve the network structural information.
no code implementations • 8 Feb 2020 • Yanyan Zou, Wei Lu, Xu Sun
In this paper, we propose a new task of mining commonsense facts from the raw text that describes the physical world.
1 code implementation • 4 Feb 2020 • Changyu Deng, Yizhou Wang, Can Qin, Yun Fu, Wei Lu
A small amount of training data is generated dynamically based on the DNN's prediction of the optimum.
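The loop described above might look like the following sketch, where samples are drawn near the surrogate's current predicted optimum and the expensive objective is queried there; the objective, the sampling radius, and the argmin proxy for the DNN's prediction are all assumptions.

```python
import numpy as np

def objective(x):                    # stand-in for an expensive simulation
    return float(np.sum((x - 0.3) ** 2))

X = list(np.random.uniform(-1, 1, size=(5, 2)))  # initial random designs
y = [objective(x) for x in X]

for step in range(10):
    best = X[int(np.argmin(y))]                  # proxy for the DNN's predicted optimum
    x_new = best + np.random.normal(0, 0.1, 2)   # sample near the predicted optimum
    X.append(x_new)
    y.append(objective(x_new))                   # a real surrogate DNN would be retrained here
print(min(y))
```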
no code implementations • 3 Jan 2020 • Guoxiu He, Zhe Gao, Zhuoren Jiang, Yangyang Kang, Changlong Sun, Xiaozhong Liu, Wei Lu
The nonliteral interpretation of a text is hard for machine models to understand due to its high context-sensitivity and heavy use of figurative language.
6 code implementations • 5 Nov 2019 • Haiyun Peng, Lu Xu, Lidong Bing, Fei Huang, Wei Lu, Luo Si
In this paper, we introduce a new subtask under ABSA, named aspect sentiment triplet extraction (ASTE).
Ranked #5 on Aspect Sentiment Triplet Extraction on SemEval
no code implementations • IJCNLP 2019 • Yanyan Zou, Wei Lu
We propose Text2Math, a model for semantically parsing text into math expressions.
1 code implementation • IJCNLP 2019 • Hsiu-Wei Yang, Yanyan Zou, Peng Shi, Wei Lu, Jimmy Lin, Xu Sun
Multilingual knowledge graphs (KGs), such as YAGO and DBpedia, represent entities in different languages.
1 code implementation • IJCNLP 2019 • Zhanming Jie, Wei Lu
Dependency tree structures capture long-distance and syntactic relationships between words in a sentence.
Ranked #1 on Chinese Named Entity Recognition on OntoNotes 5.0
Named Entity Recognition
no code implementations • IJCNLP 2019 • Hao Li, Wei Lu
In this work, we argue that both types of information (implicit and explicit structural information) are crucial for building a successful targeted sentiment analysis model.
1 code implementation • IJCNLP 2019 • Zhe Zhao, Hui Chen, Jinbin Zhang, Xin Zhao, Tao Liu, Wei Lu, Xi Chen, Haotang Deng, Qi Ju, Xiaoyong Du
Existing works, including ELMO and BERT, have revealed the importance of pre-training for NLP tasks.
1 code implementation • IJCNLP 2019 • Bailin Wang, Wei Lu
In medical documents, it is possible that an entity of interest not only contains a discontiguous sequence of words but also overlaps with another entity.
1 code implementation • NAACL 2019 • Yanyan Zou, Wei Lu
A pun is a form of wordplay for an intended humorous or rhetorical effect, where a word suggests two or more meanings by exploiting polysemy (homographic pun) or phonological similarity to another word (heterographic pun).
1 code implementation • ACL 2019 • Yanyan Zou, Wei Lu
An arithmetic word problem typically includes a textual description containing several constant quantities.
no code implementations • 22 Aug 2019 • Guoliang Feng, Wei Lu, Witold Pedrycz, Jianhua Yang, Xiaodong Liu
Index terms: fuzzy cognitive maps (FCMs), maximum entropy, noisy data, rapid and robust learning.
1 code implementation • TACL 2019 • Zhijiang Guo, Yan Zhang, Zhiyang Teng, Wei Lu
We focus on graph-to-sequence learning, which can be framed as transducing graph structures to sequences for text generation.
1 code implementation • ACL 2019 • Jiaqi Pan, Rishabh Bhardwaj, Wei Lu, Hai Leong Chieu, Xinghao Pan, Ni Yi Puay
In this paper, we investigate the importance of social network information compared to content information in the prediction of a Twitter user's occupational class.
1 code implementation • ACL 2019 • Ruixue Ding, Pengjun Xie, Xiaoyan Zhang, Wei Lu, Linlin Li, Luo Si
Gazetteers were shown to be useful resources for named entity recognition (NER).
2 code implementations • ACL 2019 • Zhijiang Guo, Yan Zhang, Wei Lu
Dependency trees convey rich structural information that is proven useful for extracting relations among entities in text.
Ranked #25 on Relation Extraction on TACRED
no code implementations • NAACL 2019 • Hao Li, Wei Lu, Pengjun Xie, Linlin Li
This paper introduces a new task, Chinese address parsing: the task of mapping Chinese addresses into semantically meaningful chunks.
no code implementations • NAACL 2019 • Zhanming Jie, Pengjun Xie, Wei Lu, Ruixue Ding, Linlin Li
Supervised approaches to named entity recognition (NER) are largely developed based on the assumption that the training data is fully annotated with named entity information.
no code implementations • EMNLP 2017 • Wei Yang, Wei Lu, Vincent W. Zheng
Learning word embeddings has received a significant amount of attention recently.
1 code implementation • EMNLP 2017 • Aldrian Obaja Muis, Wei Lu
We present some theoretical analysis on the differences between our model and a recently proposed model for recognizing overlapping mentions, and discuss the possible implications of the differences.
1 code implementation • 19 Oct 2018 • Zhanming Jie, Aldrian Obaja Muis, Wei Lu
It has been shown previously that such information can be used to improve the performance of NER (Sasano and Kurohashi 2008, Ling and Weld 2012).
no code implementations • 19 Oct 2018 • Aldrian Obaja Muis, Wei Lu
This paper introduces a new annotated corpus based on an existing informal text corpus: the NUS SMS Corpus (Chen and Kan, 2013).
1 code implementation • EMNLP 2016 • Aldrian Obaja Muis, Wei Lu
This paper focuses on the study of recognizing discontiguous entities.
1 code implementation • EMNLP 2018 • Bill Yuchen Lin, Wei Lu
Recent research efforts have shown that neural architectures can be effective in conventional information extraction tasks such as named entity recognition, yielding state-of-the-art results on standard newswire datasets.
1 code implementation • EMNLP 2018 • Bailin Wang, Wei Lu, Yu Wang, Hongxia Jin
It is common that entity mentions can contain other mentions recursively.
Ranked #6 on Nested Named Entity Recognition on NNE
1 code implementation • EMNLP 2018 • Bailin Wang, Wei Lu
In this work, we propose a novel segmental hypergraph representation to model overlapping entity mentions that are prevalent in many practical datasets.
Ranked #5 on Nested Named Entity Recognition on NNE
Nested Mention Recognition
no code implementations • EMNLP 2018 • Zhijiang Guo, Wei Lu
This paper introduces a simple yet effective transition-based system for Abstract Meaning Representation (AMR) parsing.
no code implementations • EMNLP 2018 • Zhanming Jie, Wei Lu
We propose a novel dependency-based hybrid tree model for semantic parsing, which converts natural language utterance into machine interpretable meaning representations.
no code implementations • 24 Aug 2018 • Sadegh Riyahi, Wookjin Choi, Chia-Ju Liu, Saad Nadeem, Shan Tan, Hualiang Zhong, Wengen Chen, Abraham J. Wu, James G. Mechalakos, Joseph O. Deasy, Wei Lu
Quantification of local metabolic tumor volume (MTV) changes after chemo-radiotherapy would allow accurate tumor response evaluation.
1 code implementation • 24 Aug 2018 • Wookjin Choi, Saad Nadeem, Sadegh Riyahi, Joseph O. Deasy, Allen Tannenbaum, Wei Lu
The spiculation quantification measures were then applied within the radiomics framework for pathological malignancy prediction with reproducible semi-automatic segmentation of nodules.
no code implementations • ACL 2018 • Hao Li, Wei Lu
We report an empirical study on the task of negation scope extraction given the negation cue.
no code implementations • ACL 2018 • Yanyan Zou, Wei Lu
In this work, we present a study to show how learning distributed representations of the logical forms from data annotated in different languages can be used for improving the performance of a monolingual semantic parser.
no code implementations • SEMEVAL 2018 • Ph, Peter i, Amila Silva, Wei Lu
This paper describes the SemEval 2018 shared task on semantic extraction from cybersecurity reports, which is introduced for the first time as a shared task on SemEval.
1 code implementation • AAAI 2018 • Bailin Wang, Wei Lu
Aspect-level sentiment classification aims at detecting the sentiment expressed towards a particular target in a sentence.
1 code implementation • ICLR 2018 • Xiang Wei, Boqing Gong, Zixia Liu, Wei Lu, Liqiang Wang
Despite being impactful on a variety of problems and applications, the generative adversarial nets (GANs) are remarkably difficult to train.
no code implementations • EMNLP 2017 • Wei Lu
Based on such a framework, we show how some seemingly complicated structured prediction models such as a semantic parsing model (Lu et al., 2008; Lu, 2014) can be implemented conveniently and quickly.
no code implementations • 28 Aug 2017 • Ruxin Wang, Wei Lu, Shijun Xiang, Xianfeng Zhao, Jinwei Wang
In this paper, a color image splicing detection approach is proposed based on Markov transition probability of quaternion component separation in quaternion discrete cosine transform (QDCT) domain and quaternion wavelet transform (QWT) domain.
no code implementations • ACL 2017 • Swee Kiat Lim, Aldrian Obaja Muis, Wei Lu, Chen Hui Ong
Cybersecurity risks and malware threats are becoming increasingly dangerous and common.
1 code implementation • ACL 2017 • Hesam Amoualian, Wei Lu, Eric Gaussier, Georgios Balikas, Massih R. Amini, Marianne Clausel
This paper presents an LDA-based model that generates topically coherent segments within documents by jointly segmenting documents and assigning topics to their words.
no code implementations • ACL 2017 • Raymond Hendy Susanto, Wei Lu
In this paper, we address semantic parsing in a multilingual context.
2 code implementations • 15 May 2017 • Wei Lu, Xiaokui Xiao, Amit Goyal, Keke Huang, Laks V. S. Lakshmanan
In a recent SIGMOD paper titled "Debunking the Myths of Influence Maximization: An In-Depth Benchmarking Study", Arora et al. [1] undertake a performance benchmarking study of several well-known algorithms for influence maximization.
Social and Information Networks
no code implementations • 5 Apr 2017 • Christina Lioma, Birger Larsen, Wei Lu
Typically, every part in most coherent text has some plausible reason for its presence, some function that it performs to the overall semantics of the text.
no code implementations • Thirtieth AAAI Conference on Artificial Intelligence 2016 • Shaosheng Cao, Wei Lu, Qiongkai Xu
Different from other previous research efforts, we adopt a random surfing model to capture graph structural information directly, instead of using the sampling-based method for generating linear sequences proposed by Perozzi et al. (2014).
2 code implementations • WWW 2015 • Shaosheng Cao, Wei Lu, Qiongkai Xu
In this paper, we present GraRep, a novel model for learning vertex representations of weighted graphs.
Ranked #1 on Node Classification on 20NEWS
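At the core of GraRep-style representations are k-step random-walk transition matrices; the toy graph and the choice of k below are illustrative, and the factorization step is only hinted at in a comment.

```python
import numpy as np

A = np.array([[0, 1, 1],
              [1, 0, 0],
              [1, 0, 0]], dtype=float)      # adjacency matrix of a toy graph
P = A / A.sum(axis=1, keepdims=True)        # 1-step transition matrix

for k in range(1, 4):
    Pk = np.linalg.matrix_power(P, k)       # k-step transition probabilities
    # GraRep factorizes a log-transformed version of each Pk (e.g., via SVD)
    # and concatenates the k-specific embeddings into the final representation.
    print(k, Pk.round(2))
```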
no code implementations • 1 Jul 2015 • Wei Lu, Wei Chen, Laks V. S. Lakshmanan
We study two natural optimization problems, Self Influence Maximization and Complementary Influence Maximization, in a novel setting with complementary entities.
Social and Information Networks; Physics and Society; H.2.8
no code implementations • 17 Jun 2014 • Peter F. Schultz, Dylan M. Paiton, Wei Lu, Garrett T. Kenyon
We find, for example, that for 16x16-pixel receptive fields, using eight kernels and a stride of 2 leads to sparse reconstructions of comparable quality as using 512 kernels and a stride of 16 (the nonoverlapping case).