Search Results for author: Qin Lu

Found 37 papers, 2 papers with code

Spatial-Assistant Encoder-Decoder Network for Real Time Semantic Segmentation

1 code implementation • 19 Sep 2023 Yalun Wang, Shidong Chen, Huicong Bian, Weixiao Li, Qin Lu

To ascertain the effectiveness of our approach, we evaluated our SANet model, which achieved competitive results on the real-time CamVid and Cityscapes datasets.

Real-Time Semantic Segmentation • Self-Driving Cars

Towards Artificial General Intelligence (AGI) in the Internet of Things (IoT): Opportunities and Challenges

no code implementations • 14 Sep 2023 Fei Dou, Jin Ye, Geng Yuan, Qin Lu, Wei Niu, Haijian Sun, Le Guan, Guoyu Lu, Gengchen Mai, Ninghao Liu, Jin Lu, Zhengliang Liu, Zihao Wu, Chenjiao Tan, Shaochen Xu, Xianqiao Wang, Guoming Li, Lilong Chai, Sheng Li, Jin Sun, Hongyue Sun, Yunli Shao, Changying Li, Tianming Liu, WenZhan Song

Artificial General Intelligence (AGI), possessing the capacity to comprehend, learn, and execute tasks with human cognitive abilities, engenders significant anticipation and intrigue across scientific, commercial, and societal arenas.

Decision Making

Recipes for Sequential Pre-training of Multilingual Encoder and Seq2Seq Models

no code implementations • 14 Jun 2023 Saleh Soltan, Andy Rosenbaum, Tobias Falke, Qin Lu, Anna Rumshisky, Wael Hamza

(2) Conversely, using an encoder to warm-start seq2seq training, we show that by unfreezing the encoder partway through training, we can match task performance of a from-scratch seq2seq model.

Language Modelling • Masked Language Modeling

Weighted Ensembles for Active Learning with Adaptivity

no code implementations • 10 Jun 2022 Konstantinos D. Polyzos, Qin Lu, Georgios B. Giannakis

Labeled data can be expensive to acquire in several application domains, including medical imaging, robotics, and computer vision.

Active Learning

Robust and Adaptive Temporal-Difference Learning Using An Ensemble of Gaussian Processes

no code implementations • 1 Dec 2021 Qin Lu, Georgios B. Giannakis

Value function approximation is a crucial module for policy evaluation in reinforcement learning when the state space is large or continuous.

Gaussian Processes
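For context, the classical baseline that this paper's GP-ensemble approach builds on is temporal-difference policy evaluation with function approximation. A minimal sketch of plain TD(0) with a linear value function V(s) ≈ w·φ(s) is shown below; the feature map and episode format are hypothetical illustrations, not the paper's method.

```python
def td0_linear(episodes, features, alpha=0.1, gamma=0.9, dim=2):
    """TD(0) policy evaluation with linear value-function approximation.

    episodes: iterable of episodes, each a list of (state, reward, next_state)
    features: maps a state to a length-`dim` feature vector (terminal
              states should map to all zeros so their value is 0)
    Returns the learned weight vector w with V(s) ~= w . features(s).
    """
    w = [0.0] * dim
    for episode in episodes:
        for s, r, s_next in episode:
            phi, phi_next = features(s), features(s_next)
            v = sum(wi * fi for wi, fi in zip(w, phi))
            v_next = sum(wi * fi for wi, fi in zip(w, phi_next))
            delta = r + gamma * v_next - v      # TD error
            # move w along the feature direction, scaled by the TD error
            w = [wi + alpha * delta * fi for wi, fi in zip(w, phi)]
    return w
```

On a two-state chain where state 1 yields reward 1 and terminates, the learned values converge to V(1) ≈ 1 and V(0) ≈ γ·V(1).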

Incremental Ensemble Gaussian Processes

no code implementations • 13 Oct 2021 Qin Lu, Georgios V. Karanikolas, Georgios B. Giannakis

Belonging to the family of Bayesian nonparametrics, Gaussian process (GP) based approaches have well-documented merits not only in learning over a rich class of nonlinear functions, but also in quantifying the associated uncertainty.

Dimensionality Reduction • Gaussian Processes

PolyU CBS-Comp at SemEval-2021 Task 1: Lexical Complexity Prediction (LCP)

no code implementations SEMEVAL 2021 Rong Xiang, Jinghang Gu, Emmanuele Chersoni, Wenjie Li, Qin Lu, Chu-Ren Huang

In this contribution, we describe the system presented by the PolyU CBS-Comp Team for Task 1 of SemEval 2021, where the goal was to estimate the complexity of words in a given sentence context.

Lexical Complexity Prediction • Sentence +1

Sina Mandarin Alphabetical Words: A Web-driven Code-mixing Lexical Resource

no code implementations Asian Chapter of the Association for Computational Linguistics 2020 Rong Xiang, Mingyu Wan, Qi Su, Chu-Ren Huang, Qin Lu

Mandarin Alphabetical Word (MAW) is one indispensable component of Modern Chinese that demonstrates unique code-mixing idiosyncrasies influenced by language exchanges.

Automatic Learning of Modality Exclusivity Norms with Crosslingual Word Embeddings

no code implementations Joint Conference on Lexical and Computational Semantics 2020 Emmanuele Chersoni, Rong Xiang, Qin Lu, Chu-Ren Huang

Our experiments focused on crosslingual word embeddings, in order to predict modality association scores by training on a high-resource language and testing on a low-resource one.

Word Embeddings

Affection Driven Neural Networks for Sentiment Analysis

no code implementations LREC 2020 Rong Xiang, Yunfei Long, Mingyu Wan, Jinghang Gu, Qin Lu, Chu-Ren Huang

Deep neural network models have played a critical role in sentiment analysis with promising results in the recent decade.

Sentiment Analysis

Dual Memory Network Model for Biased Product Review Classification

no code implementations WS 2018 Yunfei Long, Mingyu Ma, Qin Lu, Rong Xiang, Chu-Ren Huang

In this work, we propose a dual user and product memory network (DUPMN) model to learn user profiles and product reviews using separate memory networks.

Classification • General Classification +1

Fake News Detection Through Multi-Perspective Speaker Profiles

no code implementations IJCNLP 2017 Yunfei Long, Qin Lu, Rong Xiang, Minglei Li, Chu-Ren Huang

This paper proposes a novel method to incorporate speaker profiles into an attention based LSTM model for fake news detection.

Fake News Detection

A Cognition Based Attention Model for Sentiment Analysis

no code implementations EMNLP 2017 Yunfei Long, Qin Lu, Rong Xiang, Minglei Li, Chu-Ren Huang

Evaluations show the CBA based method outperforms the state-of-the-art local context based attention methods significantly.

Feature Engineering • Product Recommendation +1

Leveraging Eventive Information for Better Metaphor Detection and Classification

no code implementations CONLL 2017 I-Hsuan Chen, Yunfei Long, Qin Lu, Chu-Ren Huang

We propose a set of syntactic conditions crucial to event structures to improve the model based on the classification of radical groups.

Classification • Clustering +1

Unsupervised Measure of Word Similarity: How to Outperform Co-occurrence and Vector Cosine in VSMs

no code implementations • 30 Mar 2016 Enrico Santus, Tin-Shing Chiu, Qin Lu, Alessandro Lenci, Chu-Ren Huang

In this paper, we claim that vector cosine, which is generally considered among the most efficient unsupervised measures for identifying word similarity in Vector Space Models, can be outperformed by an unsupervised measure that calculates the extent of the intersection among the most mutually dependent contexts of the target words.

Word Similarity
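The abstract describes a similarity score based on the overlap of the words' most mutually dependent contexts. A rough sketch of such a rank-weighted intersection measure is below; the top-n cutoff and the specific rank weighting are assumptions in the spirit of the description, not the paper's exact formula.

```python
from typing import Dict, List

def top_contexts(assoc: Dict[str, float], n: int) -> List[str]:
    """Return the n contexts most associated with a word,
    ranked by association score (e.g. PPMI), highest first."""
    return sorted(assoc, key=assoc.get, reverse=True)[:n]

def intersection_similarity(assoc_a: Dict[str, float],
                            assoc_b: Dict[str, float],
                            n: int = 3) -> float:
    """Score two words by the overlap of their top-n context lists.

    Each shared context contributes 1 / (average rank), so contexts
    that rank high for both words count more (a hypothetical,
    APSyn-style weighting).
    """
    top_a = top_contexts(assoc_a, n)
    top_b = top_contexts(assoc_b, n)
    score = 0.0
    for c in set(top_a) & set(top_b):
        avg_rank = (top_a.index(c) + top_b.index(c)) / 2 + 1
        score += 1.0 / avg_rank
    return score
```

Words with no shared top contexts score 0, so the measure needs no vector cosine and no supervision, only the ranked context lists.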

Nine Features in a Random Forest to Learn Taxonomical Semantic Relations

1 code implementation LREC 2016 Enrico Santus, Alessandro Lenci, Tin-Shing Chiu, Qin Lu, Chu-Ren Huang

When the classification is binary, ROOT9 achieves the following results against the baseline: hypernyms-co-hyponyms 95.7% vs. 69.8%, hypernyms-random 91.8% vs. 64.1% and co-hyponyms-random 97.8% vs. 79.4%.

General Classification

What a Nerd! Beating Students and Vector Cosine in the ESL and TOEFL Datasets

no code implementations LREC 2016 Enrico Santus, Tin-Shing Chiu, Qin Lu, Alessandro Lenci, Chu-Ren Huang

In this paper, we claim that Vector Cosine, which is generally considered one of the most efficient unsupervised measures for identifying word similarity in Vector Space Models, can be outperformed by a completely unsupervised measure that evaluates the extent of the intersection among the most associated contexts of two target words, weighting such intersection according to the rank of the shared contexts in the dependency ranked lists.

Word Similarity

ROOT13: Spotting Hypernyms, Co-Hyponyms and Randoms

no code implementations • 29 Mar 2016 Enrico Santus, Tin-Shing Chiu, Qin Lu, Alessandro Lenci, Chu-Ren Huang

In this paper, we describe ROOT13, a supervised system for the classification of hypernyms, co-hyponyms and random words.

Classification • General Classification
