no code implementations • 19 May 2023 • Xi Yang, Hang Li, Qinghua Guo, J. Andrew Zhang, Xiaojing Huang, Zhiqun Cheng
In this work, we study sensing-aided uplink transmission in an integrated sensing and communication (ISAC) vehicular network with the use of orthogonal time frequency space (OTFS) modulation.
no code implementations • 11 Mar 2023 • Xiaoying Zhang, Junpu Chen, Hongning Wang, Hong Xie, Hang Li
Off-policy learning, referring to the procedure of policy optimization with access only to logged feedback data, has shown importance in various real-world applications, such as search engines and recommender systems.
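The logged-feedback setting above can be illustrated with the classic inverse-propensity-scoring (IPS) estimator, a standard off-policy primitive. This is a minimal sketch, not the paper's algorithm; the toy log and `greedy` policy are hypothetical.

```python
def ips_estimate(logged, target_policy):
    """Inverse-propensity-scoring (IPS) estimate of a target policy's
    value from logged feedback, the basic off-policy primitive.

    `logged` holds (context, action, reward, logging_prob) tuples;
    `target_policy(context, action)` gives the target policy's
    probability of taking `action` in `context`.
    """
    total = 0.0
    for context, action, reward, logging_prob in logged:
        weight = target_policy(context, action) / logging_prob
        total += weight * reward
    return total / len(logged)

# Toy log: one context, two actions, logged by a uniform (prob 0.5) policy.
logged = [("q", "a1", 1.0, 0.5), ("q", "a2", 0.0, 0.5)]
# Hypothetical target policy that always picks the rewarded action.
greedy = lambda ctx, act: 1.0 if act == "a1" else 0.0
print(ips_estimate(logged, greedy))  # 1.0
```

Reweighting by the ratio of target to logging probabilities is what lets the estimate be computed from the log alone, without deploying the new policy.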
no code implementations • 3 Mar 2023 • Shuai Ma, Weining Qiao, Youlong Wu, Hang Li, Guangming Shi, Dahua Gao, Yuanming Shi, Shiyin Li, Naofal Al-Dhahir
Instead of broadcasting all extracted features, the semantic encoder extracts the disentangled semantic features, and then only the users' intended semantic features are selected for broadcasting, which can further improve the transmission efficiency.
no code implementations • 27 Feb 2023 • Shuai Ma, Weining Qiao, Youlong Wu, Hang Li, Guangming Shi, Dahua Gao, Yuanming Shi, Shiyin Li, Naofal Al-Dhahir
Furthermore, based on the $\beta$-variational autoencoder ($\beta$-VAE), we propose a practical explainable semantic communication system design, which simultaneously achieves semantic feature selection and is robust against semantic channel noise.
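The $\beta$-VAE objective such a design builds on is the usual ELBO with a $\beta$-weighted KL term. A minimal sketch for a diagonal-Gaussian posterior follows; the helper name and toy numbers are illustrative, not from the paper.

```python
import math

def beta_vae_loss(recon_error, mu, log_var, beta=4.0):
    """beta-VAE objective: reconstruction error plus beta times the
    KL divergence between the diagonal-Gaussian posterior
    N(mu, diag(exp(log_var))) and a standard normal prior.
    Setting beta > 1 pressures the latent code toward disentangled
    semantic features."""
    kl = 0.5 * sum(math.exp(lv) + m * m - 1.0 - lv
                   for m, lv in zip(mu, log_var))
    return recon_error + beta * kl

# When the posterior equals the prior (mu = 0, log_var = 0), the KL
# term vanishes and only the reconstruction error remains.
print(beta_vae_loss(0.25, [0.0, 0.0], [0.0, 0.0]))  # 0.25
```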
no code implementations • 6 Feb 2023 • Chengyi Liu, Wenqi Fan, Yunqing Liu, Jiatong Li, Hang Li, Hui Liu, Jiliang Tang, Qing Li
Given the great success of diffusion models in image generation, increasing efforts have been made to leverage these techniques to advance graph generation in recent years.
no code implementations • 2 Feb 2023 • Lingli He, Jiahui Sun, Yiwei Gao, Bin Li, Yuhang Wang, Yanli Dong, Weidong An, Hang Li, Bei Yang, Yuhan Ge, Xuejun Cai Zhang, Yun Stone Shi, Yan Zhao
Glutamate-gated kainate receptors (KARs) are ubiquitous in the central nervous system of vertebrates, mediate synaptic transmission at the post-synapse, and modulate transmitter release at the pre-synapse.
1 code implementation • 13 Jan 2023 • Xiaoying Zhang, Hongning Wang, Hang Li
This calls for a fine-grained understanding of a user's preferences over items, where one needs to recognize whether the user's choice is driven by the quality of the item itself or by the pre-selected attributes of the item.
1 code implementation • 12 Jan 2023 • Xinsong Zhang, Yan Zeng, Jipeng Zhang, Hang Li
One is to stop gradients from the vision-language training when learning the language encoder.
Ranked #3 on Visual Grounding on RefCOCO+ test B
no code implementations • 23 Dec 2022 • Hang Li, Jindong Gu, Rajat Koner, Sahand Sharifzadeh, Volker Tresp
In this work, we argue that the best text or caption for a given image is the text which would generate the image which is the most similar to that image.
no code implementations • 21 Dec 2022 • Shuai Ma, Jing Wang, Chun Du, Hang Li, Xiaodong Liu, Youlong Wu, Naofal Al-Dhahir, Shiyin Li
To address this challenge, we propose an alternating optimization algorithm to obtain the transmit beamforming and the PD orientation.
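Alternating optimization of two coupled variable blocks, as used here for the transmit beamforming and the PD orientation, can be sketched generically. The toy objective below has closed-form per-block minimizers, as the subproblems in such designs often do; it is illustrative only, not the paper's actual formulation.

```python
def alternate_minimize(f, x, y, argmin_x, argmin_y, iters=20):
    """Generic alternating optimization: fix y and minimize over x,
    then fix x and minimize over y, repeating for a fixed budget.
    `argmin_x(y)` / `argmin_y(x)` return the exact block minimizers."""
    for _ in range(iters):
        x = argmin_x(y)
        y = argmin_y(x)
    return x, y, f(x, y)

# Toy objective f(x, y) = (x - y)^2 + (y - 3)^2, whose block
# subproblems have closed-form solutions x* = y and y* = (x + 3) / 2.
f = lambda x, y: (x - y) ** 2 + (y - 3) ** 2
x, y, val = alternate_minimize(f, 0.0, 0.0,
                               lambda y: y, lambda x: (x + 3) / 2)
print(round(x, 3), round(y, 3), round(val, 6))  # 3.0 3.0 0.0
```

Each sweep is guaranteed not to increase the objective, which is why alternating schemes are a common workhorse for otherwise non-convex joint designs.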
1 code implementation • 21 Dec 2022 • Bevan Koopman, Ahmed Mourad, Hang Li, Anton van der Vegt, Shengyao Zhuang, Simon Gibson, Yash Dang, David Lawrence, Guido Zuccon
On the basis of these needs we release an information retrieval test collection comprising real questions, a large collection of scientific documents split in passages, and ground truth relevance assessments indicating which passages are relevant to each question.
1 code implementation • 18 Dec 2022 • Shuai Wang, Hang Li, Guido Zuccon
One challenge to creating an effective systematic review Boolean query is the selection of effective MeSH Terms to include in the query.
1 code implementation • 22 Nov 2022 • Yan Zeng, Xinsong Zhang, Hang Li, Jiawei Wang, Jipeng Zhang, Wangchunshu Zhou
Moreover, we show that the modular design of X$^2$-VLM results in high transferability, allowing X$^2$-VLM to be utilized in any language or domain.
Ranked #1 on Cross-Modal Retrieval on Flickr30k
no code implementations • 17 Nov 2022 • Yuanshun Yao, Chong Wang, Hang Li
The key idea is to train a surrogate model to learn the effect of removing a subset of user history on the recommendation.
1 code implementation • 6 Oct 2022 • Zhaowei Zhu, Yuanshun Yao, Jiankai Sun, Hang Li, Yang Liu
Our theoretical analyses show that directly using proxy models can give a false sense of (un)fairness.
no code implementations • 14 Aug 2022 • Wenyan Liu, Juncheng Wan, Xiaoling Wang, Weinan Zhang, Dell Zhang, Hang Li
In this paper, we investigate fast machine unlearning techniques for recommender systems that can remove the effect of a small amount of training data from the recommendation model without incurring the full cost of retraining.
1 code implementation • 13 Jun 2022 • Hang Li, Qadeer Khan, Volker Tresp, Daniel Cremers
The human brain can be considered to be a graphical structure comprising tens of billions of biological neurons connected by synapses.
1 code implementation • 3 Jun 2022 • Tong Liu, Yushan Liu, Marcel Hildebrandt, Mitchell Joblin, Hang Li, Volker Tresp
We investigate the calibration of graph neural networks for node classification, study the effect of existing post-processing calibration methods, and analyze the influence of model capacity, graph density, and a new loss function on calibration.
1 code implementation • 16 May 2022 • Fei Huang, Hao Zhou, Yang Liu, Hang Li, Minlie Huang
Non-autoregressive Transformers (NATs) significantly reduce the decoding latency by generating all tokens in parallel.
no code implementations • 12 May 2022 • Hang Li, Ahmed Mourad, Bevan Koopman, Guido Zuccon
Pseudo-Relevance Feedback (PRF) assumes that the top results retrieved by a first-stage ranker are relevant to the original query and uses them to improve the query representation for a second round of retrieval.
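The classic Rocchio formulation of PRF captures this idea in a few lines: pull the query vector toward the centroid of the top-ranked documents before a second retrieval round. This is a sketch over toy vectors; `alpha` and `beta` are the usual interpolation weights, not values from the paper.

```python
def rocchio_prf(query_vec, top_doc_vecs, alpha=1.0, beta=0.75):
    """Rocchio-style pseudo-relevance feedback: assume the top-ranked
    documents are relevant and move the query vector toward their
    centroid, then retrieve again with the refined query."""
    dim = len(query_vec)
    centroid = [sum(doc[i] for doc in top_doc_vecs) / len(top_doc_vecs)
                for i in range(dim)]
    return [alpha * query_vec[i] + beta * centroid[i] for i in range(dim)]

# The refined query gains weight on dimensions shared by the feedback
# documents (here, the second dimension).
q = [1.0, 0.0]
docs = [[1.0, 1.0], [1.0, 3.0]]
print(rocchio_prf(q, docs))  # [1.75, 1.5]
```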
no code implementations • 30 Apr 2022 • Hang Li, Shuai Wang, Shengyao Zhuang, Ahmed Mourad, Xueguang Ma, Jimmy Lin, Guido Zuccon
In this paper we consider the problem of combining the relevance signals from sparse and dense retrievers in the context of Pseudo Relevance Feedback (PRF).
1 code implementation • 10 Apr 2022 • Yu Kang, Tianqiao Liu, Hang Li, Yang Hao, Wenbiao Ding
Our pre-training framework consists of the following components: (1) Intra-modal Denoising Auto-Encoding (IDAE), which is able to reconstruct input text (audio) representations from a noisy version of itself.
1 code implementation • 1 Apr 2022 • Shengyao Zhuang, Hang Li, Guido Zuccon
We then exploit such historic implicit interactions to improve the effectiveness of a dense retriever (DR). A key challenge that we study is the effect that biases in the click signal, such as position bias, have on DRs.
2 code implementations • 20 Mar 2022 • Zhixuan Liu, ZiHao Wang, Yuan Lin, Hang Li
Deep neural networks, empowered by pre-trained language models, have achieved remarkable results in natural language understanding (NLU) tasks.
no code implementations • 2 Mar 2022 • Yuanshun Yao, Chong Wang, Hang Li
Modern recommender systems face an increasing need to explain their recommendations.
no code implementations • 1 Mar 2022 • Jiabao Wang, Yang Li, Xiu-Shen Wei, Hang Li, Zhuang Miao, Rui Zhang
Unsupervised learning technology has caught up with or even surpassed supervised learning technology in general object classification (GOC) and person re-identification (re-ID).
1 code implementation • 21 Dec 2021 • Hao Peng, Hang Li, Lei Hou, Juanzi Li, chao qiao
We also develop a dataset for the problem using an existing MKB.
1 code implementation • 13 Dec 2021 • Hang Li, Shengyao Zhuang, Ahmed Mourad, Xueguang Ma, Jimmy Lin, Guido Zuccon
Finally, we contribute a study of the generalisability of the ANCE-PRF method when dense retrievers other than ANCE are used for the first round of retrieval and for encoding the PRF signal.
no code implementations • NeurIPS 2021 • Haoyang Li, Xin Wang, Ziwei Zhang, Zehuan Yuan, Hang Li, Wenwu Zhu
Then we propose a novel factor-wise discrimination objective in a contrastive learning manner, which can force the factorized representations to independently reflect the expressive information from different latent factors.
1 code implementation • 16 Nov 2021 • Yan Zeng, Xinsong Zhang, Hang Li
Most existing methods in vision language pre-training rely on object-centric features extracted through object detection and make fine-grained alignments between the extracted features and texts.
Ranked #1 on Image Retrieval on Flickr30K 1K test
2 code implementations • 14 Oct 2021 • Feng Wang, Tao Kong, Rufeng Zhang, Huaping Liu, Hang Li
To solve this problem, we propose to maximize the mutual information between the input and the class predictions.
Ranked #1 on Image Classification on Oxford-IIIT Pet Dataset
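The mutual-information objective mentioned above decomposes as I(input; prediction) = H(marginal prediction) minus the mean per-sample entropy. A toy plug-in estimate of that quantity follows; it is illustrative only, not the paper's training loss.

```python
import math

def entropy(p):
    """Shannon entropy of a discrete distribution, in nats."""
    return -sum(pi * math.log(pi) for pi in p if pi > 0)

def mutual_information(pred_probs):
    """Plug-in estimate of I(input; class prediction):
    H(marginal prediction) minus the mean per-sample entropy.
    Maximizing it rewards confident per-sample predictions spread
    evenly across classes, discouraging a collapsed classifier."""
    n, k = len(pred_probs), len(pred_probs[0])
    marginal = [sum(p[j] for p in pred_probs) / n for j in range(k)]
    conditional = sum(entropy(p) for p in pred_probs) / n
    return entropy(marginal) - conditional

peaked = [[1.0, 0.0], [0.0, 1.0]]      # confident and balanced
collapsed = [[1.0, 0.0], [1.0, 0.0]]   # every input mapped to class 0
print(round(mutual_information(peaked), 4))     # 0.6931
print(round(mutual_information(collapsed), 4))  # 0.0
```

The balanced, confident assignment attains the maximum log(2) nats for two classes, while the collapsed one scores zero, which is the degenerate solution the objective rules out.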
no code implementations • 27 Sep 2021 • Volker Tresp, Sahand Sharifzadeh, Hang Li, Dario Konopatzki, Yunpu Ma
Although memory appears to be about the past, its main purpose is to support the agent in the present and the future.
1 code implementation • ACL 2022 • Xueqing Wu, Jiacheng Zhang, Hang Li
We first employ a seq2seq model fine-tuned from a pre-trained language model to perform the task.
1 code implementation • EMNLP 2021 • Hang Li, Yu Kang, Tianqiao Liu, Wenbiao Ding, Zitao Liu
Existing audio-language task-specific predictive approaches focus on building complicated late-fusion mechanisms.
no code implementations • Findings (EMNLP) 2021 • Tao Wang, Chengqi Zhao, Mingxuan Wang, Lei LI, Hang Li, Deyi Xiong
This paper presents Self-correcting Encoding (Secoco), a framework that effectively deals with input noise for robust neural machine translation by introducing self-correcting predictors.
1 code implementation • 25 Aug 2021 • Hang Li, Ahmed Mourad, Shengyao Zhuang, Bevan Koopman, Guido Zuccon
Text-based PRF results show that the use of PRF had a mixed effect on deep rerankers across different datasets.
1 code implementation • 15 Jul 2021 • Hang Li, Yu Kang, Yang Hao, Wenbiao Ding, Zhongqin Wu, Zitao Liu
The quality of vocal delivery is one of the key indicators for evaluating teacher enthusiasm, which has been widely accepted to be connected to overall course quality.
1 code implementation • 15 Jul 2021 • Yang Hao, Hang Li, Wenbiao Ding, Zhongqin Wu, Jiliang Tang, Rose Luckin, Zitao Liu
In this work, we study computational approaches to detect online dialogic instructions, which are widely used to help students understand learning materials, and build effective study habits.
no code implementations • 15 Jul 2021 • Jiahao Chen, Hang Li, Wenbiao Ding, Zitao Liu
In this paper, we propose a simple yet effective solution to build practical teacher recommender systems for online one-on-one classes.
1 code implementation • 13 Jul 2021 • Rajat Koner, Hang Li, Marcel Hildebrandt, Deepan Das, Volker Tresp, Stephan Günnemann
We conduct an experimental study on the challenging dataset GQA, based on both manually curated and automatically generated scene graphs.
1 code implementation • 7 Jul 2021 • Xiaohan Xing, Yuenan Hou, Hang Li, Yixuan Yuan, Hongsheng Li, Max Q. -H. Meng
With the contribution of the CCD and CRP, our CRCKD algorithm can distill the relational knowledge more comprehensively.
no code implementations • 18 Mar 2021 • Aili Shen, Meladel Mistica, Bahar Salehi, Hang Li, Timothy Baldwin, Jianzhong Qi
While pretrained language models ("LM") have driven impressive gains over morpho-syntactic and semantic tasks, their ability to model discourse and pragmatic phenomena is less clear.
1 code implementation • 2 Feb 2021 • Sensong An, Bowen Zheng, Mikhail Y. Shalaginov, Hong Tang, Hang Li, Li Zhou, Yunxi Dong, Mohammad Haerinia, Anuradha Murthy Agarwal, Clara Rivero-Baleine, Myungkoo Kang, Kathleen A. Richardson, Tian Gu, Juejun Hu, Clayton Fowler, Hualiang Zhang
Metasurfaces have provided a novel and promising platform for the realization of compact and large-scale optical devices.
1 code implementation • ACL 2021 • Yue Feng, Yang Wang, Hang Li
This paper is concerned with dialogue state tracking (DST) in a task-oriented dialogue system.
Ranked #1 on Classification on SGD
1 code implementation • EMNLP 2021 • Tianqiao Liu, Qiang Fang, Wenbiao Ding, Hang Li, Zhongqin Wu, Zitao Liu
There is an increasing interest in the use of mathematical word problem (MWP) generation in educational assessment.
no code implementations • Findings (ACL) 2021 • Xinsong Zhang, Pengshuai Li, Hang Li
In fact, both fine-grained and coarse-grained tokenizations have advantages and disadvantages for learning of pre-trained language models.
no code implementations • 17 Jul 2020 • Hang Li, Dong Wei, Shilei Cao, Kai Ma, Liansheng Wang, Yefeng Zheng
If a superpixel intersects with the annotation boundary, we consider a high probability of uncertain labeling within this area.
no code implementations • 2 Jul 2020 • Marcel Hildebrandt, Hang Li, Rajat Koner, Volker Tresp, Stephan Günnemann
We propose a novel method that approaches the task by performing context-driven, sequential reasoning based on the objects and their semantic and spatial relationships present in the scene.
1 code implementation • ACL 2020 • Hayate Iso, chao qiao, Hang Li
We propose a novel text editing task, referred to as "fact-based text editing", in which the goal is to revise a given document to better describe the facts in a knowledge base (e.g., several triples).
Ranked #1 on Fact-based Text Editing on WebEdit
no code implementations • 21 May 2020 • Hang Li, Chen Ma, Wei Xu, Xue Liu
Building compact convolutional neural networks (CNNs) with reliable performance is a critical but challenging task, especially when deploying them in real-world applications.
no code implementations • 15 May 2020 • Hang Li, Zhiwei Wang, Jiliang Tang, Wenbiao Ding, Zitao Liu
Classroom activity detection (CAD) aims at accurately recognizing speaker roles (either teacher or student) in classrooms.
5 code implementations • ACL 2020 • Shaohua Zhang, Haoran Huang, Jicong Liu, Hang Li
A state-of-the-art method for the task selects a character from a list of candidates for correction (including non-correction) at each position of the sentence on the basis of BERT, the language representation model.
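The per-position candidate-selection scheme can be sketched as follows. The vocabulary-lookup scorer below is a hypothetical stand-in: a real system scores each candidate with a masked language model such as BERT.

```python
def correct_sentence(sentence, candidates, score):
    """Per-position correction: at each character position, choose
    the candidate (the original character is always included, i.e.
    the non-correction option) that the scoring function ranks
    highest in the current context."""
    chars = list(sentence)
    for i, ch in enumerate(chars):
        options = candidates.get(ch, []) + [ch]
        chars[i] = max(options, key=lambda c: score(chars, i, c))
    return "".join(chars)

# Hypothetical toy scorer standing in for BERT: it simply rewards the
# candidate that turns the string into a known word.
vocab = {"cat", "dog"}
def score(chars, i, cand):
    trial = chars[:]
    trial[i] = cand
    return 1.0 if "".join(trial) in vocab else 0.0

print(correct_sentence("cet", {"e": ["a", "o"]}, score))  # cat
```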
no code implementations • 21 Mar 2020 • Hang Li, Wenbiao Ding, Zitao Liu
We conduct a wide range of offline and online experiments to demonstrate the effectiveness of our approach.
1 code implementation • 1 Jan 2020 • Sensong An, Bowen Zheng, Mikhail Y. Shalaginov, Hong Tang, Hang Li, Li Zhou, Jun Ding, Anuradha Murthy Agarwal, Clara Rivero-Baleine, Myungkoo Kang, Kathleen A. Richardson, Tian Gu, Juejun Hu, Clayton Fowler, Hualiang Zhang
Metasurfaces have shown promising potentials in shaping optical wavefronts while remaining compact compared to bulky geometric optics devices.
no code implementations • 22 Oct 2019 • Hang Li, Yu Kang, Wenbiao Ding, Song Yang, Songfan Yang, Gale Yan Huang, Zitao Liu
The experimental results demonstrate the benefits of our approach on learning attention based neural network from classroom data with different modalities, and show our approach is able to outperform state-of-the-art baselines in terms of various evaluation metrics.
no code implementations • 1 Sep 2019 • Jiahao Chen, Hang Li, Wenxin Wang, Wenbiao Ding, Gale Yan Huang, Zitao Liu
To warn the unqualified instructors and ensure the overall education quality, we build a monitoring and alerting system by utilizing multimodal information from the online environment.
no code implementations • 13 Aug 2019 • Sensong An, Bowen Zheng, Hong Tang, Mikhail Y. Shalaginov, Li Zhou, Hang Li, Tian Gu, Juejun Hu, Clayton Fowler, Hualiang Zhang
Metasurfaces have enabled precise electromagnetic wave manipulation with strong potential to obtain unprecedented functionalities and multifunctional behavior in flat optical devices.
no code implementations • 8 Jun 2019 • Sensong An, Clayton Fowler, Bowen Zheng, Mikhail Y. Shalaginov, Hong Tang, Hang Li, Li Zhou, Jun Ding, Anuradha Murthy Agarwal, Clara Rivero-Baleine, Kathleen A. Richardson, Tian Gu, Juejun Hu, Hualiang Zhang
Metasurfaces have become a promising means for manipulating optical wavefronts in flat and high-performance optical devices.
no code implementations • 4 Jun 2019 • Xiaoying Zhang, Hong Xie, Hang Li, John C. S. Lui
Here, a key-term can relate to a subset of arms, for example, a category of articles in news recommendation.
no code implementations • 13 Mar 2019 • Shuai Ma, Jiahui Dai, Songtao Lu, Hang Li, Han Zhang, Chun Du, Shiyin Li
The dataset is available online, which contains eight types of modulated signals.
no code implementations • 25 Oct 2018 • Yilin Niu, chao qiao, Hang Li, Minlie Huang
Text similarity calculation is a fundamental problem in natural language processing and related fields.
1 code implementation • 16 Sep 2018 • Ziniu Hu, Yang Wang, Qu Peng, Hang Li
Although click data is widely used in search systems in practice, so far the inherent bias, most notably position bias, has prevented it from being used in training of a ranker for search, i.e., learning-to-rank.
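Inverse propensity weighting is the standard way to correct such position bias before training a ranker. A minimal sketch under the examination hypothesis follows; the toy propensities are illustrative, not the paper's estimated ones.

```python
def debiased_relevance(clicks, propensities):
    """Inverse propensity weighting for position bias: divide each
    click signal by the probability that its rank position was
    examined, so clicks at rarely-seen positions count more. Under
    the examination hypothesis (click = examined x relevant), the
    weighted click rate is an unbiased relevance estimate."""
    return [c / p for c, p in zip(clicks, propensities)]

# The same click at position 3 (examined 25% of the time) is a far
# stronger relevance signal than one at position 1 (always examined).
print(debiased_relevance([1, 0, 1], [1.0, 0.5, 0.25]))  # [1.0, 0.0, 4.0]
```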
no code implementations • 25 Jul 2018 • Qian Wang, Hang Li, Zhi Chen, Dou Zhao, Shuang Ye, Jiansheng Cai
In addition, we propose to use the convolutional recurrent neural network (CRNN), a combination of the CNN and the RNN, to learn local and contextual information in CSI for user authentication.
no code implementations • EMNLP 2018 • Zichao Li, Xin Jiang, Lifeng Shang, Hang Li
The generator, built as a sequence-to-sequence learning model, can produce paraphrases given a sentence.
no code implementations • EMNLP 2017 • Piji Li, Wai Lam, Lidong Bing, Weiwei Guo, Hang Li
The attention weights are learned automatically by an unsupervised data reconstruction framework which can capture the sentence salience.
9 code implementations • 31 Jul 2017 • Zhenguo Li, Fengwei Zhou, Fei Chen, Hang Li
In contrast, meta-learning learns from many related tasks a meta-learner that can learn a new task more accurately and faster with fewer examples, where the choice of meta-learners is crucial.
1 code implementation • ACL 2017 • Hao Zhou, Zhaopeng Tu, Shu-Jian Huang, Xiaohua Liu, Hang Li, Jia-Jun Chen
In typical neural machine translation (NMT), the decoder generates a sentence word by word, packing all linguistic granularities into the same time-scale of the RNN.
no code implementations • SEMEVAL 2017 • Nabiha Asghar, Pascal Poupart, Xin Jiang, Hang Li
We propose an online, end-to-end, neural generative conversational model for open-domain dialogue.
no code implementations • ICML 2017 • Lili Mou, Zhengdong Lu, Hang Li, Zhi Jin
Building neural networks to query a knowledge base (a table) with natural language is an emerging research topic in deep learning.
1 code implementation • 7 Nov 2016 • Zhaopeng Tu, Yang Liu, Lifeng Shang, Xiaohua Liu, Hang Li
Although end-to-end Neural Machine Translation (NMT) has achieved remarkable progress in the past two years, it suffers from a major drawback: translations generated by NMT systems often lack adequacy.
no code implementations • 17 Oct 2016 • Xing Wang, Zhengdong Lu, Zhaopeng Tu, Hang Li, Deyi Xiong, Min Zhang
Neural Machine Translation (NMT) is a new approach to machine translation that has made great progress in recent years.
no code implementations • COLING 2016 • Fandong Meng, Zhengdong Lu, Hang Li, Qun Liu
Conventional attention-based Neural Machine Translation (NMT) conducts dynamic alignment in generating the target sentence.
2 code implementations • TACL 2017 • Zhaopeng Tu, Yang Liu, Zhengdong Lu, Xiaohua Liu, Hang Li
In neural machine translation (NMT), generation of a target word depends on both source and target contexts.
no code implementations • EMNLP 2016 • Mingxuan Wang, Zhengdong Lu, Hang Li, Qun Liu
We propose to enhance the RNN decoder in a neural machine translator (NMT) with external memory, as a natural but powerful extension to the state in the decoding RNN.
no code implementations • 6 Jun 2016 • Yaohua Tang, Fandong Meng, Zhengdong Lu, Hang Li, Philip L. H. Yu
In this paper, we propose phraseNet, a neural machine translator with a phrase memory which stores phrase pairs in symbolic form, mined from corpus or specified by human experts.
no code implementations • NAACL 2016 • Long-Yue Wang, Zhaopeng Tu, Xiaojun Zhang, Hang Li, Andy Way, Qun Liu
Finally, we integrate the above outputs into our translation system to recall missing pronouns by both extracting rules from the DP-labelled training data and translating the DP-generated input sentences.
7 code implementations • ACL 2016 • Jiatao Gu, Zhengdong Lu, Hang Li, Victor O. K. Li
CopyNet can nicely integrate the regular way of word generation in the decoder with the new copying mechanism which can choose sub-sequences in the input sequence and put them at proper places in the output sequence.
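The copy mechanism can be sketched as a mixture of a generation distribution and attention-based copy scores over the source. This is a simplification: CopyNet learns the generate/copy balance jointly rather than using a fixed `p_copy` as assumed here.

```python
def copy_generate(gen_probs, copy_attn, source_tokens, p_copy=0.5):
    """CopyNet-style output distribution: with probability p_copy,
    emit a token copied from the source (weighted by the normalized
    attention in copy_attn); otherwise generate from the vocabulary
    distribution gen_probs. Out-of-vocabulary source tokens receive
    probability mass only through the copy path."""
    out = {w: (1 - p_copy) * p for w, p in gen_probs.items()}
    for tok, a in zip(source_tokens, copy_attn):
        out[tok] = out.get(tok, 0.0) + p_copy * a
    return out

# "Tesla" is not in the generation vocabulary but can still be emitted
# by copying it from the input sequence.
gen = {"the": 0.6, "car": 0.4}
dist = copy_generate(gen, [0.9, 0.1], ["Tesla", "car"])
print({w: round(p, 2) for w, p in dist.items()})
# {'the': 0.3, 'car': 0.25, 'Tesla': 0.45}
```

Note that in-vocabulary source tokens such as "car" accumulate mass from both paths, matching the integration of generation and copying described above.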
3 code implementations • ACL 2016 • Zhaopeng Tu, Zhengdong Lu, Yang Liu, Xiaohua Liu, Hang Li
Attention mechanism has enhanced state-of-the-art Neural Machine Translation (NMT) by jointly learning to align and translate.
1 code implementation • WS 2016 • Jun Yin, Xin Jiang, Zhengdong Lu, Lifeng Shang, Hang Li, Xiaoming Li
Empirical study shows the proposed model can effectively deal with the variations of questions and answers, and generate right and natural answers by referring to the facts in the knowledge-base.
no code implementations • 3 Dec 2015 • Pengcheng Yin, Zhengdong Lu, Hang Li, Ben Kao
Neural Enquirer can be trained with gradient descent, with which not only the parameters of the controlling components and semantic parsing component, but also the embeddings of the tables and query words can be learned from scratch.
1 code implementation • 22 Aug 2015 • Baolin Peng, Zhengdong Lu, Hang Li, Kam-Fai Wong
For example, it improves the accuracy on Path Finding(10K) from 33.4% [6] to over 98%.
no code implementations • 22 Jun 2015 • Fandong Meng, Zhengdong Lu, Zhaopeng Tu, Hang Li, Qun Liu
We propose DEEPMEMORY, a novel deep architecture for sequence-to-sequence learning, which performs the task through a series of nonlinear transformations from the representation of the input sequence (e.g., a Chinese sentence) to the final output sequence (e.g., translation to English).
no code implementations • 1 Jun 2015 • Lin Ma, Zhengdong Lu, Hang Li
We demonstrate the efficacy of our proposed model on the DAQUAR and COCO-QA datasets, two benchmark datasets for image QA, with performance significantly outperforming the state-of-the-art.
no code implementations • 28 Apr 2015 • Piji Li, Lidong Bing, Wai Lam, Hang Li, Yi Liao
We propose a new MDS paradigm called reader-aware multi-document summarization (RA-MDS).
3 code implementations • ICCV 2015 • Lin Ma, Zhengdong Lu, Lifeng Shang, Hang Li
In this paper, we propose multimodal convolutional neural networks (m-CNNs) for matching image and sentence.
Ranked #15 on Image Retrieval on Flickr30K 1K test
no code implementations • 17 Mar 2015 • Mingxuan Wang, Zhengdong Lu, Hang Li, Wenbin Jiang, Qun Liu
Different from previous work on neural network-based language modeling and generation (e.g., RNN or LSTM), we choose not to greedily summarize the history of words as a fixed length vector.
2 code implementations • NeurIPS 2014 • Baotian Hu, Zhengdong Lu, Hang Li, Qingcai Chen
Semantic matching is of central importance to many natural language tasks \cite{bordes2014semantic, RetrievalQA}.
Ranked #3 on Question Answering on SemEvalCQA
no code implementations • 9 Mar 2015 • Mingxuan Wang, Zhengdong Lu, Hang Li, Qun Liu
Many tasks in natural language processing, ranging from machine translation to question answering, can be reduced to the problem of matching two sentences or more generally two short texts.
no code implementations • IJCNLP 2015 • Zhaopeng Tu, Baotian Hu, Zhengdong Lu, Hang Li
We propose a novel method for translation selection in statistical machine translation, in which a convolutional neural network is employed to judge the similarity between a phrase pair in two languages.
4 code implementations • IJCNLP 2015 • Lifeng Shang, Zhengdong Lu, Hang Li
We propose Neural Responding Machine (NRM), a neural network-based response generator for Short-Text Conversation.
no code implementations • IJCNLP 2015 • Fandong Meng, Zhengdong Lu, Mingxuan Wang, Hang Li, Wenbin Jiang, Qun Liu
The recently proposed neural network joint model (NNJM) (Devlin et al., 2014) augments the n-gram target language model with a heuristically chosen source context window, achieving state-of-the-art performance in SMT.
no code implementations • 22 Oct 2014 • Jingbo Shang, Tianqi Chen, Hang Li, Zhengdong Lu, Yong Yu
In this paper, we tackle this challenge with a novel parallel and efficient algorithm for feature-based matrix factorization.
1 code implementation • 29 Aug 2014 • Zongcheng Ji, Zhengdong Lu, Hang Li
Human computer conversation is regarded as one of the most difficult problems in artificial intelligence.
no code implementations • NeurIPS 2013 • Zhengdong Lu, Hang Li
Many machine learning problems can be interpreted as learning for matching two types of objects (e.g., images and captions, users and products, queries and documents).
no code implementations • NeurIPS 2009 • Wei Chen, Tie-Yan Liu, Yanyan Lan, Zhi-Ming Ma, Hang Li
We show that these loss functions are upper bounds of the measure-based ranking errors.
no code implementations • NeurIPS 2009 • Fen Xia, Tie-Yan Liu, Hang Li
This paper aims to analyze whether existing listwise ranking methods are statistically consistent in the top-k setting.
no code implementations • NeurIPS 2008 • Tao Qin, Tie-Yan Liu, Xu-Dong Zhang, De-Sheng Wang, Hang Li
It can naturally represent the content information of objects as well as the relation information between objects, necessary for global ranking.