1 code implementation • 4 Dec 2024 • Xiaojun Xu, Jinghan Jia, Yuanshun Yao, Yang Liu, Hang Li
To embed our multi-bit watermark, we use two paraphrasers alternately to encode the pre-defined binary code at the sentence level.
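As a rough illustration of this alternating-paraphraser scheme (not the paper's actual implementation), the bit-assignment logic can be sketched with two toy stand-in paraphrasers and a trivial suffix-based detector:

```python
# Toy sketch of multi-bit sentence-level watermarking: each bit of the
# payload selects which of two paraphrasers rewrites the corresponding
# sentence. The paraphrasers below are trivial placeholders for the
# paper's learned LLM-based paraphrasers.

def paraphraser_0(sentence: str) -> str:
    # Stand-in for the first paraphraser: leave the sentence unchanged.
    return sentence

def paraphraser_1(sentence: str) -> str:
    # Stand-in for the second paraphraser: a detectable rewrite.
    return sentence.rstrip(".") + ", indeed."

def embed_watermark(sentences, bits):
    # Encode one payload bit per sentence via the choice of paraphraser.
    return [paraphraser_1(s) if b else paraphraser_0(s)
            for s, b in zip(sentences, bits)]

def extract_watermark(sentences):
    # The detector recovers each bit by classifying which paraphraser
    # produced the sentence (here: a trivial suffix check).
    return [1 if s.endswith(", indeed.") else 0 for s in sentences]

text = ["The model is trained on web data.", "Results improve with scale.",
        "We release the code."]
payload = [1, 0, 1]
watermarked = embed_watermark(text, payload)
print(extract_watermark(watermarked))  # → [1, 0, 1]
```

In the real system the detector is a classifier over paraphrase styles rather than a string check, but the per-sentence bit assignment follows the same pattern.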
no code implementations • 8 Oct 2024 • Chi-Lam Cheang, Guangzeng Chen, Ya Jing, Tao Kong, Hang Li, Yifeng Li, Yuxiao Liu, Hongtao Wu, Jiafeng Xu, Yichu Yang, Hanbo Zhang, Minzhao Zhu
We present GR-2, a state-of-the-art generalist robot agent for versatile and generalizable robot manipulation.
no code implementations • 7 Oct 2024 • Haokun Chen, Hang Li, Yao Zhang, Gengyuan Zhang, Jinhe Bi, Philip Torr, Jindong Gu, Denis Krompass, Volker Tresp
However, directly applying pretrained LDM to heterogeneous OSFL results in significant distribution shifts in synthetic data, leading to performance degradation in classification models trained on such data.
no code implementations • 6 Oct 2024 • Yibo Yan, Shen Wang, Jiahao Huo, Hang Li, Boyan Li, Jiamin Su, Xiong Gao, Yi-Fan Zhang, Tianlong Xu, Zhendong Chu, Aoxiao Zhong, Kun Wang, Hui Xiong, Philip S. Yu, Xuming Hu, Qingsong Wen
As the field of Multimodal Large Language Models (MLLMs) continues to evolve, their potential to revolutionize artificial intelligence is particularly promising, especially in addressing mathematical reasoning tasks.
no code implementations • 3 Oct 2024 • Yucheng Chu, Hang Li, Kaiqi Yang, Harry Shomer, Hui Liu, Yasemin Copur-Gencturk, Jiliang Tang
Open-ended short-answer questions (SAGs) have been widely recognized as a powerful tool for providing deeper insights into learners' responses in the context of learning analytics (LA).
no code implementations • 23 Sep 2024 • Guokang Wang, Hang Li, Shuyuan Zhang, Yanhong Liu, Huaping Liu
In real-world scenarios, many robotic manipulation tasks are hindered by occlusions and limited fields of view, posing significant challenges for passive observation-based models that rely on fixed or wrist-mounted cameras.
no code implementations • 19 Sep 2024 • Volker Tresp, Hang Li
The tensor brain has two major layers: the representation layer and the index layer.
no code implementations • 13 Sep 2024 • Hang Li, Wei Jin, Geri Skenderi, Harry Shomer, Wenzhuo Tang, Wenqi Fan, Jiliang Tang
In particular, we treat link prediction between a pair of nodes as a conditional likelihood estimation of its enclosing sub-graph.
no code implementations • 12 Sep 2024 • Hang Li, Tianlong Xu, Ethan Chang, Qingsong Wen
Knowledge tagging for questions is vital in modern intelligent educational applications, including learning progress diagnosis, practice question recommendations, and course content organization.
no code implementations • 20 Aug 2024 • Zijian Zhao, TingWei Chen, Zhijie Cai, Xiaoyang Li, Hang Li, Qimei Chen, Guangxu Zhu
Extensive research has been conducted in this field, focusing on areas such as gesture recognition, people identification, and fall detection.
Ranked #1 on Action Classification (zero-shot) on WiGesture
1 code implementation • 31 Jul 2024 • Shanbo Cheng, Zhichao Huang, Tom Ko, Hang Li, Ningxin Peng, Lu Xu, Qini Zhang
Aligned with professional human interpreters, we evaluate CLASI with a better human evaluation metric, valid information proportion (VIP), which measures the amount of information that can be successfully conveyed to the listeners.
no code implementations • 24 Jul 2024 • Hang Li, Hongming Yang, Qinghua Guo, J. Andrew Zhang, Yang Xiang, Yashan Pang
In this work, we investigate sensing parameter estimation in the presence of clutter in perceptive mobile networks (PMNs) that integrate radar sensing into mobile communications.
no code implementations • 15 Jul 2024 • Hang Li, Qiankun Dong, Xueshuo Xie, Xia Xu, Tao Li, Zhenwei Shi
To effectively perform multitemporal hyperspectral image unmixing, we introduce two key modules: the Global Awareness Module (GAM) and the Change Enhancement Module (CEM).
no code implementations • 11 Jul 2024 • Shuai Ma, Chuanhui Zhang, Bin Shen, Youlong Wu, Hang Li, Shiyin Li, Guangming Shi, Naofal Al-Dhahir
To address these challenges, in this paper, we propose a novel discrete semantic feature division multiple access (SFDMA) paradigm for multi-user digital interference networks.
no code implementations • 19 Jun 2024 • Hang Li, Tianlong Xu, Jiliang Tang, Qingsong Wen
Knowledge tagging for questions plays a crucial role in contemporary intelligent educational applications, including learning progress diagnosis, practice question recommendations, and course content organization.
1 code implementation • 16 Jun 2024 • Rui Zheng, Hongyi Guo, Zhihan Liu, Xiaoying Zhang, Yuanshun Yao, Xiaojun Xu, Zhaoran Wang, Zhiheng Xi, Tao Gui, Qi Zhang, Xuanjing Huang, Hang Li, Yang Liu
We theoretically demonstrate that this iterative reinforcement learning optimization converges to a Nash Equilibrium for the game induced by the agents.
no code implementations • 1 Jun 2024 • Zhi Zheng, Qian Feng, Hang Li, Alois Knoll, Jianxiang Feng
As a general-purpose reasoning machine, LLMs or Multimodal Large Language Models (MLLMs) are promising for detecting failures.
1 code implementation • 23 May 2024 • Peiyuan Feng, Yichen He, Guanhua Huang, Yuan Lin, Hanchong Zhang, Yuchen Zhang, Hang Li
Our ablation study highlights the indispensability of memory, tools, consultation, reflection, and reinforcement learning in achieving the agent's strong performance.
no code implementations • 26 Mar 2024 • Shen Wang, Tianlong Xu, Hang Li, Chaoli Zhang, Joleen Liang, Jiliang Tang, Philip S. Yu, Qingsong Wen
The advent of Large Language Models (LLMs) has brought in a new era of possibilities in the realm of education.
no code implementations • 26 Mar 2024 • Hang Li, Tianlong Xu, Jiliang Tang, Qingsong Wen
Knowledge concept tagging for questions plays a crucial role in contemporary intelligent educational applications, including learning progress diagnosis, practice question recommendations, and course content organization.
no code implementations • 25 Mar 2024 • Xiaojie Li, Songyang Zhang, Hang Li, Xiaoyang Li, Lexi Xu, Haigao Xu, Hui Mei, Guangxu Zhu, Nan Qi, Ming Xiao
Multi-band radiomap reconstruction (MB-RMR) is a key component in wireless communications for tasks such as spectrum management and network planning.
no code implementations • 22 Mar 2024 • Kaiqi Yang, Yucheng Chu, Taylor Darwin, Ahreum Han, Hang Li, Hongzhi Wen, Yasemin Copur-Gencturk, Jiliang Tang, Hui Liu
Teachers' mathematical content knowledge (CK) is of vital importance in teacher professional development (PD) programs.
no code implementations • 19 Mar 2024 • Boren Li, Hang Li, Hangxin Liu
Animatronic robots hold the promise of enabling natural human-robot interaction through lifelike facial expressions.
1 code implementation • 19 Mar 2024 • Zijian Zhao, TingWei Chen, Fanyi Meng, Hang Li, Xiaoyang Li, Guangxu Zhu
Despite the development of various deep learning methods for Wi-Fi sensing, packet loss often results in noncontinuous estimation of the Channel State Information (CSI), which negatively impacts the performance of the learning models.
Ranked #2 on Person Identification on WiGesture
no code implementations • 13 Feb 2024 • Sijia Liu, Yuanshun Yao, Jinghan Jia, Stephen Casper, Nathalie Baracaldo, Peter Hase, Yuguang Yao, Chris Yuhao Liu, Xiaojun Xu, Hang Li, Kush R. Varshney, Mohit Bansal, Sanmi Koyejo, Yang Liu
We explore machine unlearning (MU) in the domain of large language models (LLMs), referred to as LLM unlearning.
no code implementations • 12 Feb 2024 • Yang Liu, Peng Sun, Hang Li
By formally defining the training processes of large language models (LLMs), which usually encompass pre-training, supervised fine-tuning, and reinforcement learning with human feedback, within a single and unified machine learning paradigm, we can glean pivotal insights for advancing LLM technologies.
no code implementations • 2 Feb 2024 • Hang Li, Tianlong Xu, Chaoli Zhang, Eason Chen, Jing Liang, Xing Fan, Haoyang Li, Jiliang Tang, Qingsong Wen
The recent surge in generative AI technologies, such as large language models and diffusion models, has boosted the development of AI applications in various domains, including science, finance, and education.
1 code implementation • 2 Feb 2024 • Jiawei Wang, Yuchen Zhang, Jiaxin Zou, Yan Zeng, Guoqiang Wei, Liping Yuan, Hang Li
Its robust motion controllability is validated by drastic increases in the bounding box alignment metric.
no code implementations • 24 Jan 2024 • Chuting Yu, Hang Li, Ahmed Mourad, Bevan Koopman, Guido Zuccon
This paper considers Pseudo-Relevance Feedback (PRF) methods for dense retrievers in a resource constrained environment such as that of cheap cloud instances or embedded systems (e.g., smartphones and smartwatches), where memory and CPU are limited and GPUs are not present.
1 code implementation • 17 Jan 2024 • Trung Quoc Luong, Xinbo Zhang, Zhanming Jie, Peng Sun, Xiaoran Jin, Hang Li
ReFT first warms up the model with SFT, and then employs online reinforcement learning, specifically the PPO algorithm in this paper, to further fine-tune the model, where an abundance of reasoning paths are automatically sampled given the question and the rewards are naturally derived from the ground-truth answers.
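The reward derivation described above, where rewards come from ground-truth answers rather than a learned reward model, can be illustrated with a minimal outcome-based reward function. The "The answer is" extraction pattern below is a hypothetical convention for the sketch, not necessarily the exact one used in ReFT:

```python
import re

def extract_answer(reasoning_path: str):
    # Hypothetical convention: the sampled chain of thought ends with
    # "The answer is <number>."
    m = re.search(r"The answer is\s*(-?\d+(?:\.\d+)?)", reasoning_path)
    return m.group(1) if m else None

def outcome_reward(reasoning_path: str, ground_truth: str) -> float:
    # Reward depends only on the final answer matching the ground truth,
    # so many sampled reasoning paths can be scored with no human labels.
    pred = extract_answer(reasoning_path)
    return 1.0 if pred is not None and pred == ground_truth else 0.0

sampled_paths = [
    "3 apples plus 4 apples gives 7. The answer is 7.",
    "3 * 4 = 12. The answer is 12.",
]
rewards = [outcome_reward(p, "7") for p in sampled_paths]
print(rewards)  # → [1.0, 0.0]
```

These scalar rewards are what the PPO stage would optimize against after the SFT warm-up.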
no code implementations • 21 Dec 2023 • Zhichao Huang, Rong Ye, Tom Ko, Qianqian Dong, Shanbo Cheng, Mingxuan Wang, Hang Li
Given the great success of large language models (LLMs) across various tasks, in this paper, we introduce LLM-ST, a novel and effective speech translation model constructed upon a pre-trained LLM.
3 code implementations • 20 Dec 2023 • Hongtao Wu, Ya Jing, Chilam Cheang, Guangzeng Chen, Jiafeng Xu, Xinghang Li, Minghuan Liu, Hang Li, Tao Kong
In this paper, we extend the scope of this effectiveness by showing that visual robot manipulation can significantly benefit from large-scale video generative pre-training.
Ranked #4 on Zero-shot Generalization on CALVIN (using extra training data)
no code implementations • 20 Dec 2023 • Shuai Ma, Haihong Sheng, Junchang Sun, Hang Li, Xiaodong Liu, Chen Qiu, Majid Safari, Naofal Al-Dhahir, Shiyin Li
Then, we derive the expression of the LiFi transmission rate based on M-ary pulse amplitude modulation (M-PAM).
no code implementations • 6 Dec 2023 • Cong Zhang, Chi Tian, Tianfang Han, Hang Li, Yiheng Feng, Yunfeng Chen, Robert W. Proctor, Jiansong Zhang
A real-world roundabout in Ann Arbor, Michigan was built in the co-simulation platform as the study area, and the merging scenarios were investigated.
1 code implementation • CVPR 2024 • Hang Li, Chengzhi Shen, Philip Torr, Volker Tresp, Jindong Gu
A risk with these models is the potential generation of inappropriate content, such as biased or harmful images.
no code implementations • CVPR 2024 • Yan Zeng, Guoqiang Wei, Jiani Zheng, Jiaxin Zou, Yang Wei, Yuchen Zhang, Hang Li
Creating high-dynamic videos such as motion-rich actions and sophisticated visual effects poses a significant challenge in the field of artificial intelligence.
Ranked #3 on Text-to-Video Generation on UCF-101
no code implementations • 7 Nov 2023 • Ugur Sahin, Hang Li, Qadeer Khan, Daniel Cremers, Volker Tresp
Leveraging these generative hard negative samples, we significantly enhance VLMs' performance in tasks involving multimodal compositional reasoning.
no code implementations • 2 Nov 2023 • Xinghang Li, Minghuan Liu, Hanbo Zhang, Cunjun Yu, Jie Xu, Hongtao Wu, Chilam Cheang, Ya Jing, Weinan Zhang, Huaping Liu, Hang Li, Tao Kong
We believe RoboFlamingo has the potential to be a cost-effective and easy-to-use solution for robotics manipulation, empowering everyone with the ability to fine-tune their own robotics policy.
no code implementations • 9 Oct 2023 • Yegor Klochkov, Jean-Francois Ton, Ruocheng Guo, Yang Liu, Hang Li
We address the problem of concept removal in deep neural networks, aiming to learn representations that do not encode certain specified concepts (e.g., gender).
1 code implementation • 27 Sep 2023 • Geri Skenderi, Hang Li, Jiliang Tang, Marco Cristani
They aim to learn an energy-based model by predicting the latent representation of a target signal y from the latent representation of a context signal x. JEPAs bypass the need for the negative and positive samples traditionally required by contrastive learning, while avoiding the overfitting issues associated with generative pretraining.
Ranked #11 on Graph Classification on REDDIT-B
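A minimal numerical sketch of the JEPA objective described above, using toy linear encoders; the EMA target update and stop-gradient used in practice are noted but omitted here:

```python
import numpy as np

rng = np.random.default_rng(0)

def encoder(x, W):
    # Linear encoder mapping a signal to its latent representation.
    return x @ W

# Context signal x and target signal y (e.g., two views of one graph).
x = rng.normal(size=(8, 16))
y = rng.normal(size=(8, 16))

W_ctx = rng.normal(size=(16, 4))   # context encoder weights
W_tgt = rng.normal(size=(16, 4))   # target encoder (an EMA copy in practice)
W_pred = rng.normal(size=(4, 4))   # predictor weights

# JEPA objective: predict the *latent* of the target from the latent of
# the context -- no negative samples and no pixel/node-level reconstruction.
z_ctx = encoder(x, W_ctx)
z_tgt = encoder(y, W_tgt)          # treated as fixed (stop-gradient)
z_pred = z_ctx @ W_pred
loss = np.mean((z_pred - z_tgt) ** 2)
print(loss >= 0.0)  # → True
```

Only the predictor and context encoder receive gradients in a real implementation; the target encoder is updated as a moving average.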
1 code implementation • 20 Sep 2023 • Zhanming Jie, Trung Quoc Luong, Xinbo Zhang, Xiaoran Jin, Hang Li
We also find that Python is a better choice of language than Wolfram for program CoTs.
1 code implementation • 10 Aug 2023 • Yang Liu, Yuanshun Yao, Jean-Francois Ton, Xiaoying Zhang, Ruocheng Guo, Hao Cheng, Yegor Klochkov, Muhammad Faaiz Taufiq, Hang Li
However, a major challenge faced by practitioners is the lack of clear guidance on evaluating whether LLM outputs align with social norms, values, and regulations.
2 code implementations • 7 Jul 2023 • Zhikai Chen, Haitao Mao, Hang Li, Wei Jin, Hongzhi Wen, Xiaochi Wei, Shuaiqiang Wang, Dawei Yin, Wenqi Fan, Hui Liu, Jiliang Tang
The most popular pipeline for learning on graphs with textual node attributes primarily relies on Graph Neural Networks (GNNs), and utilizes shallow text embedding as initial node representations, which has limitations in general knowledge and profound semantic understanding.
no code implementations • 12 Jun 2023 • Ruocheng Guo, Jean-François Ton, Yang Liu, Hang Li
Widely used deterministic LTR models can lead to unfair exposure distribution, especially when items with the same relevance receive slightly different ranking scores.
no code implementations • 19 May 2023 • Xi Yang, Hang Li, Qinghua Guo, J. Andrew Zhang, Xiaojing Huang, Zhiqun Cheng
In this work, we study sensing-aided uplink transmission in an integrated sensing and communication (ISAC) vehicular network with the use of orthogonal time frequency space (OTFS) modulation.
no code implementations • 3 Mar 2023 • Shuai Ma, Weining Qiao, Youlong Wu, Hang Li, Guangming Shi, Dahua Gao, Yuanming Shi, Shiyin Li, Naofal Al-Dhahir
Instead of broadcasting all extracted features, the semantic encoder extracts the disentangled semantic features, and then only the users' intended semantic features are selected for broadcasting, which can further improve the transmission efficiency.
no code implementations • 27 Feb 2023 • Shuai Ma, Weining Qiao, Youlong Wu, Hang Li, Guangming Shi, Dahua Gao, Yuanming Shi, Shiyin Li, Naofal Al-Dhahir
Furthermore, based on the $\beta$-variational autoencoder ($\beta$-VAE), we propose a practical explainable semantic communication system design, which simultaneously achieves semantic features selection and is robust against semantic channel noise.
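For reference, the standard β-VAE objective that this design builds on can be sketched as follows; the encoder/decoder are replaced by toy arrays, and nothing here is specific to the paper's semantic communication system:

```python
import numpy as np

def kl_diag_gaussian(mu, logvar):
    # KL( N(mu, diag(exp(logvar))) || N(0, I) ), per sample.
    return 0.5 * np.sum(np.exp(logvar) + mu**2 - 1.0 - logvar, axis=1)

def beta_vae_loss(x, x_recon, mu, logvar, beta):
    # beta > 1 weights the KL term more heavily, encouraging
    # disentangled latent features at the cost of reconstruction.
    recon = np.sum((x - x_recon) ** 2, axis=1)
    return np.mean(recon + beta * kl_diag_gaussian(mu, logvar))

rng = np.random.default_rng(1)
x = rng.normal(size=(4, 8))
x_recon = x + 0.1 * rng.normal(size=(4, 8))   # toy decoder output
mu = rng.normal(size=(4, 2))                  # toy encoder means
logvar = np.zeros((4, 2))                     # toy encoder log-variances
print(beta_vae_loss(x, x_recon, mu, logvar, beta=4.0))
```

The disentanglement pressure from a larger β is what makes the individual latent features separable, and hence selectable, for transmission.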
1 code implementation • 6 Feb 2023 • Chengyi Liu, Wenqi Fan, Yunqing Liu, Jiatong Li, Hang Li, Hui Liu, Jiliang Tang, Qing Li
Given the great success of diffusion models in image generation, increasing efforts have been made to leverage these techniques to advance graph generation in recent years.
no code implementations • 2 Feb 2023 • Lingli He, Jiahui Sun, Yiwei Gao, Bin Li, Yuhang Wang, Yanli Dong, Weidong An, Hang Li, Bei Yang, Yuhan Ge, Xuejun Cai Zhang, Yun Stone Shi, Yan Zhao
Glutamate-gated kainate receptors (KARs) are ubiquitous in the central nervous system of vertebrates, mediate synaptic transmission at the post-synapse, and modulate transmitter release at the pre-synapse.
1 code implementation • 13 Jan 2023 • Xiaoying Zhang, Hongning Wang, Hang Li
This calls for a fine-grained understanding of a user's preferences over items, where one needs to recognize whether the user's choice is driven by the quality of the item itself or by the pre-selected attributes of the item.
1 code implementation • 12 Jan 2023 • Xinsong Zhang, Yan Zeng, Jipeng Zhang, Hang Li
X-FM has one language encoder, one vision encoder, and one fusion encoder, as well as a new training method.
Ranked #3 on Visual Reasoning on NLVR2 Test
no code implementations • ICCV 2023 • Hang Li, Jindong Gu, Rajat Koner, Sahand Sharifzadeh, Volker Tresp
To study this question, we propose a reconstruction task where Flamingo generates a description for a given image and DALL-E uses this description as input to synthesize a new image.
1 code implementation • 21 Dec 2022 • Bevan Koopman, Ahmed Mourad, Hang Li, Anton van der Vegt, Shengyao Zhuang, Simon Gibson, Yash Dang, David Lawrence, Guido Zuccon
On the basis of these needs we release an information retrieval test collection comprising real questions, a large collection of scientific documents split in passages, and ground truth relevance assessments indicating which passages are relevant to each question.
no code implementations • 21 Dec 2022 • Shuai Ma, Jing Wang, Chun Du, Hang Li, Xiaodong Liu, Youlong Wu, Naofal Al-Dhahir, Shiyin Li
To address this challenge, we propose an alternating optimization algorithm to obtain the transmit beamforming and the PD orientation.
1 code implementation • 18 Dec 2022 • Shuai Wang, Hang Li, Guido Zuccon
One challenge to creating an effective systematic review Boolean query is the selection of effective MeSH Terms to include in the query.
2 code implementations • 22 Nov 2022 • Yan Zeng, Xinsong Zhang, Hang Li, Jiawei Wang, Jipeng Zhang, Wangchunshu Zhou
Vision language pre-training aims to learn alignments between vision and language from a large amount of data.
Ranked #1 on Cross-Modal Retrieval on Flickr30k (using extra training data)
no code implementations • 17 Nov 2022 • Yuanshun Yao, Chong Wang, Hang Li
The key idea is to train a surrogate model to learn the effect of removing a subset of user history on the recommendation.
1 code implementation • 6 Oct 2022 • Zhaowei Zhu, Yuanshun Yao, Jiankai Sun, Hang Li, Yang Liu
Our theoretical analyses show that directly using proxy models can give a false sense of (un)fairness.
no code implementations • 14 Aug 2022 • Wenyan Liu, Juncheng Wan, Xiaoling Wang, Weinan Zhang, Dell Zhang, Hang Li
In this paper, we investigate fast machine unlearning techniques for recommender systems that can remove the effect of a small amount of training data from the recommendation model without incurring the full cost of retraining.
1 code implementation • 13 Jun 2022 • Hang Li, Qadeer Khan, Volker Tresp, Daniel Cremers
The human brain can be considered to be a graphical structure comprising tens of billions of biological neurons connected by synapses.
1 code implementation • 3 Jun 2022 • Tong Liu, Yushan Liu, Marcel Hildebrandt, Mitchell Joblin, Hang Li, Volker Tresp
We investigate the calibration of graph neural networks for node classification, study the effect of existing post-processing calibration methods, and analyze the influence of model capacity, graph density, and a new loss function on calibration.
1 code implementation • 16 May 2022 • Fei Huang, Hao Zhou, Yang Liu, Hang Li, Minlie Huang
Non-autoregressive Transformers (NATs) significantly reduce the decoding latency by generating all tokens in parallel.
no code implementations • 12 May 2022 • Hang Li, Ahmed Mourad, Bevan Koopman, Guido Zuccon
Pseudo-Relevance Feedback (PRF) assumes that the top results retrieved by a first-stage ranker are relevant to the original query and uses them to improve the query representation for a second round of retrieval.
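The PRF idea can be sketched in its simplest dense-retrieval form: a Rocchio-style interpolation between the query embedding and the centroid of the top-ranked documents. The paper studies more sophisticated PRF approaches; this is only the textbook baseline:

```python
import numpy as np

def prf_query(query_emb, top_doc_embs, alpha=0.7):
    # Pseudo-relevance feedback for a dense retriever: assume the
    # top-ranked documents from the first stage are relevant and pull
    # the query embedding toward their centroid before the second
    # round of retrieval (a Rocchio-style update).
    centroid = top_doc_embs.mean(axis=0)
    return alpha * query_emb + (1.0 - alpha) * centroid

rng = np.random.default_rng(2)
q = rng.normal(size=8)          # toy query embedding
docs = rng.normal(size=(3, 8))  # toy embeddings of top-3 retrieved docs
q_new = prf_query(q, docs)
print(q_new.shape)  # → (8,)
```

The interpolation weight `alpha` controls how much the feedback documents are trusted relative to the original query.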
no code implementations • 30 Apr 2022 • Hang Li, Shuai Wang, Shengyao Zhuang, Ahmed Mourad, Xueguang Ma, Jimmy Lin, Guido Zuccon
In this paper we consider the problem of combining the relevance signals from sparse and dense retrievers in the context of Pseudo Relevance Feedback (PRF).
1 code implementation • 10 Apr 2022 • Yu Kang, Tianqiao Liu, Hang Li, Yang Hao, Wenbiao Ding
Our pre-training framework consists of the following components: (1) Intra-modal Denoising Auto-Encoding (IDAE), which is able to reconstruct input text (audio) representations from a noisy version of itself.
1 code implementation • 1 Apr 2022 • Shengyao Zhuang, Hang Li, Guido Zuccon
We then exploit such historic implicit interactions to improve the effectiveness of a DR. A key challenge that we study is the effect that biases in the click signal, such as position bias, have on the DRs.
2 code implementations • 20 Mar 2022 • Zhixuan Liu, ZiHao Wang, Yuan Lin, Hang Li
Deep neural networks, empowered by pre-trained language models, have achieved remarkable results in natural language understanding (NLU) tasks.
no code implementations • 2 Mar 2022 • Yuanshun Yao, Chong Wang, Hang Li
Modern recommender systems face an increasing need to explain their recommendations.
no code implementations • 1 Mar 2022 • Jiabao Wang, Yang Li, Xiu-Shen Wei, Hang Li, Zhuang Miao, Rui Zhang
Unsupervised learning technology has caught up with or even surpassed supervised learning technology in general object classification (GOC) and person re-identification (re-ID).
1 code implementation • 21 Dec 2021 • Hao Peng, Hang Li, Lei Hou, Juanzi Li, chao qiao
We also develop a dataset for the problem using an existing MKB.
1 code implementation • 13 Dec 2021 • Hang Li, Shengyao Zhuang, Ahmed Mourad, Xueguang Ma, Jimmy Lin, Guido Zuccon
Finally, we contribute a study of the generalisability of the ANCE-PRF method when dense retrievers other than ANCE are used for the first round of retrieval and for encoding the PRF signal.
no code implementations • NeurIPS 2021 • Haoyang Li, Xin Wang, Ziwei Zhang, Zehuan Yuan, Hang Li, Wenwu Zhu
Then we propose a novel factor-wise discrimination objective in a contrastive learning manner, which can force the factorized representations to independently reflect the expressive information from different latent factors.
1 code implementation • 16 Nov 2021 • Yan Zeng, Xinsong Zhang, Hang Li
Most existing methods in vision language pre-training rely on object-centric features extracted through object detection and make fine-grained alignments between the extracted features and texts.
Ranked #1 on Image Retrieval on Flickr30K 1K test (using extra training data)
2 code implementations • 14 Oct 2021 • Feng Wang, Tao Kong, Rufeng Zhang, Huaping Liu, Hang Li
To solve this problem, we propose to maximize the mutual information between the input and the class predictions.
Ranked #1 on Image Classification on Oxford-IIIT Pet Dataset
Tasks: Fine-Grained Image Classification, Representation Learning, +5
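The mutual-information objective mentioned above can be written as I(X;Y) = H(E[p(y|x)]) − E[H(p(y|x))] and computed directly from a batch of class-prediction probabilities. A minimal sketch with toy values (not the paper's implementation):

```python
import numpy as np

def entropy(p, axis=-1):
    # Shannon entropy in nats; clip to avoid log(0).
    p = np.clip(p, 1e-12, 1.0)
    return -np.sum(p * np.log(p), axis=axis)

def mutual_information(pred_probs):
    # I(X; Y) = H( E_x[p(y|x)] ) - E_x[ H(p(y|x)) ]: high when each
    # prediction is confident (low conditional entropy) but the
    # predictions are spread across classes over the whole batch
    # (high marginal entropy).
    marginal = pred_probs.mean(axis=0)
    return entropy(marginal) - entropy(pred_probs, axis=-1).mean()

# Confident, class-balanced predictions maximize the objective ...
confident = np.array([[0.99, 0.01], [0.01, 0.99]])
# ... while uniform predictions carry no information about the input.
uniform = np.full((2, 2), 0.5)
print(mutual_information(confident) > mutual_information(uniform))  # → True
```

Maximizing this quantity discourages both degenerate solutions: collapsing all inputs to one class and producing uninformative uniform predictions.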
1 code implementation • 27 Sep 2021 • Volker Tresp, Sahand Sharifzadeh, Hang Li, Dario Konopatzki, Yunpu Ma
Although memory appears to be about the past, its main purpose is to support the agent in the present and the future.
1 code implementation • ACL 2022 • Xueqing Wu, Jiacheng Zhang, Hang Li
We first employ a seq2seq model fine-tuned from a pre-trained language model to perform the task.
1 code implementation • EMNLP 2021 • Hang Li, Yu Kang, Tianqiao Liu, Wenbiao Ding, Zitao Liu
Existing audio-language task-specific predictive approaches focus on building complicated late-fusion mechanisms.
no code implementations • Findings (EMNLP) 2021 • Tao Wang, Chengqi Zhao, Mingxuan Wang, Lei Li, Hang Li, Deyi Xiong
This paper presents Self-correcting Encoding (Secoco), a framework that effectively deals with input noise for robust neural machine translation by introducing self-correcting predictors.
1 code implementation • 25 Aug 2021 • Hang Li, Ahmed Mourad, Shengyao Zhuang, Bevan Koopman, Guido Zuccon
Text-based PRF results show that the use of PRF had a mixed effect on deep rerankers across different datasets.
1 code implementation • 15 Jul 2021 • Yang Hao, Hang Li, Wenbiao Ding, Zhongqin Wu, Jiliang Tang, Rose Luckin, Zitao Liu
In this work, we study computational approaches to detect online dialogic instructions, which are widely used to help students understand learning materials, and build effective study habits.
no code implementations • 15 Jul 2021 • Jiahao Chen, Hang Li, Wenbiao Ding, Zitao Liu
In this paper, we propose a simple yet effective solution to build practical teacher recommender systems for online one-on-one classes.
1 code implementation • 15 Jul 2021 • Hang Li, Yu Kang, Yang Hao, Wenbiao Ding, Zhongqin Wu, Zitao Liu
The quality of vocal delivery is one of the key indicators for evaluating teacher enthusiasm, which has been widely accepted to be connected to the overall course qualities.
1 code implementation • 13 Jul 2021 • Rajat Koner, Hang Li, Marcel Hildebrandt, Deepan Das, Volker Tresp, Stephan Günnemann
We conduct an experimental study on the challenging dataset GQA, based on both manually curated and automatically generated scene graphs.
1 code implementation • 7 Jul 2021 • Xiaohan Xing, Yuenan Hou, Hang Li, Yixuan Yuan, Hongsheng Li, Max Q. -H. Meng
With the contribution of the CCD and CRP, our CRCKD algorithm can distill the relational knowledge more comprehensively.
no code implementations • 18 Mar 2021 • Aili Shen, Meladel Mistica, Bahar Salehi, Hang Li, Timothy Baldwin, Jianzhong Qi
While pretrained language models ("LM") have driven impressive gains over morpho-syntactic and semantic tasks, their ability to model discourse and pragmatic phenomena is less clear.
1 code implementation • 2 Feb 2021 • Sensong An, Bowen Zheng, Mikhail Y. Shalaginov, Hong Tang, Hang Li, Li Zhou, Yunxi Dong, Mohammad Haerinia, Anuradha Murthy Agarwal, Clara Rivero-Baleine, Myungkoo Kang, Kathleen A. Richardson, Tian Gu, Juejun Hu, Clayton Fowler, Hualiang Zhang
Metasurfaces have provided a novel and promising platform for the realization of compact and large-scale optical devices.
1 code implementation • ACL 2021 • Yue Feng, Yang Wang, Hang Li
This paper is concerned with dialogue state tracking (DST) in a task-oriented dialogue system.
Ranked #1 on Multi-domain Dialogue State Tracking on SGD
1 code implementation • EMNLP 2021 • Tianqiao Liu, Qiang Fang, Wenbiao Ding, Hang Li, Zhongqin Wu, Zitao Liu
There is an increasing interest in the use of mathematical word problem (MWP) generation in educational assessment.
no code implementations • Findings (ACL) 2021 • Xinsong Zhang, Pengshuai Li, Hang Li
In fact, both fine-grained and coarse-grained tokenizations have advantages and disadvantages for learning of pre-trained language models.
no code implementations • 17 Jul 2020 • Hang Li, Dong Wei, Shilei Cao, Kai Ma, Liansheng Wang, Yefeng Zheng
If a superpixel intersects with the annotation boundary, we assume a high probability of uncertain labeling within this area.
no code implementations • 2 Jul 2020 • Marcel Hildebrandt, Hang Li, Rajat Koner, Volker Tresp, Stephan Günnemann
We propose a novel method that approaches the task by performing context-driven, sequential reasoning based on the objects and their semantic and spatial relationships present in the scene.
1 code implementation • ACL 2020 • Hayate Iso, chao qiao, Hang Li
We propose a novel text editing task, referred to as "fact-based text editing", in which the goal is to revise a given document to better describe the facts in a knowledge base (e.g., several triples).
Ranked #1 on Fact-based Text Editing on WebEdit
no code implementations • 21 May 2020 • Hang Li, Chen Ma, Wei Xu, Xue Liu
Building compact convolutional neural networks (CNNs) with reliable performance is a critical but challenging task, especially when deploying them in real-world applications.
5 code implementations • ACL 2020 • Shaohua Zhang, Haoran Huang, Jicong Liu, Hang Li
A state-of-the-art method for the task selects a character from a list of candidates for correction (including non-correction) at each position of the sentence on the basis of BERT, the language representation model.
no code implementations • 15 May 2020 • Hang Li, Zhiwei Wang, Jiliang Tang, Wenbiao Ding, Zitao Liu
Classroom activity detection (CAD) aims at accurately recognizing speaker roles (either teacher or student) in classrooms.
no code implementations • 21 Mar 2020 • Hang Li, Wenbiao Ding, Zitao Liu
We conduct a wide range of offline and online experiments to demonstrate the effectiveness of our approach.
1 code implementation • 1 Jan 2020 • Sensong An, Bowen Zheng, Mikhail Y. Shalaginov, Hong Tang, Hang Li, Li Zhou, Jun Ding, Anuradha Murthy Agarwal, Clara Rivero-Baleine, Myungkoo Kang, Kathleen A. Richardson, Tian Gu, Juejun Hu, Clayton Fowler, Hualiang Zhang
Metasurfaces have shown promising potentials in shaping optical wavefronts while remaining compact compared to bulky geometric optics devices.
no code implementations • 22 Oct 2019 • Hang Li, Yu Kang, Wenbiao Ding, Song Yang, Songfan Yang, Gale Yan Huang, Zitao Liu
The experimental results demonstrate the benefits of our approach on learning attention based neural network from classroom data with different modalities, and show our approach is able to outperform state-of-the-art baselines in terms of various evaluation metrics.
no code implementations • 1 Sep 2019 • Jiahao Chen, Hang Li, Wenxin Wang, Wenbiao Ding, Gale Yan Huang, Zitao Liu
To warn the unqualified instructors and ensure the overall education quality, we build a monitoring and alerting system by utilizing multimodal information from the online environment.
no code implementations • 13 Aug 2019 • Sensong An, Bowen Zheng, Hong Tang, Mikhail Y. Shalaginov, Li Zhou, Hang Li, Tian Gu, Juejun Hu, Clayton Fowler, Hualiang Zhang
Metasurfaces have enabled precise electromagnetic wave manipulation with strong potential to obtain unprecedented functionalities and multifunctional behavior in flat optical devices.
no code implementations • 8 Jun 2019 • Sensong An, Clayton Fowler, Bowen Zheng, Mikhail Y. Shalaginov, Hong Tang, Hang Li, Li Zhou, Jun Ding, Anuradha Murthy Agarwal, Clara Rivero-Baleine, Kathleen A. Richardson, Tian Gu, Juejun Hu, Hualiang Zhang
Metasurfaces have become a promising means for manipulating optical wavefronts in flat and high-performance optical devices.
no code implementations • 4 Jun 2019 • Xiaoying Zhang, Hong Xie, Hang Li, John C. S. Lui
Here, a key-term can relate to a subset of arms, for example, a category of articles in news recommendation.
no code implementations • 13 Mar 2019 • Shuai Ma, Jiahui Dai, Songtao Lu, Hang Li, Han Zhang, Chun Du, Shiyin Li
The dataset is available online, which contains eight types of modulated signals.
no code implementations • 25 Oct 2018 • Yilin Niu, chao qiao, Hang Li, Minlie Huang
Text similarity calculation is a fundamental problem in natural language processing and related fields.
1 code implementation • 16 Sep 2018 • Ziniu Hu, Yang Wang, Qu Peng, Hang Li
Although click data is widely used in search systems in practice, so far the inherent bias, most notably position bias, has prevented it from being used in training of a ranker for search, i.e., learning-to-rank.
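The standard counterfactual remedy for position bias is inverse propensity weighting, which can be sketched as follows. The fixed propensity values here are illustrative; in the paper's setting the biases are estimated rather than assumed known:

```python
def ipw_relevance(clicks, propensities):
    # Inverse propensity weighting: a click observed at rank k is
    # up-weighted by 1 / P(position k is examined), which in
    # expectation removes position bias from the click signal.
    return [c / p for c, p in zip(clicks, propensities)]

# Examination probability decays with rank (a common position-bias model).
propensities = [1.0, 0.5, 0.25]
clicks = [1, 1, 0]
print(ipw_relevance(clicks, propensities))  # → [1.0, 2.0, 0.0]
```

The weighted clicks can then serve as unbiased relevance labels when training a learning-to-rank model.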
no code implementations • 25 Jul 2018 • Qian Wang, Hang Li, Zhi Chen, Dou Zhao, Shuang Ye, Jiansheng Cai
In addition, we propose to use the convolutional recurrent neural network (CRNN), a combination of the CNN and the RNN, to learn local and contextual information in CSI for user authentication.
no code implementations • EMNLP 2018 • Zichao Li, Xin Jiang, Lifeng Shang, Hang Li
The generator, built as a sequence-to-sequence learning model, can produce paraphrases given a sentence.
no code implementations • EMNLP 2017 • Piji Li, Wai Lam, Lidong Bing, Weiwei Guo, Hang Li
The attention weights are learned automatically by an unsupervised data reconstruction framework which can capture the sentence salience.
9 code implementations • 31 Jul 2017 • Zhenguo Li, Fengwei Zhou, Fei Chen, Hang Li
In contrast, meta-learning learns from many related tasks a meta-learner that can learn a new task more accurately and faster with fewer examples, where the choice of meta-learners is crucial.
1 code implementation • ACL 2017 • Hao Zhou, Zhaopeng Tu, Shu-Jian Huang, Xiaohua Liu, Hang Li, Jia-Jun Chen
In typical neural machine translation (NMT), the decoder generates a sentence word by word, packing all linguistic granularities in the same time-scale of RNN.
no code implementations • SEMEVAL 2017 • Nabiha Asghar, Pascal Poupart, Xin Jiang, Hang Li
We propose an online, end-to-end, neural generative conversational model for open-domain dialogue.
no code implementations • ICML 2017 • Lili Mou, Zhengdong Lu, Hang Li, Zhi Jin
Building neural networks to query a knowledge base (a table) with natural language is an emerging research topic in deep learning.
1 code implementation • 7 Nov 2016 • Zhaopeng Tu, Yang Liu, Lifeng Shang, Xiaohua Liu, Hang Li
Although end-to-end Neural Machine Translation (NMT) has achieved remarkable progress in the past two years, it suffers from a major drawback: translations generated by NMT systems often lack adequacy.
no code implementations • COLING 2016 • Fandong Meng, Zhengdong Lu, Hang Li, Qun Liu
Conventional attention-based Neural Machine Translation (NMT) conducts dynamic alignment in generating the target sentence.
no code implementations • 17 Oct 2016 • Xing Wang, Zhengdong Lu, Zhaopeng Tu, Hang Li, Deyi Xiong, Min Zhang
Neural Machine Translation (NMT) is a new approach to machine translation that has made great progress in recent years.
2 code implementations • TACL 2017 • Zhaopeng Tu, Yang Liu, Zhengdong Lu, Xiaohua Liu, Hang Li
In neural machine translation (NMT), generation of a target word depends on both source and target contexts.
no code implementations • EMNLP 2016 • Mingxuan Wang, Zhengdong Lu, Hang Li, Qun Liu
We propose to enhance the RNN decoder in a neural machine translator (NMT) with external memory, as a natural but powerful extension to the state in the decoding RNN.
no code implementations • 6 Jun 2016 • Yaohua Tang, Fandong Meng, Zhengdong Lu, Hang Li, Philip L. H. Yu
In this paper, we propose phraseNet, a neural machine translator with a phrase memory which stores phrase pairs in symbolic form, mined from corpus or specified by human experts.
no code implementations • NAACL 2016 • Long-Yue Wang, Zhaopeng Tu, Xiaojun Zhang, Hang Li, Andy Way, Qun Liu
Finally, we integrate the above outputs into our translation system to recall missing pronouns by both extracting rules from the DP-labelled training data and translating the DP-generated input sentences.
7 code implementations • ACL 2016 • Jiatao Gu, Zhengdong Lu, Hang Li, Victor O. K. Li
CopyNet can nicely integrate the regular way of word generation in the decoder with the new copying mechanism which can choose sub-sequences in the input sequence and put them at proper places in the output sequence.
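The generate-vs-copy integration described above can be sketched as a gated mixture of two distributions: a vocabulary softmax and a copy distribution over source tokens. The scores, gate value, and function names below are made-up illustrations of the general mechanism, not CopyNet's exact parameterization.

```python
import math

def softmax(xs):
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    total = sum(es)
    return [e / total for e in es]

def mix_generate_and_copy(vocab, gen_scores, src_tokens, copy_scores, p_gen):
    """Final P(w) = p_gen * P_generate(w) + (1 - p_gen) * P_copy(w)."""
    final = {w: p_gen * p for w, p in zip(vocab, softmax(gen_scores))}
    for tok, p in zip(src_tokens, softmax(copy_scores)):
        # Copying can emit out-of-vocabulary source tokens directly.
        final[tok] = final.get(tok, 0.0) + (1 - p_gen) * p
    return final

vocab = ["the", "cat", "sat"]
src = ["Jiatao", "the"]          # "Jiatao" is out-of-vocabulary
dist = mix_generate_and_copy(vocab, [1.0, 0.5, 0.1], src, [2.0, 0.0], p_gen=0.6)
```

Note how "the" receives probability mass from both paths, while the OOV name "Jiatao" can still be produced purely through the copy path.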
3 code implementations • ACL 2016 • Zhaopeng Tu, Zhengdong Lu, Yang Liu, Xiaohua Liu, Hang Li
Attention mechanism has enhanced state-of-the-art Neural Machine Translation (NMT) by jointly learning to align and translate.
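A common extension of the jointly-learned attention above is a coverage vector that accumulates past attention so the decoder is discouraged from attending to the same source word repeatedly. The sketch below shows that idea with a simple subtractive penalty; the scores and penalty form are illustrative assumptions, not the paper's exact model.

```python
import math

def softmax(xs):
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    total = sum(es)
    return [e / total for e in es]

def attend_with_coverage(base_scores, coverage, penalty=2.0):
    # Down-weight source positions that have already accumulated attention.
    scores = [s - penalty * c for s, c in zip(base_scores, coverage)]
    attn = softmax(scores)
    new_coverage = [c + a for c, a in zip(coverage, attn)]
    return attn, new_coverage

coverage = [0.0, 0.0, 0.0]
attn1, coverage = attend_with_coverage([2.0, 1.0, 0.5], coverage)
attn2, coverage = attend_with_coverage([2.0, 1.0, 0.5], coverage)
# With identical base scores at both steps, step two shifts attention away
# from the position that received the most attention at step one, which is
# the behavior that reduces over-translation and under-translation.
```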
1 code implementation • WS 2016 • Jun Yin, Xin Jiang, Zhengdong Lu, Lifeng Shang, Hang Li, Xiaoming Li
Empirical study shows the proposed model can effectively deal with the variations of questions and answers, and generate correct and natural answers by referring to the facts in the knowledge-base.
no code implementations • 3 Dec 2015 • Pengcheng Yin, Zhengdong Lu, Hang Li, Ben Kao
Neural Enquirer can be trained with gradient descent, with which not only the parameters of the controlling components and semantic parsing component, but also the embeddings of the tables and query words can be learned from scratch.
1 code implementation • 22 Aug 2015 • Baolin Peng, Zhengdong Lu, Hang Li, Kam-Fai Wong
For example, it improves the accuracy on Path Finding (10K) from 33.4% [6] to over 98%.
no code implementations • 22 Jun 2015 • Fandong Meng, Zhengdong Lu, Zhaopeng Tu, Hang Li, Qun Liu
We propose DEEPMEMORY, a novel deep architecture for sequence-to-sequence learning, which performs the task through a series of nonlinear transformations from the representation of the input sequence (e.g., a Chinese sentence) to the final output sequence (e.g., translation to English).
no code implementations • 1 Jun 2015 • Lin Ma, Zhengdong Lu, Hang Li
We demonstrate the efficacy of our proposed model on the DAQUAR and COCO-QA datasets, two benchmark datasets for image QA, with performance significantly outperforming the state-of-the-art.
no code implementations • 28 Apr 2015 • Piji Li, Lidong Bing, Wai Lam, Hang Li, Yi Liao
We propose a new multi-document summarization (MDS) paradigm called reader-aware multi-document summarization (RA-MDS).
3 code implementations • ICCV 2015 • Lin Ma, Zhengdong Lu, Lifeng Shang, Hang Li
In this paper, we propose multimodal convolutional neural networks (m-CNNs) for matching image and sentence.
Ranked #16 on Image Retrieval on Flickr30K 1K test
no code implementations • 17 Mar 2015 • Mingxuan Wang, Zhengdong Lu, Hang Li, Wenbin Jiang, Qun Liu
Different from previous work on neural network-based language modeling and generation (e.g., RNN or LSTM), we choose not to greedily summarize the history of words as a fixed-length vector.
2 code implementations • NeurIPS 2014 • Baotian Hu, Zhengdong Lu, Hang Li, Qingcai Chen
Semantic matching is of central importance to many natural language tasks [bordes2014semantic, RetrievalQA].
Ranked #3 on Question Answering on SemEvalCQA
no code implementations • IJCNLP 2015 • Zhaopeng Tu, Baotian Hu, Zhengdong Lu, Hang Li
We propose a novel method for translation selection in statistical machine translation, in which a convolutional neural network is employed to judge the similarity between a phrase pair in two languages.
no code implementations • 9 Mar 2015 • Mingxuan Wang, Zhengdong Lu, Hang Li, Qun Liu
Many tasks in natural language processing, ranging from machine translation to question answering, can be reduced to the problem of matching two sentences or more generally two short texts.
4 code implementations • IJCNLP 2015 • Lifeng Shang, Zhengdong Lu, Hang Li
We propose Neural Responding Machine (NRM), a neural network-based response generator for Short-Text Conversation.
no code implementations • IJCNLP 2015 • Fandong Meng, Zhengdong Lu, Mingxuan Wang, Hang Li, Wenbin Jiang, Qun Liu
The recently proposed neural network joint model (NNJM) (Devlin et al., 2014) augments the n-gram target language model with a heuristically chosen source context window, achieving state-of-the-art performance in SMT.
no code implementations • 22 Oct 2014 • Jingbo Shang, Tianqi Chen, Hang Li, Zhengdong Lu, Yong Yu
In this paper, we tackle this challenge with a novel parallel and efficient algorithm for feature-based matrix factorization.
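As background for the entry above, the model being factorized can be sketched with a minimal sequential stochastic-gradient matrix factorization; the parallel, feature-based algorithm the paper contributes is more involved, and the ratings, dimensions, and hyperparameters below are toy assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)
n_users, n_items, k = 4, 5, 2
P = 0.1 * rng.standard_normal((n_users, k))   # user latent factors
Q = 0.1 * rng.standard_normal((n_items, k))   # item latent factors

# (user, item, rating) triples -- a toy set of observed ratings.
ratings = [(0, 0, 5.0), (0, 1, 3.0), (1, 0, 4.0), (2, 3, 2.0), (3, 4, 1.0)]
lr, reg = 0.05, 0.01

for _ in range(200):
    for u, i, r in ratings:
        err = r - P[u] @ Q[i]                   # prediction error on this entry
        P[u] += lr * (err * Q[i] - reg * P[u])  # SGD steps with L2 regularization
        Q[i] += lr * (err * P[u] - reg * Q[i])

mse = float(np.mean([(r - P[u] @ Q[i]) ** 2 for u, i, r in ratings]))
```

Parallelizing these per-rating updates is nontrivial because ratings sharing a user or item row conflict, which is exactly the scheduling problem a parallel factorization algorithm must solve.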
1 code implementation • 29 Aug 2014 • Zongcheng Ji, Zhengdong Lu, Hang Li
Human computer conversation is regarded as one of the most difficult problems in artificial intelligence.
no code implementations • NeurIPS 2013 • Zhengdong Lu, Hang Li
Many machine learning problems can be interpreted as learning for matching two types of objects (e.g., images and captions, users and products, queries and documents).
no code implementations • NeurIPS 2009 • Wei Chen, Tie-Yan Liu, Yanyan Lan, Zhi-Ming Ma, Hang Li
We show that these loss functions are upper bounds of the measure-based ranking errors.
no code implementations • NeurIPS 2009 • Fen Xia, Tie-Yan Liu, Hang Li
This paper aims to analyze whether existing listwise ranking methods are statistically consistent in the top-k setting.
no code implementations • NeurIPS 2008 • Tao Qin, Tie-Yan Liu, Xu-Dong Zhang, De-Sheng Wang, Hang Li
It can naturally represent the content information of objects as well as the relation information between objects, necessary for global ranking.