Search Results for author: Hang Li

Found 147 papers, 57 papers with code

Robust Multi-bit Text Watermark with LLM-based Paraphrasers

1 code implementation4 Dec 2024 Xiaojun Xu, Jinghan Jia, Yuanshun Yao, Yang Liu, Hang Li

To embed our multi-bit watermark, we use two paraphrasers alternately to encode the pre-defined binary code at the sentence level.
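A minimal sketch of that encode/decode loop, assuming two hypothetical paraphrase functions and a placeholder bit decoder in place of the LLM-based paraphrasers and trained classifier used in the paper:

```python
def paraphrase_0(sentence: str) -> str:
    return sentence  # placeholder for the paraphraser trained to carry bit 0

def paraphrase_1(sentence: str) -> str:
    return "In other words, " + sentence.lower()  # placeholder for the bit-1 paraphraser

def embed_watermark(sentences, bits):
    """Encode one bit per sentence by choosing which paraphraser rewrites it."""
    paraphrasers = (paraphrase_0, paraphrase_1)
    return [paraphrasers[b](s) for s, b in zip(sentences, bits)]

def extract_watermark(sentences, decode_bit):
    """Recover the bit string with a sentence-level decoder (a trained classifier in the paper)."""
    return [decode_bit(s) for s in sentences]

watermarked = embed_watermark(["The cat sat on the mat.", "It began to rain."], [1, 0])
print(watermarked)
print(extract_watermark(watermarked, lambda s: int(s.startswith("In other words"))))
```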

Decoder Sentence

FedBiP: Heterogeneous One-Shot Federated Learning with Personalized Latent Diffusion Models

no code implementations7 Oct 2024 Haokun Chen, Hang Li, Yao Zhang, Gengyuan Zhang, Jinhe Bi, Philip Torr, Jindong Gu, Denis Krompass, Volker Tresp

However, directly applying pretrained LDM to heterogeneous OSFL results in significant distribution shifts in synthetic data, leading to performance degradation in classification models trained on such data.

Federated Learning

ErrorRadar: Benchmarking Complex Mathematical Reasoning of Multimodal Large Language Models Via Error Detection

no code implementations6 Oct 2024 Yibo Yan, Shen Wang, Jiahao Huo, Hang Li, Boyan Li, Jiamin Su, Xiong Gao, Yi-Fan Zhang, Tianlong Xu, Zhendong Chu, Aoxiao Zhong, Kun Wang, Hui Xiong, Philip S. Yu, Xuming Hu, Qingsong Wen

As the field of Multimodal Large Language Models (MLLMs) continues to evolve, their potential to revolutionize artificial intelligence is particularly promising, especially in addressing mathematical reasoning tasks.

Benchmarking Mathematical Reasoning

A LLM-Powered Automatic Grading Framework with Human-Level Guidelines Optimization

no code implementations3 Oct 2024 Yucheng Chu, Hang Li, Kaiqi Yang, Harry Shomer, Hui Liu, Yasemin Copur-Gencturk, Jiliang Tang

Open-ended short-answer questions (SAQs) have been widely recognized as a powerful tool for providing deeper insights into learners' responses in the context of learning analytics (LA).

Observe Then Act: Asynchronous Active Vision-Action Model for Robotic Manipulation

no code implementations23 Sep 2024 Guokang Wang, Hang Li, Shuyuan Zhang, Yanhong Liu, Huaping Liu

In real-world scenarios, many robotic manipulation tasks are hindered by occlusions and limited fields of view, posing significant challenges for passive observation-based models that rely on fixed or wrist-mounted cameras.

How the (Tensor-) Brain uses Embeddings and Embodiment to Encode Senses and Decode Symbols

no code implementations19 Sep 2024 Volker Tresp, Hang Li

The tensor brain has two major layers: the representation layer and the index layer.

Sub-graph Based Diffusion Model for Link Prediction

no code implementations13 Sep 2024 Hang Li, Wei Jin, Geri Skenderi, Harry Shomer, Wenzhuo Tang, Wenqi Fan, Jiliang Tang

In particular, we treat link prediction between a pair of nodes as a conditional likelihood estimation of its enclosing sub-graph.
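A rough illustration of the conditioning step, under the assumption that the enclosing sub-graph is the union of the two endpoints' k-hop neighbourhoods; the diffusion-based likelihood model itself is not shown:

```python
import networkx as nx

def enclosing_subgraph(G, u, v, num_hops=2):
    """Union of the k-hop neighbourhoods of u and v, i.e. the sub-graph on which
    the conditional likelihood of the link (u, v) would be estimated (sketch only)."""
    nodes = set(nx.ego_graph(G, u, radius=num_hops)) | set(nx.ego_graph(G, v, radius=num_hops))
    return G.subgraph(nodes).copy()

G = nx.karate_club_graph()
sub = enclosing_subgraph(G, 0, 33)
print(sub.number_of_nodes(), sub.number_of_edges())
```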

Denoising Link Prediction

Knowledge Tagging with Large Language Model based Multi-Agent System

no code implementations12 Sep 2024 Hang Li, Tianlong Xu, Ethan Chang, Qingsong Wen

Knowledge tagging for questions is vital in modern intelligent educational applications, including learning progress diagnosis, practice question recommendations, and course content organization.

Language Modelling Large Language Model +1

Towards Achieving Human Parity on End-to-end Simultaneous Speech Translation via LLM Agent

1 code implementation31 Jul 2024 Shanbo Cheng, Zhichao Huang, Tom Ko, Hang Li, Ningxin Peng, Lu Xu, Qini Zhang

Aligned with professional human interpreters, we evaluate CLASI with a better human evaluation metric, valid information proportion (VIP), which measures the amount of information that can be successfully conveyed to the listeners.
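The counting behind such a proportion-style metric is simple; the sketch below only shows the ratio and assumes the information units and their validity judgements come from the human evaluation protocol described in the paper:

```python
def valid_information_proportion(conveyed_units, total_units):
    """VIP as a fraction: information units judged successfully conveyed to the
    listener, divided by all information units in the source (counting step only)."""
    if total_units == 0:
        return 0.0
    return conveyed_units / total_units

print(valid_information_proportion(conveyed_units=41, total_units=50))  # 0.82
```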

Translation valid

Efficient Sensing Parameter Estimation with Direct Clutter Mitigation in Perceptive Mobile Networks

no code implementations24 Jul 2024 Hang Li, Hongming Yang, Qinghua Guo, J. Andrew Zhang, Yang Xiang, Yashan Pang

In this work, we investigate sensing parameter estimation in the presence of clutter in perceptive mobile networks (PMNs) that integrate radar sensing into mobile communications.

Transformer for Multitemporal Hyperspectral Image Unmixing

no code implementations15 Jul 2024 Hang Li, Qiankun Dong, Xueshuo Xie, Xia Xu, Tao Li, Zhenwei Shi

To effectively perform multitemporal hyperspectral image unmixing, we introduce two key modules: the Global Awareness Module (GAM) and the Change Enhancement Module (CEM).

Semantic Feature Division Multiple Access for Multi-user Digital Interference Networks

no code implementations11 Jul 2024 Shuai Ma, Chuanhui Zhang, Bin Shen, Youlong Wu, Hang Li, Shiyin Li, Guangming Shi, Naofal Al-Dhahir

To address these challenges, in this paper, we propose a novel discrete semantic feature division multiple access (SFDMA) paradigm for multi-user digital interference networks.

Image Reconstruction

Knowledge Tagging System on Math Questions via LLMs with Flexible Demonstration Retriever

no code implementations19 Jun 2024 Hang Li, Tianlong Xu, Jiliang Tang, Qingsong Wen

Knowledge tagging for questions plays a crucial role in contemporary intelligent educational applications, including learning progress diagnosis, practice question recommendations, and course content organization.

Math Semantic Similarity +1

Toward Optimal LLM Alignments Using Two-Player Games

1 code implementation16 Jun 2024 Rui Zheng, Hongyi Guo, Zhihan Liu, Xiaoying Zhang, Yuanshun Yao, Xiaojun Xu, Zhaoran Wang, Zhiheng Xi, Tao Gui, Qi Zhang, Xuanjing Huang, Hang Li, Yang Liu

We theoretically demonstrate that this iterative reinforcement learning optimization converges to a Nash Equilibrium for the game induced by the agents.

reinforcement-learning Reinforcement Learning

Evaluating Uncertainty-based Failure Detection for Closed-Loop LLM Planners

no code implementations1 Jun 2024 Zhi Zheng, Qian Feng, Hang Li, Alois Knoll, Jianxiang Feng

As a general-purpose reasoning machine, LLMs or Multimodal Large Language Models (MLLMs) are promising for detecting failures.

AGILE: A Novel Reinforcement Learning Framework of LLM Agents

1 code implementation23 May 2024 Peiyuan Feng, Yichen He, Guanhua Huang, Yuan Lin, Hanchong Zhang, Yuchen Zhang, Hang Li

Our ablation study highlights the indispensability of memory, tools, consultation, reflection, and reinforcement learning in achieving the agent's strong performance.

Question Answering reinforcement-learning +2

Large Language Models for Education: A Survey and Outlook

no code implementations26 Mar 2024 Shen Wang, Tianlong Xu, Hang Li, Chaoli Zhang, Joleen Liang, Jiliang Tang, Philip S. Yu, Qingsong Wen

The advent of Large Language Models (LLMs) has brought in a new era of possibilities in the realm of education.

Survey

Automate Knowledge Concept Tagging on Math Questions with LLMs

no code implementations26 Mar 2024 Hang Li, Tianlong Xu, Jiliang Tang, Qingsong Wen

Knowledge concept tagging for questions plays a crucial role in contemporary intelligent educational applications, including learning progress diagnosis, practice question recommendations, and course content organization.

Few-Shot Learning Math

RadioGAT: A Joint Model-based and Data-driven Framework for Multi-band Radiomap Reconstruction via Graph Attention Networks

no code implementations25 Mar 2024 Xiaojie Li, Songyang Zhang, Hang Li, Xiaoyang Li, Lexi Xu, Haigao Xu, Hui Mei, Guangxu Zhu, Nan Qi, Ming Xiao

Multi-band radiomap reconstruction (MB-RMR) is a key component in wireless communications for tasks such as spectrum management and network planning.

Graph Attention

Content Knowledge Identification with Multi-Agent Large Language Models (LLMs)

no code implementations22 Mar 2024 Kaiqi Yang, Yucheng Chu, Taylor Darwin, Ahreum Han, Hang Li, Hongzhi Wen, Yasemin Copur-Gencturk, Jiliang Tang, Hui Liu

Teachers' mathematical content knowledge (CK) is of vital importance and a pressing need in teacher professional development (PD) programs.

Diversity

Driving Animatronic Robot Facial Expression From Speech

no code implementations19 Mar 2024 Boren Li, Hang Li, Hangxin Liu

Animatronic robots hold the promise of enabling natural human-robot interaction through lifelike facial expressions.

Motion Generation Motion Synthesis

Finding the Missing Data: A BERT-inspired Approach Against Package Loss in Wireless Sensing

1 code implementation19 Mar 2024 Zijian Zhao, TingWei Chen, Fanyi Meng, Hang Li, Xiaoyang Li, Guangxu Zhu

Despite the development of various deep learning methods for Wi-Fi sensing, package loss often results in noncontinuous estimation of the Channel State Information (CSI), which negatively impacts the performance of the learning models.

Action Classification Deep Learning +1

Large Language Models as Agents in Two-Player Games

no code implementations12 Feb 2024 Yang Liu, Peng Sun, Hang Li

By formally defining the training processes of large language models (LLMs), which usually encompass pre-training, supervised fine-tuning, and reinforcement learning with human feedback, within a single and unified machine learning paradigm, we can glean pivotal insights for advancing LLM technologies.

Position reinforcement-learning +1

Bringing Generative AI to Adaptive Learning in Education

no code implementations2 Feb 2024 Hang Li, Tianlong Xu, Chaoli Zhang, Eason Chen, Jing Liang, Xing Fan, Haoyang Li, Jiliang Tang, Qingsong Wen

The recent surge in generative AI technologies, such as large language models and diffusion models, has boosted the development of AI applications in various domains, including science, finance, and education.

Position

Boximator: Generating Rich and Controllable Motions for Video Synthesis

1 code implementation2 Feb 2024 Jiawei Wang, Yuchen Zhang, Jiaxin Zou, Yan Zeng, Guoqiang Wei, Liping Yuan, Hang Li

Its robust motion controllability is validated by drastic increases in the bounding box alignment metric.

TPRF: A Transformer-based Pseudo-Relevance Feedback Model for Efficient and Effective Retrieval

no code implementations24 Jan 2024 Chuting Yu, Hang Li, Ahmed Mourad, Bevan Koopman, Guido Zuccon

This paper considers Pseudo-Relevance Feedback (PRF) methods for dense retrievers in a resource-constrained environment such as that of cheap cloud instances or embedded systems (e.g., smartphones and smartwatches), where memory and CPU are limited and GPUs are not present.

Retrieval

ReFT: Reasoning with Reinforced Fine-Tuning

1 code implementation17 Jan 2024 Trung Quoc Luong, Xinbo Zhang, Zhanming Jie, Peng Sun, Xiaoran Jin, Hang Li

ReFT first warms up the model with SFT, and then employs online reinforcement learning, specifically the PPO algorithm in this paper, to further fine-tune the model, where an abundance of reasoning paths are automatically sampled given the question and the rewards are naturally derived from the ground-truth answers.
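A hedged sketch of how such a reward can be derived from the ground-truth answer for a sampled reasoning path; the answer-extraction marker used here is an assumption, not the paper's exact rule:

```python
def answer_reward(sampled_solution: str, gold_answer: str) -> float:
    """Return 1.0 if the final answer of a sampled reasoning path matches the
    ground truth, else 0.0 (simplified; the paper also handles partial/invalid cases)."""
    marker = "The answer is"  # assumed formatting convention for the final answer
    if marker not in sampled_solution:
        return 0.0
    predicted = sampled_solution.split(marker)[-1].strip().rstrip(".")
    return 1.0 if predicted == gold_answer.strip() else 0.0

print(answer_reward("... so 3 + 4 = 7. The answer is 7.", "7"))  # 1.0
```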

GSM8K Math +1

Speech Translation with Large Language Models: An Industrial Practice

no code implementations21 Dec 2023 Zhichao Huang, Rong Ye, Tom Ko, Qianqian Dong, Shanbo Cheng, Mingxuan Wang, Hang Li

Given the great success of large language models (LLMs) across various tasks, in this paper, we introduce LLM-ST, a novel and effective speech translation model constructed upon a pre-trained LLM.

Language Modelling Large Language Model +1

Unleashing Large-Scale Video Generative Pre-training for Visual Robot Manipulation

3 code implementations20 Dec 2023 Hongtao Wu, Ya Jing, Chilam Cheang, Guangzeng Chen, Jiafeng Xu, Xinghang Li, Minghuan Liu, Hang Li, Tao Kong

In this paper, we extend the scope of this effectiveness by showing that visual robot manipulation can significantly benefit from large-scale video generative pre-training.

Ranked #4 on Zero-shot Generalization on CALVIN (using extra training data)

Robot Manipulation Zero-shot Generalization

Feasibility Conditions for Mobile LiFi

no code implementations20 Dec 2023 Shuai Ma, Haihong Sheng, Junchang Sun, Hang Li, Xiaodong Liu, Chen Qiu, Majid Safari, Naofal Al-Dhahir, Shiyin Li

Then, we derive the expression of the LiFi transmission rate based on M-ary pulse amplitude modulation (M-PAM).

Evaluation of Infrastructure-based Warning System on Driving Behaviors-A Roundabout Study

no code implementations6 Dec 2023 Cong Zhang, Chi Tian, Tianfang Han, Hang Li, Yiheng Feng, Yunfeng Chen, Robert W. Proctor, Jiansong Zhang

A real-world roundabout in Ann Arbor, Michigan, was built in the co-simulation platform as the study area, and the merging scenarios were investigated.

Edge-computing Navigate

Make Pixels Dance: High-Dynamic Video Generation

no code implementations CVPR 2024 Yan Zeng, Guoqiang Wei, Jiani Zheng, Jiaxin Zou, Yang Wei, Yuchen Zhang, Hang Li

Creating high-dynamic videos such as motion-rich actions and sophisticated visual effects poses a significant challenge in the field of artificial intelligence.

Text-to-Video Generation Video Generation

Enhancing Multimodal Compositional Reasoning of Visual Language Models with Generative Negative Mining

no code implementations7 Nov 2023 Ugur Sahin, Hang Li, Qadeer Khan, Daniel Cremers, Volker Tresp

Leveraging these generative hard negative samples, we significantly enhance VLMs' performance in tasks involving multimodal compositional reasoning.

Vision-Language Foundation Models as Effective Robot Imitators

no code implementations2 Nov 2023 Xinghang Li, Minghuan Liu, Hanbo Zhang, Cunjun Yu, Jie Xu, Hongtao Wu, Chilam Cheang, Ya Jing, Weinan Zhang, Huaping Liu, Hang Li, Tao Kong

We believe RoboFlamingo has the potential to be a cost-effective and easy-to-use solution for robotics manipulation, empowering everyone with the ability to fine-tune their own robotics policy.

Imitation Learning

Deep Concept Removal

no code implementations9 Oct 2023 Yegor Klochkov, Jean-Francois Ton, Ruocheng Guo, Yang Liu, Hang Li

We address the problem of concept removal in deep neural networks, aiming to learn representations that do not encode certain specified concepts (e.g., gender, etc.)

Attribute Out-of-Distribution Generalization

Graph-level Representation Learning with Joint-Embedding Predictive Architectures

1 code implementation27 Sep 2023 Geri Skenderi, Hang Li, Jiliang Tang, Marco Cristani

They aim to learn an energy-based model by predicting the latent representation of a target signal y from the latent representation of a context signal x. JEPAs bypass the need for negative and positive samples, traditionally required by contrastive learning, while avoiding the overfitting issues associated with generative pretraining.
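A toy numpy sketch of that predictive objective, with random linear maps standing in for the context encoder, target encoder, and predictor:

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(x, W):
    return np.tanh(x @ W)   # stand-in encoder

def predict(z_context, P):
    return z_context @ P    # stand-in predictor

x_context = rng.normal(size=(8, 16))   # context signal x
x_target = rng.normal(size=(8, 16))    # target signal y
W_ctx, W_tgt = rng.normal(size=(16, 4)), rng.normal(size=(16, 4))
P = rng.normal(size=(4, 4))

z_pred = predict(encode(x_context, W_ctx), P)   # predict y's latent from x's latent
z_tgt = encode(x_target, W_tgt)
loss = np.mean((z_pred - z_tgt) ** 2)           # joint-embedding predictive loss
print(loss)
```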

Contrastive Learning Data Augmentation +3

Design of Chain-of-Thought in Math Problem Solving

1 code implementation20 Sep 2023 Zhanming Jie, Trung Quoc Luong, Xinbo Zhang, Xiaoran Jin, Hang Li

We also find that Python is a better choice of language than Wolfram for program CoTs.

Diversity GSM8K +1

Trustworthy LLMs: a Survey and Guideline for Evaluating Large Language Models' Alignment

1 code implementation10 Aug 2023 Yang Liu, Yuanshun Yao, Jean-Francois Ton, Xiaoying Zhang, Ruocheng Guo, Hao Cheng, Yegor Klochkov, Muhammad Faaiz Taufiq, Hang Li

However, a major challenge faced by practitioners is the lack of clear guidance on evaluating whether LLM outputs align with social norms, values, and regulations.

Fairness Models Alignment

Exploring the Potential of Large Language Models (LLMs) in Learning on Graphs

2 code implementations7 Jul 2023 Zhikai Chen, Haitao Mao, Hang Li, Wei Jin, Hongzhi Wen, Xiaochi Wei, Shuaiqiang Wang, Dawei Yin, Wenqi Fan, Hui Liu, Jiliang Tang

The most popular pipeline for learning on graphs with textual node attributes primarily relies on Graph Neural Networks (GNNs), and utilizes shallow text embedding as initial node representations, which has limitations in general knowledge and profound semantic understanding.

General Knowledge Node Classification

Inference-time Stochastic Ranking with Risk Control

no code implementations12 Jun 2023 Ruocheng Guo, Jean-François Ton, Yang Liu, Hang Li

Widely used deterministic LTR models can lead to unfair exposure distribution, especially when items with the same relevance receive slightly different ranking scores.

Fairness Learning-To-Rank

Sensing Aided Uplink Transmission in OTFS ISAC with Joint Parameter Association, Channel Estimation and Signal Detection

no code implementations19 May 2023 Xi Yang, Hang Li, Qinghua Guo, J. Andrew Zhang, Xiaojing Huang, Zhiqun Cheng

In this work, we study sensing-aided uplink transmission in an integrated sensing and communication (ISAC) vehicular network with the use of orthogonal time frequency space (OTFS) modulation.

Features Disentangled Semantic Broadcast Communication Networks

no code implementations3 Mar 2023 Shuai Ma, Weining Qiao, Youlong Wu, Hang Li, Guangming Shi, Dahua Gao, Yuanming Shi, Shiyin Li, Naofal Al-Dhahir

Instead of broadcasting all extracted features, the semantic encoder extracts the disentangled semantic features, and then only the users' intended semantic features are selected for broadcasting, which can further improve the transmission efficiency.

feature selection

Task-oriented Explainable Semantic Communications

no code implementations27 Feb 2023 Shuai Ma, Weining Qiao, Youlong Wu, Hang Li, Guangming Shi, Dahua Gao, Yuanming Shi, Shiyin Li, Naofal Al-Dhahir

Furthermore, based on the $\beta$-variational autoencoder ($\beta$-VAE), we propose a practical explainable semantic communication system design, which simultaneously achieves semantic feature selection and is robust against semantic channel noise.

Semantic Communication

Generative Diffusion Models on Graphs: Methods and Applications

1 code implementation6 Feb 2023 Chengyi Liu, Wenqi Fan, Yunqing Liu, Jiatong Li, Hang Li, Hui Liu, Jiliang Tang, Qing Li

Given the great success of diffusion models in image generation, increasing efforts have been made to leverage these techniques to advance graph generation in recent years.

Denoising Graph Generation +3

Kainate receptor modulation by NETO2

no code implementations2 Feb 2023 Lingli He, Jiahui Sun, Yiwei Gao, Bin Li, Yuhang Wang, Yanli Dong, Weidong An, Hang Li, Bei Yang, Yuhan Ge, Xuejun Cai Zhang, Yun Stone Shi, Yan Zhao

Glutamate-gated kainate receptors (KARs) are ubiquitous in the central nervous system of vertebrates, mediate synaptic transmission on post-synapse, and modulate transmitter release on pre-synapse.

Disentangled Representation for Diversified Recommendations

1 code implementation13 Jan 2023 Xiaoying Zhang, Hongning Wang, Hang Li

This calls for a fine-grained understanding of a user's preferences over items, where one needs to recognize whether the user's choice is driven by the quality of the item itself or by the pre-selected attributes of the item.

Diversity

Do DALL-E and Flamingo Understand Each Other?

no code implementations ICCV 2023 Hang Li, Jindong Gu, Rajat Koner, Sahand Sharifzadeh, Volker Tresp

To study this question, we propose a reconstruction task where Flamingo generates a description for a given image and DALL-E uses this description as input to synthesize a new image.

Image Captioning Image Reconstruction +4

AgAsk: An Agent to Help Answer Farmer's Questions From Scientific Documents

1 code implementation21 Dec 2022 Bevan Koopman, Ahmed Mourad, Hang Li, Anton van der Vegt, Shengyao Zhuang, Simon Gibson, Yash Dang, David Lawrence, Guido Zuccon

On the basis of these needs we release an information retrieval test collection comprising real questions, a large collection of scientific documents split in passages, and ground truth relevance assessments indicating which passages are relevant to each question.

Information Retrieval Retrieval

Joint Beamforming and PD Orientation Design for Mobile Visible Light Communications

no code implementations21 Dec 2022 Shuai Ma, Jing Wang, Chun Du, Hang Li, Xiaodong Liu, Youlong Wu, Naofal Al-Dhahir, Shiyin Li

To address this challenge, we propose an alternating optimization algorithm to obtain the transmit beamforming and the PD orientation.

MeSH Suggester: A Library and System for MeSH Term Suggestion for Systematic Review Boolean Query Construction

1 code implementation18 Dec 2022 Shuai Wang, Hang Li, Guido Zuccon

One challenge to creating an effective systematic review Boolean query is the selection of effective MeSH Terms to include in the query.

X$^2$-VLM: All-In-One Pre-trained Model For Vision-Language Tasks

2 code implementations22 Nov 2022 Yan Zeng, Xinsong Zhang, Hang Li, Jiawei Wang, Jipeng Zhang, Wangchunshu Zhou

Vision language pre-training aims to learn alignments between vision and language from a large amount of data.

 Ranked #1 on Cross-Modal Retrieval on Flickr30k (using extra training data)

Cross-Modal Retrieval Image Captioning +7

Learning to Counterfactually Explain Recommendations

no code implementations17 Nov 2022 Yuanshun Yao, Chong Wang, Hang Li

The key idea is to train a surrogate model to learn the effect of removing a subset of user history on the recommendation.
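A small sketch of the surrogate idea, with a hypothetical black-box recommender score and random removal masks; the real method fits the surrogate to the behaviour of the actual recommendation model:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
history_len = 10

def recommender_score(mask):
    """Hypothetical black-box score of the recommender when only the history
    items with mask == 1 are kept."""
    weights = np.linspace(0.0, 1.0, history_len)
    return float(mask @ weights)

# Sample random keep/remove masks over the user history and fit a surrogate
# that explains how removing items changes the recommendation score.
masks = rng.integers(0, 2, size=(200, history_len))
scores = np.array([recommender_score(m) for m in masks])
surrogate = LinearRegression().fit(masks, scores)

# History items with the largest learned effect are candidate counterfactual explanations.
print(np.argsort(-surrogate.coef_)[:3])
```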

counterfactual Recommendation Systems +1

Weak Proxies are Sufficient and Preferable for Fairness with Missing Sensitive Attributes

1 code implementation6 Oct 2022 Zhaowei Zhu, Yuanshun Yao, Jiankai Sun, Hang Li, Yang Liu

Our theoretical analyses show that directly using proxy models can give a false sense of (un)fairness.

Fairness

Forgetting Fast in Recommender Systems

no code implementations14 Aug 2022 Wenyan Liu, Juncheng Wan, Xiaoling Wang, Weinan Zhang, Dell Zhang, Hang Li

In this paper, we investigate fast machine unlearning techniques for recommender systems that can remove the effect of a small amount of training data from the recommendation model without incurring the full cost of retraining.

Machine Unlearning Recommendation Systems

Biologically Inspired Neural Path Finding

1 code implementation13 Jun 2022 Hang Li, Qadeer Khan, Volker Tresp, Daniel Cremers

The human brain can be considered to be a graphical structure comprising tens of billions of biological neurons connected by synapses.

On Calibration of Graph Neural Networks for Node Classification

1 code implementation3 Jun 2022 Tong Liu, Yushan Liu, Marcel Hildebrandt, Mitchell Joblin, Hang Li, Volker Tresp

We investigate the calibration of graph neural networks for node classification, study the effect of existing post-processing calibration methods, and analyze the influence of model capacity, graph density, and a new loss function on calibration.

Classification Link Prediction +1

Directed Acyclic Transformer for Non-Autoregressive Machine Translation

1 code implementation16 May 2022 Fei Huang, Hao Zhou, Yang Liu, Hang Li, Minlie Huang

Non-autoregressive Transformers (NATs) significantly reduce the decoding latency by generating all tokens in parallel.

Knowledge Distillation Machine Translation +1

How does Feedback Signal Quality Impact Effectiveness of Pseudo Relevance Feedback for Passage Retrieval?

no code implementations12 May 2022 Hang Li, Ahmed Mourad, Bevan Koopman, Guido Zuccon

Pseudo-Relevance Feedback (PRF) assumes that the top results retrieved by a first-stage ranker are relevant to the original query and uses them to improve the query representation for a second round of retrieval.

Passage Retrieval Retrieval

To Interpolate or not to Interpolate: PRF, Dense and Sparse Retrievers

no code implementations30 Apr 2022 Hang Li, Shuai Wang, Shengyao Zhuang, Ahmed Mourad, Xueguang Ma, Jimmy Lin, Guido Zuccon

In this paper we consider the problem of combining the relevance signals from sparse and dense retrievers in the context of Pseudo Relevance Feedback (PRF).
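One simple way to combine the two signals is linear interpolation of min-max normalised scores, sketched below; the interpolation weight and the normalisation are assumptions here, and the paper compares several such variants:

```python
def interpolate(sparse_scores, dense_scores, alpha=0.5):
    """Fuse per-document relevance: alpha * sparse + (1 - alpha) * dense, after
    min-max normalising each ranker's scores (one common convention)."""
    def norm(scores):
        lo, hi = min(scores.values()), max(scores.values())
        return {d: (s - lo) / (hi - lo) if hi > lo else 0.0 for d, s in scores.items()}
    s, d = norm(sparse_scores), norm(dense_scores)
    docs = set(s) | set(d)
    return {doc: alpha * s.get(doc, 0.0) + (1 - alpha) * d.get(doc, 0.0) for doc in docs}

fused = interpolate({"d1": 12.3, "d2": 7.1}, {"d1": 0.62, "d2": 0.80})
print(sorted(fused, key=fused.get, reverse=True))
```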

Information Retrieval Language Modelling +1

Self-Supervised Audio-and-Text Pre-training with Extremely Low-Resource Parallel Data

1 code implementation10 Apr 2022 Yu Kang, Tianqiao Liu, Hang Li, Yang Hao, Wenbiao Ding

Our pre-training framework consists of the following components: (1) Intra-modal Denoising Auto-Encoding (IDAE), which is able to reconstruct input text (audio) representations from a noisy version of itself.

Denoising

Implicit Feedback for Dense Passage Retrieval: A Counterfactual Approach

1 code implementation1 Apr 2022 Shengyao Zhuang, Hang Li, Guido Zuccon

We then exploit such historic implicit interactions to improve the effectiveness of a DR. A key challenge that we study is the effect that biases in the click signal, such as position bias, have on the DRs.

counterfactual Passage Retrieval +2

A Neural-Symbolic Approach to Natural Language Understanding

2 code implementations20 Mar 2022 Zhixuan Liu, ZiHao Wang, Yuan Lin, Hang Li

Deep neural networks, empowered by pre-trained language models, have achieved remarkable results in natural language understanding (NLU) tasks.

Logical Reasoning Natural Language Inference +2

Bridge the Gap between Supervised and Unsupervised Learning for Fine-Grained Classification

no code implementations1 Mar 2022 Jiabao Wang, Yang Li, Xiu-Shen Wei, Hang Li, Zhuang Miao, Rui Zhang

Unsupervised learning technology has caught up with or even surpassed supervised learning technology in general object classification (GOC) and person re-identification (re-ID).

Clustering Contrastive Learning +3

Improving Query Representations for Dense Retrieval with Pseudo Relevance Feedback: A Reproducibility Study

1 code implementation13 Dec 2021 Hang Li, Shengyao Zhuang, Ahmed Mourad, Xueguang Ma, Jimmy Lin, Guido Zuccon

Finally, we contribute a study of the generalisability of the ANCE-PRF method when dense retrievers other than ANCE are used for the first round of retrieval and for encoding the PRF signal.

Retrieval

Disentangled Contrastive Learning on Graphs

no code implementations NeurIPS 2021 Haoyang Li, Xin Wang, Ziwei Zhang, Zehuan Yuan, Hang Li, Wenwu Zhu

Then we propose a novel factor-wise discrimination objective in a contrastive learning manner, which can force the factorized representations to independently reflect the expressive information from different latent factors.

Contrastive Learning Self-Supervised Learning

Multi-Grained Vision Language Pre-Training: Aligning Texts with Visual Concepts

1 code implementation16 Nov 2021 Yan Zeng, Xinsong Zhang, Hang Li

Most existing methods in vision language pre-training rely on object-centric features extracted through object detection and make fine-grained alignments between the extracted features and texts.

 Ranked #1 on Image Retrieval on Flickr30K 1K test (using extra training data)

Cross-Modal Retrieval Image Captioning +9

The Tensor Brain: A Unified Theory of Perception, Memory and Semantic Decoding

1 code implementation27 Sep 2021 Volker Tresp, Sahand Sharifzadeh, Hang Li, Dario Konopatzki, Yunpu Ma

Although memory appears to be about the past, its main purpose is to support the agent in the present and the future.

Decision Making Self-Supervised Learning

Secoco: Self-Correcting Encoding for Neural Machine Translation

no code implementations Findings (EMNLP) 2021 Tao Wang, Chengqi Zhao, Mingxuan Wang, Lei LI, Hang Li, Deyi Xiong

This paper presents Self-correcting Encoding (Secoco), a framework that effectively deals with input noise for robust neural machine translation by introducing self-correcting predictors.

Machine Translation NMT +1

Pseudo Relevance Feedback with Deep Language Models and Dense Retrievers: Successes and Pitfalls

1 code implementation25 Aug 2021 Hang Li, Ahmed Mourad, Shengyao Zhuang, Bevan Koopman, Guido Zuccon

Text-based PRF results show that the use of PRF had a mixed effect on deep rerankers across different datasets.

Retrieval

Multi-Task Learning based Online Dialogic Instruction Detection with Pre-trained Language Models

1 code implementation15 Jul 2021 Yang Hao, Hang Li, Wenbiao Ding, Zhongqin Wu, Jiliang Tang, Rose Luckin, Zitao Liu

In this work, we study computational approaches to detect online dialogic instructions, which are widely used to help students understand learning materials, and build effective study habits.

Multi-Task Learning

An Educational System for Personalized Teacher Recommendation in K-12 Online Classrooms

no code implementations15 Jul 2021 Jiahao Chen, Hang Li, Wenbiao Ding, Zitao Liu

In this paper, we propose a simple yet effective solution to build practical teacher recommender systems for online one-on-one classes.

Diversity Recommendation Systems

A Multimodal Machine Learning Framework for Teacher Vocal Delivery Evaluation

1 code implementation15 Jul 2021 Hang Li, Yu Kang, Yang Hao, Wenbiao Ding, Zhongqin Wu, Zitao Liu

The quality of vocal delivery is one of the key indicators for evaluating teacher enthusiasm, which has been widely accepted to be connected to the overall course qualities.

BIG-bench Machine Learning

Graphhopper: Multi-Hop Scene Graph Reasoning for Visual Question Answering

1 code implementation13 Jul 2021 Rajat Koner, Hang Li, Marcel Hildebrandt, Deepan Das, Volker Tresp, Stephan Günnemann

We conduct an experimental study on the challenging dataset GQA, based on both manually curated and automatically generated scene graphs.

Navigate Question Answering +1

Evaluating Document Coherence Modelling

no code implementations18 Mar 2021 Aili Shen, Meladel Mistica, Bahar Salehi, Hang Li, Timothy Baldwin, Jianzhong Qi

While pretrained language models ("LM") have driven impressive gains over morpho-syntactic and semantic tasks, their ability to model discourse and pragmatic phenomena is less clear.

Intrusion Detection Sentence

AMBERT: A Pre-trained Language Model with Multi-Grained Tokenization

no code implementations Findings (ACL) 2021 Xinsong Zhang, Pengshuai Li, Hang Li

In fact, both fine-grained and coarse-grained tokenizations have advantages and disadvantages for learning of pre-trained language models.

Language Modelling Natural Language Understanding

Superpixel-Guided Label Softening for Medical Image Segmentation

no code implementations17 Jul 2020 Hang Li, Dong Wei, Shilei Cao, Kai Ma, Liansheng Wang, Yefeng Zheng

If a superpixel intersects with the annotation boundary, we consider a high probability of uncertain labeling within this area.

Image Segmentation Medical Image Analysis +3

Scene Graph Reasoning for Visual Question Answering

no code implementations2 Jul 2020 Marcel Hildebrandt, Hang Li, Rajat Koner, Volker Tresp, Stephan Günnemann

We propose a novel method that approaches the task by performing context-driven, sequential reasoning based on the objects and their semantic and spatial relationships present in the scene.

Navigate Question Answering +1

Fact-based Text Editing

1 code implementation ACL 2020 Hayate Iso, chao qiao, Hang Li

We propose a novel text editing task, referred to as "fact-based text editing", in which the goal is to revise a given document to better describe the facts in a knowledge base (e.g., several triples).

Decoder Fact-based Text Editing

Feature Statistics Guided Efficient Filter Pruning

no code implementations21 May 2020 Hang Li, Chen Ma, Wei Xu, Xue Liu

Building compact convolutional neural networks (CNNs) with reliable performance is a critical but challenging task, especially when deploying them in real-world applications.

Diversity

Spelling Error Correction with Soft-Masked BERT

5 code implementations ACL 2020 Shaohua Zhang, Haoran Huang, Jicong Liu, Hang Li

A state-of-the-art method for the task selects a character from a list of candidates for correction (including non-correction) at each position of the sentence on the basis of BERT, the language representation model.
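A stripped-down sketch of that per-position candidate selection, with a toy scoring function standing in for the BERT-based model:

```python
def correct_sentence(chars, candidates_per_pos, score):
    """At each position pick the highest-scoring candidate; the original character
    is always a candidate, so non-correction is possible. `score` is a placeholder
    for the BERT-based scorer."""
    corrected = []
    for i, ch in enumerate(chars):
        candidates = set(candidates_per_pos.get(i, [])) | {ch}
        corrected.append(max(candidates, key=lambda c: score(chars, i, c)))
    return "".join(corrected)

def toy_score(chars, pos, cand):
    # hypothetical stand-in: prefers 'd' at position 3, otherwise keeps the original
    preferred = {3: "d"}
    if preferred.get(pos) == cand:
        return 1.0
    return 0.5 if cand == chars[pos] else 0.0

print(correct_sentence(list("worm"), {3: ["d", "m"]}, toy_score))  # "word"
```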

Chinese Spelling Error Correction Language Modelling +2

Siamese Neural Networks for Class Activity Detection

no code implementations15 May 2020 Hang Li, Zhiwei Wang, Jiliang Tang, Wenbiao Ding, Zitao Liu

Classroom activity detection (CAD) aims at accurately recognizing speaker roles (either teacher or student) in classrooms.

Action Detection Activity Detection

Identifying At-Risk K-12 Students in Multimodal Online Environments: A Machine Learning Approach

no code implementations21 Mar 2020 Hang Li, Wenbiao Ding, Zitao Liu

We conduct a wide range of offline and online experiments to demonstrate the effectiveness of our approach.

BIG-bench Machine Learning

Multimodal Learning For Classroom Activity Detection

no code implementations22 Oct 2019 Hang Li, Yu Kang, Wenbiao Ding, Song Yang, Songfan Yang, Gale Yan Huang, Zitao Liu

The experimental results demonstrate the benefits of our approach to learning attention-based neural networks from classroom data with different modalities, and show that our approach is able to outperform state-of-the-art baselines in terms of various evaluation metrics.

Action Detection Activity Detection

A Multimodal Alerting System for Online Class Quality Assurance

no code implementations1 Sep 2019 Jiahao Chen, Hang Li, Wenxin Wang, Wenbiao Ding, Gale Yan Huang, Zitao Liu

To warn the unqualified instructors and ensure the overall education quality, we build a monitoring and alerting system by utilizing multimodal information from the online environment.

Multifunctional Metasurface Design with a Generative Adversarial Network

no code implementations13 Aug 2019 Sensong An, Bowen Zheng, Hong Tang, Mikhail Y. Shalaginov, Li Zhou, Hang Li, Tian Gu, Juejun Hu, Clayton Fowler, Hualiang Zhang

Metasurfaces have enabled precise electromagnetic wave manipulation with strong potential to obtain unprecedented functionalities and multifunctional behavior in flat optical devices.

Generative Adversarial Network

Conversational Contextual Bandit: Algorithm and Application

no code implementations4 Jun 2019 Xiaoying Zhang, Hong Xie, Hang Li, John C. S. Lui

Here, a key-term can relate to a subset of arms, for example, a category of articles in news recommendation.

News Recommendation Recommendation Systems

Word Embedding based Edit Distance

no code implementations25 Oct 2018 Yilin Niu, chao qiao, Hang Li, Minlie Huang

Text similarity calculation is a fundamental problem in natural language processing and related fields.

text similarity

Unbiased LambdaMART: An Unbiased Pairwise Learning-to-Rank Algorithm

1 code implementation16 Sep 2018 Ziniu Hu, Yang Wang, Qu Peng, Hang Li

Although click data is widely used in search systems in practice, so far the inherent bias, most notably position bias, has prevented it from being used in training of a ranker for search, i.e., learning-to-rank.
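A common way to counteract position bias is to weight clicks by inverse examination propensities, sketched below with assumed propensities; Unbiased LambdaMART itself estimates the propensities jointly with the ranker:

```python
def debiased_click_weights(clicks, propensities):
    """Weight each click by 1 / P(examined at its position) so that clicks on
    low-ranked results count more, offsetting position bias (IPW sketch only)."""
    return [c / p for c, p in zip(clicks, propensities)]

clicks = [1, 0, 1, 0, 0]                   # observed clicks at ranks 1..5
propensities = [1.0, 0.7, 0.45, 0.3, 0.2]  # assumed examination probabilities per rank
print(debiased_click_weights(clicks, propensities))  # [1.0, 0.0, 2.22..., 0.0, 0.0]
```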

Learning-To-Rank Position

Supervised and Semi-Supervised Deep Neural Networks for CSI-Based Authentication

no code implementations25 Jul 2018 Qian Wang, Hang Li, Zhi Chen, Dou Zhao, Shuang Ye, Jiansheng Cai

In addition, we propose to use the convolutional recurrent neural network (CRNN), a combination of the CNN and the RNN, to learn local and contextual information in CSI for user authentication.

Meta-SGD: Learning to Learn Quickly for Few-Shot Learning

9 code implementations31 Jul 2017 Zhenguo Li, Fengwei Zhou, Fei Chen, Hang Li

In contrast, meta-learning learns from many related tasks a meta-learner that can learn a new task more accurately and faster with fewer examples, where the choice of meta-learners is crucial.

Few-Shot Learning reinforcement-learning +2

Chunk-Based Bi-Scale Decoder for Neural Machine Translation

1 code implementation ACL 2017 Hao Zhou, Zhaopeng Tu, Shu-Jian Huang, Xiaohua Liu, Hang Li, Jia-Jun Chen

In typical neural machine translation (NMT), the decoder generates a sentence word by word, packing all linguistic granularities in the same time-scale of the RNN.

Decoder Machine Translation +3

Coupling Distributed and Symbolic Execution for Natural Language Queries

no code implementations ICML 2017 Lili Mou, Zhengdong Lu, Hang Li, Zhi Jin

Building neural networks to query a knowledge base (a table) with natural language is an emerging research topic in deep learning.

Natural Language Queries

Neural Machine Translation with Reconstruction

1 code implementation7 Nov 2016 Zhaopeng Tu, Yang Liu, Lifeng Shang, Xiaohua Liu, Hang Li

Although end-to-end Neural Machine Translation (NMT) has achieved remarkable progress in the past two years, it suffers from a major drawback: translations generated by NMT systems often lack adequacy.

Decoder Machine Translation +3

Interactive Attention for Neural Machine Translation

no code implementations COLING 2016 Fandong Meng, Zhengdong Lu, Hang Li, Qun Liu

Conventional attention-based Neural Machine Translation (NMT) conducts dynamic alignment in generating the target sentence.

Decoder Machine Translation +3

Neural Machine Translation Advised by Statistical Machine Translation

no code implementations17 Oct 2016 Xing Wang, Zhengdong Lu, Zhaopeng Tu, Hang Li, Deyi Xiong, Min Zhang

Neural Machine Translation (NMT) is a new approach to machine translation that has made great progress in recent years.

Machine Translation NMT +1

Context Gates for Neural Machine Translation

2 code implementations TACL 2017 Zhaopeng Tu, Yang Liu, Zhengdong Lu, Xiaohua Liu, Hang Li

In neural machine translation (NMT), generation of a target word depends on both source and target contexts.

Machine Translation NMT +1

Memory-enhanced Decoder for Neural Machine Translation

no code implementations EMNLP 2016 Mingxuan Wang, Zhengdong Lu, Hang Li, Qun Liu

We propose to enhance the RNN decoder in a neural machine translator (NMT) with external memory, as a natural but powerful extension to the state in the decoding RNN.

Decoder Machine Translation +3

Neural Machine Translation with External Phrase Memory

no code implementations6 Jun 2016 Yaohua Tang, Fandong Meng, Zhengdong Lu, Hang Li, Philip L. H. Yu

In this paper, we propose phraseNet, a neural machine translator with a phrase memory which stores phrase pairs in symbolic form, mined from corpus or specified by human experts.

Decoder Machine Translation +2

A Novel Approach to Dropped Pronoun Translation

no code implementations NAACL 2016 Long-Yue Wang, Zhaopeng Tu, Xiaojun Zhang, Hang Li, Andy Way, Qun Liu

Finally, we integrate the above outputs into our translation system to recall missing pronouns by both extracting rules from the DP-labelled training data and translating the DP-generated input sentences.

Machine Translation Translation

Incorporating Copying Mechanism in Sequence-to-Sequence Learning

7 code implementations ACL 2016 Jiatao Gu, Zhengdong Lu, Hang Li, Victor O. K. Li

CopyNet can nicely integrate the regular way of word generation in the decoder with the new copying mechanism, which can choose sub-sequences in the input sequence and put them at proper places in the output sequence.
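A numpy sketch of the generate/copy mixture such a copying mechanism produces at each decoding step; the gate value and the two distributions are made up here, whereas CopyNet computes them from the decoder state:

```python
import numpy as np

def mix_generate_and_copy(p_gen_vocab, p_copy_source, source_tokens, vocab, copy_gate):
    """Final word distribution = (1 - gate) * generate + gate * copy, with the copy
    mass scattered onto the source tokens' vocabulary entries (CopyNet-style sketch)."""
    p = (1.0 - copy_gate) * np.asarray(p_gen_vocab, dtype=float)
    for tok, mass in zip(source_tokens, p_copy_source):
        p[vocab[tok]] += copy_gate * mass
    return p / p.sum()

vocab = {"<unk>": 0, "hello": 1, "tony": 2, "there": 3}
p = mix_generate_and_copy(
    p_gen_vocab=[0.1, 0.6, 0.05, 0.25],   # decoder softmax over the vocabulary
    p_copy_source=[0.2, 0.8],             # attention-like scores over source tokens
    source_tokens=["hello", "tony"],      # copying lets rare words like "tony" through
    vocab=vocab,
    copy_gate=0.5,
)
print(p)
```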

Decoder Text Summarization

Modeling Coverage for Neural Machine Translation

3 code implementations ACL 2016 Zhaopeng Tu, Zhengdong Lu, Yang Liu, Xiaohua Liu, Hang Li

Attention mechanism has enhanced state-of-the-art Neural Machine Translation (NMT) by jointly learning to align and translate.

Machine Translation NMT +1

Neural Generative Question Answering

1 code implementation WS 2016 Jun Yin, Xin Jiang, Zhengdong Lu, Lifeng Shang, Hang Li, Xiaoming Li

Empirical study shows the proposed model can effectively deal with the variations of questions and answers, and generate right and natural answers by referring to the facts in the knowledge-base.

Decoder Generative Question Answering +1

Neural Enquirer: Learning to Query Tables with Natural Language

no code implementations3 Dec 2015 Pengcheng Yin, Zhengdong Lu, Hang Li, Ben Kao

Neural Enquirer can be trained with gradient descent, with which not only the parameters of the controlling components and semantic parsing component, but also the embeddings of the tables and query words can be learned from scratch.

Semantic Parsing

Towards Neural Network-based Reasoning

1 code implementation22 Aug 2015 Baolin Peng, Zhengdong Lu, Hang Li, Kam-Fai Wong

For example, it improves the accuracy on Path Finding (10K) from 33.4% [6] to over 98%.

A Deep Memory-based Architecture for Sequence-to-Sequence Learning

no code implementations22 Jun 2015 Fandong Meng, Zhengdong Lu, Zhaopeng Tu, Hang Li, Qun Liu

We propose DEEPMEMORY, a novel deep architecture for sequence-to-sequence learning, which performs the task through a series of nonlinear transformations from the representation of the input sequence (e.g., a Chinese sentence) to the final output sequence (e.g., translation to English).

Machine Translation Sentence +1

Learning to Answer Questions From Image Using Convolutional Neural Network

no code implementations1 Jun 2015 Lin Ma, Zhengdong Lu, Hang Li

We demonstrate the efficacy of our proposed model on the DAQUAR and COCO-QA datasets, which are two benchmark datasets for the image QA, with the performances significantly outperforming the state-of-the-art.

General Classification Question Answering +2

genCNN: A Convolutional Architecture for Word Sequence Prediction

no code implementations17 Mar 2015 Mingxuan Wang, Zhengdong Lu, Hang Li, Wenbin Jiang, Qun Liu

Different from previous work on neural network-based language modeling and generation (e.g., RNN or LSTM), we choose not to greedily summarize the history of words as a fixed-length vector.

Language Modelling Machine Translation +3

Context-Dependent Translation Selection Using Convolutional Neural Network

no code implementations IJCNLP 2015 Zhaopeng Tu, Baotian Hu, Zhengdong Lu, Hang Li

We propose a novel method for translation selection in statistical machine translation, in which a convolutional neural network is employed to judge the similarity between a phrase pair in two languages.

Machine Translation Semantic Similarity +3

Syntax-based Deep Matching of Short Texts

no code implementations9 Mar 2015 Mingxuan Wang, Zhengdong Lu, Hang Li, Qun Liu

Many tasks in natural language processing, ranging from machine translation to question answering, can be reduced to the problem of matching two sentences or more generally two short texts.

Machine Translation Question Answering +1

Neural Responding Machine for Short-Text Conversation

4 code implementations IJCNLP 2015 Lifeng Shang, Zhengdong Lu, Hang Li

We propose Neural Responding Machine (NRM), a neural network-based response generator for Short-Text Conversation.

Decoder Retrieval +1

Encoding Source Language with Convolutional Neural Network for Machine Translation

no code implementations IJCNLP 2015 Fandong Meng, Zhengdong Lu, Mingxuan Wang, Hang Li, Wenbin Jiang, Qun Liu

The recently proposed neural network joint model (NNJM) (Devlin et al., 2014) augments the n-gram target language model with a heuristically chosen source context window, achieving state-of-the-art performance in SMT.

Language Modelling Machine Translation +2

A Parallel and Efficient Algorithm for Learning to Match

no code implementations22 Oct 2014 Jingbo Shang, Tianqi Chen, Hang Li, Zhengdong Lu, Yong Yu

In this paper, we tackle this challenge with a novel parallel and efficient algorithm for feature-based matrix factorization.

Collaborative Filtering Link Prediction

An Information Retrieval Approach to Short Text Conversation

1 code implementation29 Aug 2014 Zongcheng Ji, Zhengdong Lu, Hang Li

Human-computer conversation is regarded as one of the most difficult problems in artificial intelligence.

Information Retrieval Retrieval +1

A Deep Architecture for Matching Short Texts

no code implementations NeurIPS 2013 Zhengdong Lu, Hang Li

Many machine learning problems can be interpreted as learning for matching two types of objects (e.g., images and captions, users and products, queries and documents).

Statistical Consistency of Top-k Ranking

no code implementations NeurIPS 2009 Fen Xia, Tie-Yan Liu, Hang Li

This paper aims to analyze whether existing listwise ranking methods are statistically consistent in the top-k setting.

Information Retrieval Retrieval

Global Ranking Using Continuous Conditional Random Fields

no code implementations NeurIPS 2008 Tao Qin, Tie-Yan Liu, Xu-Dong Zhang, De-Sheng Wang, Hang Li

It can naturally represent the content information of objects as well as the relation information between objects, necessary for global ranking.

Information Retrieval Learning-To-Rank +1
