Search Results for author: Meng Fang

Found 65 papers, 39 papers with code

Fire Burns, Sword Cuts: Commonsense Inductive Bias for Exploration in Text-based Games

1 code implementation ACL 2022 Dongwon Ryu, Ehsan Shareghi, Meng Fang, Yunqiu Xu, Shirui Pan, Reza Haf

Text-based games (TGs) are exciting testbeds for developing deep reinforcement learning techniques due to their partially observed environments and large action spaces.

Efficient Exploration Inductive Bias +2

GOPlan: Goal-conditioned Offline Reinforcement Learning by Planning with Learned Models

no code implementations 30 Oct 2023 Mianchu Wang, Rui Yang, Xi Chen, Meng Fang

Offline goal-conditioned RL (GCRL) offers a feasible paradigm to learn general-purpose policies from diverse and multi-task offline datasets.


CITB: A Benchmark for Continual Instruction Tuning

no code implementations 23 Oct 2023 Zihan Zhang, Meng Fang, Ling Chen, Mohammad-Reza Namazi-Rad

In this work, we establish a CIT benchmark consisting of learning and evaluation protocols.

Continual Learning

Enhancing Conversational Search: Large Language Model-Aided Informative Query Rewriting

no code implementations 15 Oct 2023 Fanghua Ye, Meng Fang, Shenghui Li, Emine Yilmaz

Furthermore, we propose distilling the rewriting capabilities of LLMs into smaller models to reduce rewriting latency.

Conversational Search Language Modelling +2

How Do Large Language Models Capture the Ever-changing World Knowledge? A Review of Recent Advances

1 code implementation 11 Oct 2023 Zihan Zhang, Meng Fang, Ling Chen, Mohammad-Reza Namazi-Rad, Jun Wang

Although large language models (LLMs) are impressive in solving various tasks, they can quickly be outdated after deployment.

Towards Data-centric Graph Machine Learning: Review and Outlook

1 code implementation 20 Sep 2023 Xin Zheng, Yixin Liu, Zhifeng Bao, Meng Fang, Xia Hu, Alan Wee-Chung Liew, Shirui Pan

Data-centric AI, with its primary focus on the collection, management, and utilization of data to drive AI models and applications, has attracted increasing attention in recent years.

Management Navigate

Where Would I Go Next? Large Language Models as Human Mobility Predictors

1 code implementation 29 Aug 2023 Xinglei Wang, Meng Fang, Zichao Zeng, Tao Cheng

We posit that our research marks a significant paradigm shift in human mobility modelling, transitioning from building complex domain-specific models to harnessing general-purpose LLMs that yield accurate predictions through language instructions.

Eigensubspace of Temporal-Difference Dynamics and How It Improves Value Approximation in Reinforcement Learning

no code implementations 29 Jun 2023 Qiang He, Tianyi Zhou, Meng Fang, Setareh Maghsudi

In ERC, we propose a regularizer that guides the approximation error towards the 1-eigensubspace, resulting in a more efficient and stable path of value approximation.

Reinforcement Learning (RL)

Enhancing Adversarial Training via Reweighting Optimization Trajectory

1 code implementation 25 Jun 2023 Tianjin Huang, Shiwei Liu, Tianlong Chen, Meng Fang, Li Shen, Vlado Menkovski, Lu Yin, Yulong Pei, Mykola Pechenizkiy

Despite the fact that adversarial training has become the de facto method for improving the robustness of deep neural networks, it is well-known that vanilla adversarial training suffers from daunting robust overfitting, resulting in unsatisfactory robust generalization.

Adversarial Robustness

Are Large Kernels Better Teachers than Transformers for ConvNets?

1 code implementation 30 May 2023 Tianjin Huang, Lu Yin, Zhenyu Zhang, Li Shen, Meng Fang, Mykola Pechenizkiy, Zhangyang Wang, Shiwei Liu

We hereby carry out a first-of-its-kind study unveiling that modern large-kernel ConvNets, a compelling competitor to Vision Transformers, are remarkably more effective teachers for small-kernel ConvNets, due to more similar architectures.

Knowledge Distillation

Dynamic Sparsity Is Channel-Level Sparsity Learner

1 code implementation 30 May 2023 Lu Yin, Gen Li, Meng Fang, Li Shen, Tianjin Huang, Zhangyang Wang, Vlado Menkovski, Xiaolong Ma, Mykola Pechenizkiy, Shiwei Liu

Dynamic sparse training (DST), as a leading sparse training approach, can train deep neural networks at high sparsity from scratch to match the performance of their dense counterparts.

Interpretable Reward Redistribution in Reinforcement Learning: A Causal Approach

1 code implementation 28 May 2023 Yudi Zhang, Yali Du, Biwei Huang, Ziyan Wang, Jun Wang, Meng Fang, Mykola Pechenizkiy

While the majority of current approaches construct the reward redistribution in an uninterpretable manner, we propose to explicitly model the contributions of state and action from a causal perspective, resulting in an interpretable reward redistribution and preserving policy invariance.


CHBias: Bias Evaluation and Mitigation of Chinese Conversational Language Models

1 code implementation 18 May 2023 Jiaxu Zhao, Meng Fang, Zijing Shi, Yitong Li, Ling Chen, Mykola Pechenizkiy

We evaluate two popular pretrained Chinese conversational models, CDial-GPT and EVA2.0, using CHBias.

Response Generation

NLG Evaluation Metrics Beyond Correlation Analysis: An Empirical Metric Preference Checklist

7 code implementations 15 May 2023 Iftitahu Ni'mah, Meng Fang, Vlado Menkovski, Mykola Pechenizkiy

Our proposed framework provides access: (i) for verifying whether automatic metrics are faithful to human preference, regardless of their correlation with human judgments; and (ii) for inspecting the strengths and limitations of NLG systems via pairwise evaluation.

Controllable Language Modelling Dialogue Generation +2

You Can Have Better Graph Neural Networks by Not Training Weights at All: Finding Untrained GNNs Tickets

1 code implementation 28 Nov 2022 Tianjin Huang, Tianlong Chen, Meng Fang, Vlado Menkovski, Jiaxu Zhao, Lu Yin, Yulong Pei, Decebal Constantin Mocanu, Zhangyang Wang, Mykola Pechenizkiy, Shiwei Liu

Recent works have impressively demonstrated that there exists a subnetwork in randomly initialized convolutional neural networks (CNNs) that can match the performance of the fully trained dense networks at initialization, without any optimization of the weights of the network (i.e., untrained networks).

Out-of-Distribution Detection

TASA: Deceiving Question Answering Models by Twin Answer Sentences Attack

1 code implementation 27 Oct 2022 Yu Cao, Dianqi Li, Meng Fang, Tianyi Zhou, Jun Gao, Yibing Zhan, DaCheng Tao

We present Twin Answer Sentences Attack (TASA), an adversarial attack method for question answering (QA) models that produces fluent and grammatical adversarial contexts while maintaining gold answers.

Adversarial Attack Question Answering

Learning Granularity-Unified Representations for Text-to-Image Person Re-identification

2 code implementations 16 Jul 2022 Zhiyin Shao, Xinyu Zhang, Meng Fang, Zhifeng Lin, Jian Wang, Changxing Ding

In PGU, we adopt a set of shared and learnable prototypes as the queries to extract diverse and semantically aligned features for both modalities in the granularity-unified feature space, which further promotes the ReID performance.

Person Re-Identification Text based Person Retrieval +1

Dynamic Contrastive Distillation for Image-Text Retrieval

no code implementations 4 Jul 2022 Jun Rao, Liang Ding, Shuhan Qi, Meng Fang, Yang Liu, Li Shen, DaCheng Tao

Although the vision-and-language pretraining (VLP) equipped cross-modal image-text retrieval (ITR) has achieved remarkable progress in the past two years, it suffers from a major drawback: the ever-increasing size of VLP models restricts their deployment to real-world search scenarios (where high latency is unacceptable).

Contrastive Learning Metric Learning +3

Discourse-Aware Graph Networks for Textual Logical Reasoning

no code implementations 4 Jul 2022 Yinya Huang, Lemao Liu, Kun Xu, Meng Fang, Liang Lin, Xiaodan Liang

In this work, we propose logic structural-constraint modeling to solve the logical reasoning QA and introduce discourse-aware graph networks (DAGNs).

graph construction Logical Reasoning +2

Superposing Many Tickets into One: A Performance Booster for Sparse Neural Network Training

no code implementations 30 May 2022 Lu Yin, Vlado Menkovski, Meng Fang, Tianjin Huang, Yulong Pei, Mykola Pechenizkiy, Decebal Constantin Mocanu, Shiwei Liu

Recent works on sparse neural network training (sparse training) have shown that a compelling trade-off between performance and efficiency can be achieved by training intrinsically sparse neural networks from scratch.

Phrase-level Textual Adversarial Attack with Label Preservation

1 code implementation Findings (NAACL) 2022 Yibin Lei, Yu Cao, Dianqi Li, Tianyi Zhou, Meng Fang, Mykola Pechenizkiy

Generating high-quality textual adversarial examples is critical for investigating the pitfalls of natural language processing (NLP) models and further promoting their robustness.

Adversarial Attack

A Model-Agnostic Data Manipulation Method for Persona-based Dialogue Generation

1 code implementation ACL 2022 Yu Cao, Wei Bi, Meng Fang, Shuming Shi, DaCheng Tao

To alleviate the above data issues, we propose a data manipulation method, which is model-agnostic to be packed with any persona-based dialogue generation model to improve its performance.

Dialogue Generation

Rethinking Goal-conditioned Supervised Learning and Its Connection to Offline RL

1 code implementation ICLR 2022 Rui Yang, Yiming Lu, Wenzhe Li, Hao Sun, Meng Fang, Yali Du, Xiu Li, Lei Han, Chongjie Zhang

In this paper, we revisit the theoretical property of GCSL -- optimizing a lower bound of the goal reaching objective, and extend GCSL as a novel offline goal-conditioned RL algorithm.

Offline RL Reinforcement Learning (RL) +1

Goal Randomization for Playing Text-based Games without a Reward Function

no code implementations 29 Sep 2021 Meng Fang, Yunqiu Xu, Yali Du, Ling Chen, Chengqi Zhang

In a variety of text-based games, we show that this simple method results in competitive performance for agents.

Decision Making text-based games

Lagrangian Generative Adversarial Imitation Learning with Safety

no code implementations 29 Sep 2021 Zhihao Cheng, Li Shen, Meng Fang, Liu Liu, DaCheng Tao

Imitation Learning (IL) merely concentrates on reproducing expert behaviors and could take dangerous actions, which is unbearable in safety-critical scenarios.

Imitation Learning

Generalization in Text-based Games via Hierarchical Reinforcement Learning

1 code implementation Findings (EMNLP) 2021 Yunqiu Xu, Meng Fang, Ling Chen, Yali Du, Chengqi Zhang

Deep reinforcement learning provides a promising approach for text-based games in studying natural language communication between humans and artificial agents.

Hierarchical Reinforcement Learning reinforcement-learning +2

MHER: Model-based Hindsight Experience Replay

no code implementations 1 Jul 2021 Rui Yang, Meng Fang, Lei Han, Yali Du, Feng Luo, Xiu Li

Replacing original goals with virtual goals generated from interaction with a trained dynamics model leads to a novel relabeling method, model-based relabeling (MBR).

Multi-Goal Reinforcement Learning reinforcement-learning +1
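The relabeling idea in the snippet above can be sketched in a few lines. This is a minimal illustration of hindsight-style goal relabeling with a learned dynamics model, not the paper's implementation; the function names, the toy dynamics, and the sparse-reward rule are all assumptions for illustration:

```python
import numpy as np

def model_based_relabel(transitions, dynamics_model, policy):
    """Illustrative sketch: replace each transition's original goal with a
    virtual goal obtained by rolling the learned dynamics model forward."""
    relabeled = []
    for state, action, goal in transitions:
        # Predict the next state under the learned dynamics model.
        virtual_state = dynamics_model(state, policy(state, goal))
        virtual_goal = virtual_state  # treat the predicted state as the new goal
        # Sparse reward: 1 if the relabeled goal is (approximately) achieved.
        reward = float(np.allclose(state, virtual_goal, atol=1e-3))
        relabeled.append((state, action, virtual_goal, reward))
    return relabeled

# Toy usage: additive dynamics and a zero policy, purely for illustration.
dynamics = lambda s, a: s + a
policy = lambda s, g: np.zeros_like(s)
batch = [(np.zeros(2), np.zeros(2), np.ones(2))]
out = model_based_relabel(batch, dynamics, policy)
```

With the zero policy the model predicts no movement, so the relabeled goal coincides with the current state and the sparse reward fires; with a real policy and model, the virtual goals lie on achievable model rollouts.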

DAGN: Discourse-Aware Graph Network for Logical Reasoning

2 code implementations NAACL 2021 Yinya Huang, Meng Fang, Yu Cao, LiWei Wang, Xiaodan Liang

The model encodes discourse information as a graph with elementary discourse units (EDUs) and discourse relations, and learns the discourse-aware features via a graph network for downstream QA tasks.

Logical Reasoning

Towards Efficiently Diversifying Dialogue Generation via Embedding Augmentation

1 code implementation 2 Mar 2021 Yu Cao, Liang Ding, Zhiliang Tian, Meng Fang

Dialogue generation models face the challenge of producing generic and repetitive responses.

Dialogue Generation

Self-Supervised Continuous Control without Policy Gradient

no code implementations 1 Jan 2021 Hao Sun, Ziping Xu, Meng Fang, Yuhang Song, Jiechao Xiong, Bo Dai, Zhengyou Zhang, Bolei Zhou

Despite the remarkable progress made by the policy gradient algorithms in reinforcement learning (RL), sub-optimal policies usually result from the local exploration property of the policy gradient update.

Continuous Control Policy Gradient Methods +3

Learning Predictive Communication by Imagination in Networked System Control

no code implementations 1 Jan 2021 Yali Du, Yifan Zhao, Meng Fang, Jun Wang, Gangyan Xu, Haifeng Zhang

Dealing with multi-agent control in networked systems is one of the biggest challenges in Reinforcement Learning (RL), and only limited success has been reported compared to recent deep reinforcement learning in the single-agent domain.

reinforcement-learning Reinforcement Learning (RL)

REM-Net: Recursive Erasure Memory Network for Commonsense Evidence Refinement

no code implementations 24 Dec 2020 Yinya Huang, Meng Fang, Xunlin Zhan, Qingxing Cao, Xiaodan Liang, Liang Lin

It is crucial since the quality of the evidence is key to answering commonsense questions, and even determines the upper bound on the QA system's performance.

Question Answering

TStarBot-X: An Open-Sourced and Comprehensive Study for Efficient League Training in StarCraft II Full Game

1 code implementation 27 Nov 2020 Lei Han, Jiechao Xiong, Peng Sun, Xinghai Sun, Meng Fang, Qingwei Guo, Qiaobo Chen, Tengfei Shi, Hongsheng Yu, Xipeng Wu, Zhengyou Zhang

We show that, with orders of magnitude less computation, a faithful reimplementation of AlphaStar's methods cannot succeed, and that the proposed techniques are necessary to ensure TStarBot-X's competitive performance.

Imitation Learning Starcraft +1

TLeague: A Framework for Competitive Self-Play based Distributed Multi-Agent Reinforcement Learning

1 code implementation 25 Nov 2020 Peng Sun, Jiechao Xiong, Lei Han, Xinghai Sun, Shuxing Li, Jiawei Xu, Meng Fang, Zhengyou Zhang

This poses non-trivial difficulties for researchers or engineers and prevents the application of MARL to a broader range of real-world problems.

Dota 2 Multi-agent Reinforcement Learning +4

On the Guaranteed Almost Equivalence between Imitation Learning from Observation and Demonstration

no code implementations 16 Oct 2020 Zhihao Cheng, Liu Liu, Aishan Liu, Hao Sun, Meng Fang, DaCheng Tao

By contrast, this paper proves that LfO is almost equivalent to LfD in the deterministic robot environment, and more generally even in the robot environment with bounded randomness.

Imitation Learning

Zeroth-Order Supervised Policy Improvement

no code implementations 11 Jun 2020 Hao Sun, Ziping Xu, Yuhang Song, Meng Fang, Jiechao Xiong, Bo Dai, Bolei Zhou

However, PG algorithms rely on exploiting the value function being learned with the first-order update locally, which results in limited sample efficiency.

Continuous Control Policy Gradient Methods +2

LIIR: Learning Individual Intrinsic Reward in Multi-Agent Reinforcement Learning

1 code implementation NeurIPS 2019 Yali Du, Lei Han, Meng Fang, Ji Liu, Tianhong Dai, DaCheng Tao

A great challenge in cooperative decentralized multi-agent reinforcement learning (MARL) is generating diversified behaviors for each individual agent when receiving only a team reward.

Multi-agent Reinforcement Learning reinforcement-learning +3

Curriculum-guided Hindsight Experience Replay

1 code implementation NeurIPS 2019 Meng Fang, Tianyi Zhou, Yali Du, Lei Han, Zhengyou Zhang

This "Goal-and-Curiosity-driven Curriculum Learning" leads to "Curriculum-guided HER (CHER)", which adaptively and dynamically controls the exploration-exploitation trade-off during the learning process via hindsight experience selection.

Unsupervised Domain Adaptation on Reading Comprehension

1 code implementation 13 Nov 2019 Yu Cao, Meng Fang, Baosheng Yu, Joey Tianyi Zhou

On the other hand, it further reduces domain distribution discrepancy through conditional adversarial learning across domains.

Reading Comprehension Unsupervised Domain Adaptation

A Corpus-free State2Seq User Simulator for Task-oriented Dialogue

1 code implementation 10 Sep 2019 Yutai Hou, Meng Fang, Wanxiang Che, Ting Liu

The framework builds a user simulator by first generating diverse dialogue data from templates and then building a State2Seq user simulator on that data.

Learning to Solve a Rubik's Cube with a Dexterous Hand

1 code implementation 26 Jul 2019 Tingguang Li, Weitao Xi, Meng Fang, Jia Xu, Max Qing-Hu Meng

We present a learning-based approach to solving a Rubik's cube with a multi-fingered dexterous hand.


Revisiting Metric Learning for Few-Shot Image Classification

no code implementations 6 Jul 2019 Xiaomeng Li, Lequan Yu, Chi-Wing Fu, Meng Fang, Pheng-Ann Heng

However, the importance of feature embedding, i.e., exploring the relationship among training samples, is neglected.

Classification Few-Shot Image Classification +4

DHER: Hindsight Experience Replay for Dynamic Goals

1 code implementation ICLR 2019 Meng Fang, Cheng Zhou, Bei Shi, Boqing Gong, Jia Xu, Tong Zhang

Dealing with sparse rewards is one of the most important challenges in reinforcement learning (RL), especially when a goal is dynamic (e.g., to grasp a moving object).

Object Tracking Reinforcement Learning (RL)

BAG: Bi-directional Attention Entity Graph Convolutional Network for Multi-hop Reasoning Question Answering

1 code implementation NAACL 2019 Yu Cao, Meng Fang, DaCheng Tao

Graph convolutional networks are used to obtain a relation-aware representation of nodes for entity graphs built from documents with multi-level features.

Question Answering

Towards Query Efficient Black-box Attacks: An Input-free Perspective

1 code implementation 9 Sep 2018 Yali Du, Meng Fang, Jin-Feng Yi, Jun Cheng, DaCheng Tao

First, we initialize an adversarial example with a gray color image on which every pixel has roughly the same importance for the target model.
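The gray-image initialization described in the snippet can be illustrated directly; the input size and the [0, 1] pixel range below are assumptions, not details taken from the paper:

```python
import numpy as np

# Initialize an adversarial example as a uniform gray image, so that no pixel
# is initially more salient than any other to the target model.
height, width, channels = 224, 224, 3   # assumed input size
gray_value = 0.5                        # mid-gray in an assumed [0, 1] pixel range
adv_example = np.full((height, width, channels), gray_value, dtype=np.float32)
```

From this input-free starting point, a query-based attack then perturbs the gray canvas using only the target model's outputs.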

A Deep Network for Arousal-Valence Emotion Prediction with Acoustic-Visual Cues

1 code implementation 2 May 2018 Songyou Peng, Le Zhang, Yutong Ban, Meng Fang, Stefan Winkler

In this paper, we comprehensively describe the methodology of our submissions to the One-Minute Gradual-Emotion Behavior Challenge 2018.

Learning how to Active Learn: A Deep Reinforcement Learning Approach

1 code implementation EMNLP 2017 Meng Fang, Yuan Li, Trevor Cohn

Active learning aims to select a small subset of data for annotation such that a classifier learned on the data is highly accurate.

Active Learning named-entity-recognition +4
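For context, classical active learning typically scores unlabeled examples by model uncertainty. The sketch below shows least-confidence sampling, a standard heuristic baseline, not the learned deep-RL selection policy this paper proposes:

```python
import numpy as np

def least_confidence_select(probs, k):
    """Pick the k unlabeled examples the classifier is least sure about.
    probs: (n_examples, n_classes) predicted class probabilities."""
    confidence = probs.max(axis=1)      # top predicted probability per example
    return np.argsort(confidence)[:k]   # lowest-confidence examples first

probs = np.array([[0.9, 0.1],    # confident
                  [0.55, 0.45],  # uncertain
                  [0.7, 0.3]])
picked = least_confidence_select(probs, 1)
# → selects index 1, the most uncertain example
```

The paper's contribution is to replace such a fixed heuristic with a selection policy learned by deep reinforcement learning, so that what counts as "worth annotating" is itself optimized for the downstream classifier.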

Model Transfer for Tagging Low-resource Languages using a Bilingual Dictionary

1 code implementation ACL 2017 Meng Fang, Trevor Cohn

Cross-lingual model transfer is a compelling and popular method for predicting annotations in a low-resource language, whereby parallel corpora provide a bridge to a high-resource language and its associated annotated corpora.

Active Learning Cross-Lingual Word Embeddings +1

Iterative Views Agreement: An Iterative Low-Rank based Structured Optimization Method to Multi-View Spectral Clustering

no code implementations 19 Aug 2016 Yang Wang, Wenjie Zhang, Lin Wu, Xuemin Lin, Meng Fang, Shirui Pan

Multi-view spectral clustering, which aims at yielding an agreed or consensus grouping of data objects across multiple views via their graph Laplacian matrices, is a fundamental clustering problem.


Learning when to trust distant supervision: An application to low-resource POS tagging using cross-lingual projection

no code implementations CONLL 2016 Meng Fang, Trevor Cohn

Cross lingual projection of linguistic annotation suffers from many sources of bias and noise, leading to unreliable annotations that cannot be used directly.


Saliency Propagation From Simple to Difficult

no code implementations CVPR 2015 Chen Gong, DaCheng Tao, Wei Liu, Stephen J. Maybank, Meng Fang, Keren Fu, Jie Yang

In the teaching-to-learn step, a teacher is designed to arrange the regions from simple to difficult and then assign the simplest regions to the learner.

Saliency Detection

Transfer Learning across Networks for Collective Classification

no code implementations 11 Mar 2014 Meng Fang, Jie Yin, Xingquan Zhu

In this paper, we propose a new transfer learning algorithm that attempts to transfer common latent structure features across the source and target networks.

Classification General Classification +1
