Search Results for author: Ji-Rong Wen

Found 111 papers, 50 papers with code

Finding the Dominant Winning Ticket in Pre-Trained Language Models

no code implementations Findings (ACL) 2022 Zhuocheng Gong, Di He, Yelong Shen, Tie-Yan Liu, Weizhu Chen, Dongyan Zhao, Ji-Rong Wen, Rui Yan

Empirically, we show that (a) the dominant winning ticket can achieve performance comparable with that of the full-parameter model, (b) the dominant winning ticket is transferable across different tasks, and (c) the dominant winning ticket has a natural structure within each parameter matrix.
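As a rough illustration of the winning-ticket idea (one-shot magnitude pruning, not the paper's specific procedure for finding the dominant ticket), a sketch of keeping only the largest-magnitude weights:

```python
import numpy as np

def magnitude_mask(weights, sparsity):
    """Boolean mask keeping the largest-magnitude (1 - sparsity) fraction of weights."""
    k = int(weights.size * (1.0 - sparsity))
    if k == 0:
        return np.zeros(weights.shape, dtype=bool)
    threshold = np.sort(np.abs(weights), axis=None)[-k]
    return np.abs(weights) >= threshold

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 4))            # stand-in for one parameter matrix of a PLM
mask = magnitude_mask(W, sparsity=0.75)
ticket = W * mask                      # the sparse sub-network; in lottery-ticket work it is retrained
```

In lottery-ticket experiments the surviving sub-network is then retrained from its original initialization and compared against the full model.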

There Are a Thousand Hamlets in a Thousand People’s Eyes: Enhancing Knowledge-grounded Dialogue with Personal Memory

no code implementations ACL 2022 Tingchen Fu, Xueliang Zhao, Chongyang Tao, Ji-Rong Wen, Rui Yan

Knowledge-grounded conversation (KGC) shows great potential in building an engaging and knowledgeable chatbot, and knowledge selection is a key ingredient in it.


MVP: Multi-task Supervised Pre-training for Natural Language Generation

2 code implementations 24 Jun 2022 Tianyi Tang, Junyi Li, Wayne Xin Zhao, Ji-Rong Wen

Motivated by the success of supervised pre-training, we propose Multi-task superVised Pre-training (MVP) for natural language generation.

Text Generation

Towards Unified Conversational Recommender Systems via Knowledge-Enhanced Prompt Learning

1 code implementation 19 Jun 2022 Xiaolei Wang, Kun Zhou, Ji-Rong Wen, Wayne Xin Zhao

Our approach unifies the recommendation and conversation subtasks into the prompt learning paradigm, and utilizes knowledge-enhanced prompts based on a fixed pre-trained language model (PLM) to fulfill both subtasks in a unified approach.

Language Modelling Recommendation Systems

RecBole 2.0: Towards a More Up-to-Date Recommendation Library

2 code implementations 15 Jun 2022 Wayne Xin Zhao, Yupeng Hou, Xingyu Pan, Chen Yang, Zeyu Zhang, Zihan Lin, Jingsen Zhang, Shuqing Bian, Jiakai Tang, Wenqi Sun, Yushuo Chen, Lanling Xu, Gaowei Zhang, Zhen Tian, Changxin Tian, Shanlei Mu, Xinyan Fan, Xu Chen, Ji-Rong Wen

In order to support the study of recent advances in recommender systems, this paper presents an extended recommendation library consisting of eight packages for up-to-date topics and architectures.

Data Augmentation Fairness +2

JiuZhang: A Chinese Pre-trained Language Model for Mathematical Problem Understanding

1 code implementation 13 Jun 2022 Wayne Xin Zhao, Kun Zhou, Zheng Gong, Beichen Zhang, Yuanhang Zhou, Jing Sha, Zhigang Chen, Shijin Wang, Cong Liu, Ji-Rong Wen

Considering the complex nature of mathematical texts, we design a novel curriculum pre-training approach for improving the learning of mathematical PLMs, consisting of both basic and advanced courses.

Language Modelling

Towards Universal Sequence Representation Learning for Recommender Systems

1 code implementation 13 Jun 2022 Yupeng Hou, Shanlei Mu, Wayne Xin Zhao, Yaliang Li, Bolin Ding, Ji-Rong Wen

In order to develop effective sequential recommenders, a series of sequence representation learning (SRL) methods are proposed to model historical user behaviors.

Recommendation Systems Representation Learning

Feature-aware Diversified Re-ranking with Disentangled Representations for Relevant Recommendation

no code implementations 10 Jun 2022 Zihan Lin, Hui Wang, Jingshu Mao, Wayne Xin Zhao, Cheng Wang, Peng Jiang, Ji-Rong Wen

Relevant recommendation is a special recommendation scenario that provides relevant items when users express interest in one target item (e.g., click, like, and purchase).

Disentanglement Re-Ranking

Negative Sampling for Contrastive Representation Learning: A Review

no code implementations 1 Jun 2022 Lanling Xu, Jianxun Lian, Wayne Xin Zhao, Ming Gong, Linjun Shou, Daxin Jiang, Xing Xie, Ji-Rong Wen

The learn-to-compare paradigm of contrastive representation learning (CRL), which compares positive samples with negative ones for representation learning, has achieved great success in a wide range of domains, including natural language processing, computer vision, information retrieval and graph learning.

Computer Vision Graph Learning +3

Learning to Transfer Prompts for Text Generation

no code implementations 3 May 2022 Junyi Li, Tianyi Tang, Jian-Yun Nie, Ji-Rong Wen, Wayne Xin Zhao

First, PTG learns a set of source prompts for various source generation tasks and then transfers these prompts as target prompts to perform target generation tasks.

Pretrained Language Models Text Generation

Debiased Contrastive Learning of Unsupervised Sentence Representations

1 code implementation ACL 2022 Kun Zhou, Beichen Zhang, Wayne Xin Zhao, Ji-Rong Wen

In DCLR, we design an instance weighting method to punish false negatives and generate noise-based negatives to guarantee the uniformity of the representation space.

Contrastive Learning Semantic Textual Similarity
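The instance-weighting idea can be sketched in a few lines (the threshold and temperature below are illustrative, not the paper's settings): negatives that are suspiciously similar to the anchor are treated as likely false negatives and receive zero weight in the InfoNCE denominator.

```python
import numpy as np

def weighted_infonce(anchor, positive, negatives, tau=0.05, phi=0.8):
    """InfoNCE loss where negatives with cosine similarity above phi are zero-weighted."""
    cos = lambda a, b: a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    pos_term = np.exp(cos(anchor, positive) / tau)
    sims = np.array([cos(anchor, n) for n in negatives])
    weights = (sims < phi).astype(float)           # punish suspected false negatives
    neg_term = (weights * np.exp(sims / tau)).sum()
    return -np.log(pos_term / (pos_term + neg_term))

a = np.array([1.0, 0.0])
loss_true_neg = weighted_infonce(a, a, [np.array([0.0, 1.0])])    # orthogonal negative: counted
loss_false_neg = weighted_infonce(a, a, [np.array([1.0, 1e-3])])  # near-duplicate: masked out
```

DCLR additionally generates noise-based negatives for uniformity; that part is omitted here.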

A Thorough Examination on Zero-shot Dense Retrieval

no code implementations 27 Apr 2022 Ruiyang Ren, Yingqi Qu, Jing Liu, Wayne Xin Zhao, Qifei Wu, Yuchen Ding, Hua Wu, Haifeng Wang, Ji-Rong Wen

Recent years have witnessed the significant advance in dense retrieval (DR) based on powerful pre-trained language models (PLM).

Less is More: Learning to Refine Dialogue History for Personalized Dialogue Generation

no code implementations 18 Apr 2022 Hanxun Zhong, Zhicheng Dou, Yutao Zhu, Hongjin Qian, Ji-Rong Wen

Existing personalized dialogue systems have tried to extract user profiles from dialogue history to guide personalized response generation.

Dialogue Generation Response Generation

COTS: Collaborative Two-Stream Vision-Language Pre-Training Model for Cross-Modal Retrieval

no code implementations CVPR 2022 Haoyu Lu, Nanyi Fei, Yuqi Huo, Yizhao Gao, Zhiwu Lu, Ji-Rong Wen

Under a fair comparison setting, our COTS achieves the highest performance among all two-stream methods and comparable performance (but with 10,800X faster inference) w.r.t.

Contrastive Learning Cross-Modal Retrieval +3

Leveraging Search History for Improving Person-Job Fit

no code implementations 27 Mar 2022 Yupeng Hou, Xingyu Pan, Wayne Xin Zhao, Shuqing Bian, Yang song, Tao Zhang, Ji-Rong Wen

As the core technique of online recruitment platforms, person-job fit can improve hiring efficiency by accurately matching job positions with qualified candidates.

Text Matching

Learning to Answer Questions in Dynamic Audio-Visual Scenarios

no code implementations CVPR 2022 Guangyao Li, Yake Wei, Yapeng Tian, Chenliang Xu, Ji-Rong Wen, Di Hu

In this paper, we focus on the Audio-Visual Question Answering (AVQA) task, which aims to answer questions regarding different visual objects, sounds, and their associations in videos.

Question Answering Scene Understanding +1

MISC: A MIxed Strategy-Aware Model Integrating COMET for Emotional Support Conversation

1 code implementation ACL 2022 Quan Tu, Yanran Li, Jianwei Cui, Bin Wang, Ji-Rong Wen, Rui Yan

Applying existing methods to emotional support conversation -- which provides valuable assistance to people who are in need -- has two major limitations: (a) they generally employ a conversation-level emotion label, which is too coarse-grained to capture user's instant mental state; (b) most of them focus on expressing empathy in the response(s) rather than gradually reducing user's distress.

Neural Graph Matching for Pre-training Graph Neural Networks

1 code implementation 3 Mar 2022 Yupeng Hou, Binbin Hu, Wayne Xin Zhao, Zhiqiang Zhang, Jun Zhou, Ji-Rong Wen

In this way, we can learn adaptive representations for a given graph when paired with different graphs, and both node- and graph-level characteristics are naturally considered in a single pre-training task.

Graph Matching

Parameter-Efficient Mixture-of-Experts Architecture for Pre-trained Language Models

no code implementations 2 Mar 2022 Ze-Feng Gao, Peiyu Liu, Wayne Xin Zhao, Zhong-Yi Lu, Ji-Rong Wen

With the decomposed MPO structure, we can reduce the parameters of the original MoE architecture by sharing a global central tensor across experts and keeping expert-specific auxiliary tensors.

Multi-Task Learning
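The parameter-sharing arithmetic can be sketched with a plain low-rank stand-in for the MPO decomposition (dimensions here are made up): each expert's weight matrix is the product of one global central factor and a small expert-specific auxiliary factor.

```python
import numpy as np

rng = np.random.default_rng(0)
d, r, n_experts = 64, 8, 4

central = rng.normal(size=(d, r))                                 # shared across all experts
auxiliary = [rng.normal(size=(r, d)) for _ in range(n_experts)]   # expert-specific

def expert_forward(x, i):
    """Apply expert i as x @ (central @ auxiliary[i])."""
    return x @ central @ auxiliary[i]

dense_params = n_experts * d * d            # one full d x d matrix per expert
shared_params = d * r + n_experts * r * d   # central factor counted only once
```

The real MPO factorization uses a chain of local tensors rather than a two-factor product, but the saving comes from the same source: the large central piece is stored once.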

Filter-enhanced MLP is All You Need for Sequential Recommendation

1 code implementation 28 Feb 2022 Kun Zhou, Hui Yu, Wayne Xin Zhao, Ji-Rong Wen

Recently, deep neural networks such as RNN, CNN and Transformer have been applied in the task of sequential recommendation, which aims to capture the dynamic preference characteristics from logged user behavior data for accurate recommendation.

Sequential Recommendation
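The core operation behind a filter-enhanced MLP can be sketched in a few lines: a per-frequency (learnable, in the real model) filter applied in the frequency domain of the item-embedding sequence. Shapes and names below are illustrative.

```python
import numpy as np

def filter_layer(seq_emb, filt):
    """seq_emb: (seq_len, hidden). filt: complex (seq_len // 2 + 1, hidden),
    a per-frequency multiplier applied along the sequence axis."""
    spec = np.fft.rfft(seq_emb, axis=0)                       # to frequency domain
    return np.fft.irfft(spec * filt, n=seq_emb.shape[0], axis=0)

x = np.random.default_rng(0).normal(size=(8, 4))
all_pass = np.ones((8 // 2 + 1, 4), dtype=complex)            # identity filter
y = filter_layer(x, all_pass)
```

With an all-pass filter the layer is the identity; a trained filter instead attenuates noisy frequency components of the behavior sequence.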

Measuring "Why" in Recommender Systems: a Comprehensive Survey on the Evaluation of Explainable Recommendation

no code implementations 14 Feb 2022 Xu Chen, Yongfeng Zhang, Ji-Rong Wen

Beyond summarizing the previous work, we also analyze the (dis)advantages of existing evaluation methods and provide a series of guidelines on how to select them.

Recommendation Systems

A Model-Agnostic Causal Learning Framework for Recommendation using Search Data

1 code implementation 9 Feb 2022 Zihua Si, Xueran Han, Xiao Zhang, Jun Xu, Yue Yin, Yang song, Ji-Rong Wen

In this paper, we propose a model-agnostic framework named IV4Rec that can effectively decompose the embedding vectors into these two parts, hence enhancing recommendation results.

Recommendation Systems

Convolutional Neural Networks on Graphs with Chebyshev Approximation, Revisited

no code implementations 4 Feb 2022 Mingguo He, Zhewei Wei, Ji-Rong Wen

GPR-GNN and BernNet demonstrate that the Monomial and Bernstein bases also outperform the Chebyshev basis in terms of learning the spectral graph convolutions.

GPR Graph Learning +1
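For context, the Chebyshev basis these works revisit evaluates a spectral filter through the three-term recurrence T_0(x) = 1, T_1(x) = x, T_k(x) = 2x T_{k-1}(x) - T_{k-2}(x) applied to a rescaled Laplacian. A minimal sketch:

```python
import numpy as np

def chebyshev_conv(L_tilde, x, coeffs):
    """Evaluate sum_k coeffs[k] * T_k(L_tilde) @ x, where L_tilde is the graph
    Laplacian rescaled so its eigenvalues lie in [-1, 1]."""
    t_prev, t_curr = x, L_tilde @ x
    out = coeffs[0] * t_prev + coeffs[1] * t_curr
    for c in coeffs[2:]:
        t_prev, t_curr = t_curr, 2 * (L_tilde @ t_curr) - t_prev
        out = out + c * t_curr
    return out

L = 0.5 * np.eye(2)            # toy stand-in for a rescaled Laplacian
x = np.array([1.0, 2.0])
```

On this diagonal toy Laplacian, T_2 evaluates to 2(0.5)(0.5) - 1 = -0.5 on each eigenvalue, which is easy to verify by hand.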

GSN: A Graph Neural Network Inspired by Spring Network

no code implementations 31 Jan 2022 Guanyu Cui, Zhewei Wei, Ji-Rong Wen

Inspired by an old-fashioned spring network model, we propose the Graph Spring Network (GSN), a universal GNN model that works for homophilous and heterophilous graphs.

Metric Learning

Pretrained Language Models for Text Generation: A Survey

no code implementations 14 Jan 2022 Junyi Li, Tianyi Tang, Wayne Xin Zhao, Jian-Yun Nie, Ji-Rong Wen

We begin with introducing three key aspects of applying PLMs to text generation: 1) how to encode the input into representations preserving input semantics which can be fused into PLMs; 2) how to design an effective PLM to serve as the generation model; and 3) how to effectively optimize PLMs given the reference text and to ensure that the generated texts satisfy special text properties.

Pretrained Language Models Text Generation

Class-aware Sounding Objects Localization via Audiovisual Correspondence

1 code implementation 22 Dec 2021 Di Hu, Yake Wei, Rui Qian, Weiyao Lin, Ruihua Song, Ji-Rong Wen

To address this problem, we propose a two-stage step-by-step learning framework to localize and recognize sounding objects in complex audiovisual scenarios using only the correspondence between audio and vision.

object-detection Object Detection +1

Compressed Video Contrastive Learning

no code implementations NeurIPS 2021 Yuqi Huo, Mingyu Ding, Haoyu Lu, Nanyi Fei, Zhiwu Lu, Ji-Rong Wen, Ping Luo

To enhance the representation ability of the motion vectors, hence the effectiveness of our method, we design a cross guidance contrastive learning algorithm based on multi-instance InfoNCE loss, where motion vectors can take supervision signals from RGB frames and vice versa.

Contrastive Learning Representation Learning

PSSL: Self-supervised Learning for Personalized Search with Contrastive Sampling

1 code implementation 24 Nov 2021 Yujia Zhou, Zhicheng Dou, Yutao Zhu, Ji-Rong Wen

Personalized search plays a crucial role in improving user search experience owing to its ability to build user profiles based on historical behaviors.

Self-Supervised Learning

Towards artificial general intelligence via a multimodal foundation model

1 code implementation 27 Oct 2021 Nanyi Fei, Zhiwu Lu, Yizhao Gao, Guoxing Yang, Yuqi Huo, Jingyuan Wen, Haoyu Lu, Ruihua Song, Xin Gao, Tao Xiang, Hao Sun, Ji-Rong Wen

To overcome this limitation and take a solid step towards artificial general intelligence (AGI), we develop a foundation model pre-trained with huge multimodal data, which can be quickly adapted for various downstream cognitive tasks.

Image Classification Reading Comprehension +2

Log-Polar Space Convolution

no code implementations 29 Sep 2021 Bing Su, Ji-Rong Wen

Convolutional neural networks use regular quadrilateral convolution kernels to extract features.
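A minimal sketch of what a log-polar alternative to the regular square grid can look like (purely illustrative; the paper's actual kernel construction may differ): sampling offsets spaced uniformly in angle and geometrically in radius.

```python
import numpy as np

def log_polar_offsets(levels=3, angles=8, base=2.0):
    """Kernel sampling offsets: a centre tap plus `levels` rings whose radii
    grow geometrically, each ring holding `angles` uniformly spaced taps."""
    offsets = [(0.0, 0.0)]
    for level in range(levels):
        radius = base ** level
        for k in range(angles):
            theta = 2.0 * np.pi * k / angles
            offsets.append((radius * np.cos(theta), radius * np.sin(theta)))
    return np.array(offsets)

grid = log_polar_offsets()
```

Compared with a regular quadrilateral kernel, such a grid samples densely near the centre and sparsely far away, enlarging the receptive field without adding taps.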

Image Dataset Compression Based on Matrix Product States

no code implementations 29 Sep 2021 Ze-Feng Gao, Peiyu Liu, Xiao-Hui Zhang, Xin Zhao, Z. Y. Xie, Zhong-Yi Lu, Ji-Rong Wen

Based on the MPS structure, we propose a new dataset compression method that compresses datasets by filtering long-range correlation information in task-agnostic scenarios and uses dataset distillation to supplement the information in task-specific scenarios.

Partial Information as Full: Reward Imputation with Sketching in Bandits

no code implementations 29 Sep 2021 Xiao Zhang, Ninglu Shao, Zihua Si, Jun Xu, Wenhan Wang, Hanjing Su, Ji-Rong Wen

In this paper, we propose an efficient reward imputation approach using sketching in CBB, which completes the unobserved rewards with the imputed rewards approximating the full-information feedbacks.


One Chatbot Per Person: Creating Personalized Chatbots based on Implicit User Profiles

1 code implementation 20 Aug 2021 Zhengyi Ma, Zhicheng Dou, Yutao Zhu, Hanxun Zhong, Ji-Rong Wen

Specifically, leveraging the benefits of Transformer on language understanding, we train a personalized language model to construct a general user profile from the user's historical responses.

Chatbot Language Modelling

Pre-training for Ad-hoc Retrieval: Hyperlink is Also You Need

1 code implementation 20 Aug 2021 Zhengyi Ma, Zhicheng Dou, Wei Xu, Xinyu Zhang, Hao Jiang, Zhao Cao, Ji-Rong Wen

In this paper, we propose to leverage the large-scale hyperlinks and anchor texts to pre-train the language model for ad-hoc retrieval.

Language Modelling

Learning Implicit User Profiles for Personalized Retrieval-Based Chatbot

1 code implementation 18 Aug 2021 Hongjin Qian, Zhicheng Dou, Yutao Zhu, Yueyuan Ma, Ji-Rong Wen

To learn a user's personalized language style, we elaborately build language models from shallow to deep using the user's historical responses; To model a user's personalized preferences, we explore the conditional relations underneath each post-response pair of the user.


Modeling Relevance Ranking under the Pre-training and Fine-tuning Paradigm

no code implementations 12 Aug 2021 Lin Bo, Liang Pang, Gang Wang, Jun Xu, Xiuqiang He, Ji-Rong Wen

Experimental results based on three publicly available benchmarks showed that in both implementations, Pre-Rank can outperform the underlying ranking models and achieve state-of-the-art performance.

Document Ranking Information Retrieval +2

Self-supervised Audiovisual Representation Learning for Remote Sensing Data

1 code implementation 2 Aug 2021 Konrad Heidler, Lichao Mou, Di Hu, Pu Jin, Guangyao Li, Chuang Gan, Ji-Rong Wen, Xiao Xiang Zhu

By fine-tuning the models on a number of commonly used remote sensing datasets, we show that our approach outperforms existing pre-training strategies for remote sensing imagery.

Representation Learning Transfer Learning

A Pre-training Strategy for Zero-Resource Response Selection in Knowledge-Grounded Conversations

no code implementations ACL 2021 Chongyang Tao, Changyu Chen, Jiazhan Feng, Ji-Rong Wen, Rui Yan

Recently, many studies are emerging towards building a retrieval-based dialogue system that is able to effectively leverage background knowledge (e.g., documents) when conversing with humans.

Language Modelling

Log-Polar Space Convolution for Convolutional Neural Networks

1 code implementation 26 Jul 2021 Bing Su, Ji-Rong Wen

Convolutional neural networks use regular quadrilateral convolution kernels to extract features.

Curriculum Pre-Training Heterogeneous Subgraph Transformer for Top-N Recommendation

no code implementations 12 Jun 2021 Hui Wang, Kun Zhou, Wayne Xin Zhao, Jingyuan Wang, Ji-Rong Wen

Due to the flexibility in modelling data heterogeneity, heterogeneous information network (HIN) has been adopted to characterize complex and heterogeneous auxiliary data in top-N recommender systems, called HIN-based recommendation.

Recommendation Systems

A Joint Model for Dropped Pronoun Recovery and Conversational Discourse Parsing in Chinese Conversational Speech

1 code implementation ACL 2021 Jingxuan Yang, Kerui Xu, Jun Xu, Si Li, Sheng Gao, Jun Guo, Nianwen Xue, Ji-Rong Wen

A second (multi-relational) GCN is then applied to the utterance states to produce a discourse relation-augmented representation for the utterances, which are then fused together with token states in each utterance as input to a dropped pronoun recovery layer.

Discourse Parsing

Enabling Lightweight Fine-tuning for Pre-trained Language Model Compression based on Matrix Product Operators

1 code implementation ACL 2021 Peiyu Liu, Ze-Feng Gao, Wayne Xin Zhao, Z. Y. Xie, Zhong-Yi Lu, Ji-Rong Wen

This paper presents a novel pre-trained language models (PLM) compression approach based on the matrix product operator (short as MPO) from quantum many-body physics.

Language Modelling Model Compression

Pretrained Language Models for Text Generation: A Survey

no code implementations 21 May 2021 Junyi Li, Tianyi Tang, Wayne Xin Zhao, Ji-Rong Wen

In this paper, we present an overview of the major advances achieved in the topic of PLMs for text generation.

Natural Language Processing Pretrained Language Models +1

Knowledge-based Review Generation by Coherence Enhanced Text Planning

no code implementations 9 May 2021 Junyi Li, Wayne Xin Zhao, Zhicheng Wei, Nicholas Jing Yuan, Ji-Rong Wen

For global coherence, we design a hierarchical self-attentive architecture with both subgraph- and node-level attention to enhance the correlations between subgraphs.

Informativeness Knowledge Graphs +2

Improving Multi-hop Knowledge Base Question Answering by Learning Intermediate Supervision Signals

1 code implementation 11 Jan 2021 Gaole He, Yunshi Lan, Jing Jiang, Wayne Xin Zhao, Ji-Rong Wen

In our approach, the student network aims to find the correct answer to the query, while the teacher network tries to learn intermediate supervision signals for improving the reasoning capacity of the student network.

Knowledge Base Question Answering Semantic Parsing

Self-Supervised Video Representation Learning with Constrained Spatiotemporal Jigsaw

no code implementations 1 Jan 2021 Yuqi Huo, Mingyu Ding, Haoyu Lu, Zhiwu Lu, Tao Xiang, Ji-Rong Wen, Ziyuan Huang, Jianwen Jiang, Shiwei Zhang, Mingqian Tang, Songfang Huang, Ping Luo

With the constrained jigsaw puzzles, instead of solving them directly, which could still be extremely hard, we carefully design four surrogate tasks that are more solvable but meanwhile still ensure that the learned representation is sensitive to spatiotemporal continuity at both the local and global levels.

Representation Learning

RecBole: Towards a Unified, Comprehensive and Efficient Framework for Recommendation Algorithms

1 code implementation 3 Nov 2020 Wayne Xin Zhao, Shanlei Mu, Yupeng Hou, Zihan Lin, Yushuo Chen, Xingyu Pan, Kaiyuan Li, Yujie Lu, Hui Wang, Changxin Tian, Yingqian Min, Zhichao Feng, Xinyan Fan, Xu Chen, Pengfei Wang, Wendi Ji, Yaliang Li, Xiaoling Wang, Ji-Rong Wen

In this library, we implement 73 recommendation models on 28 benchmark datasets, covering the categories of general recommendation, sequential recommendation, context-aware recommendation and knowledge-based recommendation.

Collaborative Filtering Sequential Recommendation

Scalable Graph Neural Networks via Bidirectional Propagation

1 code implementation NeurIPS 2020 Ming Chen, Zhewei Wei, Bolin Ding, Yaliang Li, Ye Yuan, Xiaoyong Du, Ji-Rong Wen

Most notably, GBP can deliver superior performance on a graph with over 60 million nodes and 1.8 billion edges in less than half an hour on a single machine.

Graph Sampling

Transformer-GCRF: Recovering Chinese Dropped Pronouns with General Conditional Random Fields

1 code implementation Findings of the Association for Computational Linguistics 2020 Jingxuan Yang, Kerui Xu, Jun Xu, Si Li, Sheng Gao, Jun Guo, Ji-Rong Wen, Nianwen Xue

Exploratory analysis also demonstrates that the GCRF did help to capture the dependencies between pronouns in neighboring utterances, thus contributing to the performance improvements.

Machine Translation Translation

Pchatbot: A Large-Scale Dataset for Personalized Chatbot

2 code implementations 28 Sep 2020 Hongjin Qian, Xiaohe Li, Hanxun Zhong, Yu Guo, Yueyuan Ma, Yutao Zhu, Zhanliang Liu, Zhicheng Dou, Ji-Rong Wen

This enables the development of personalized dialogue models that directly learn implicit user personality from the user's dialogue history.


Learning to Match Jobs with Resumes from Sparse Interaction Data using Multi-View Co-Teaching Network

no code implementations 25 Sep 2020 Shuqing Bian, Xu Chen, Wayne Xin Zhao, Kun Zhou, Yupeng Hou, Yang song, Tao Zhang, Ji-Rong Wen

Compared with pure text-based matching models, the proposed approach is able to learn better data representations from limited or even sparse interaction data, which is more resistible to noise in training data.

Text Matching

Leveraging Historical Interaction Data for Improving Conversational Recommender System

no code implementations 19 Aug 2020 Kun Zhou, Wayne Xin Zhao, Hui Wang, Sirui Wang, Fuzheng Zhang, Zhongyuan Wang, Ji-Rong Wen

Most of the existing CRS methods focus on learning effective preference representations for users from conversation data alone.

Recommendation Systems

S^3-Rec: Self-Supervised Learning for Sequential Recommendation with Mutual Information Maximization

2 code implementations 18 Aug 2020 Kun Zhou, Hui Wang, Wayne Xin Zhao, Yutao Zhu, Sirui Wang, Fuzheng Zhang, Zhongyuan Wang, Ji-Rong Wen

To tackle this problem, we propose the model S^3-Rec, which stands for Self-Supervised learning for Sequential Recommendation, based on the self-attentive neural architecture.

Self-Supervised Learning Sequential Recommendation

Variance Reduction for Deep Q-Learning using Stochastic Recursive Gradient

no code implementations 25 Jul 2020 Haonan Jia, Xiao Zhang, Jun Xu, Wei Zeng, Hao Jiang, Xiaohui Yan, Ji-Rong Wen

Deep Q-learning algorithms often suffer from poor gradient estimations with an excessive variance, resulting in unstable training and poor sampling efficiency.

Q-Learning reinforcement-learning
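Stochastic recursive gradient estimators of the kind this line of work builds on (SARAH-style) can be sketched on a toy least-squares objective; the Q-learning integration is omitted, and all names and sizes below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(100, 5))
b = rng.normal(size=100)

def grad_i(w, i):
    """Per-sample gradient of 0.5 * (A[i] @ w - b[i]) ** 2."""
    return (A[i] @ w - b[i]) * A[i]

w = np.zeros(5)
lr = 0.01
v = A.T @ (A @ w - b) / len(b)     # full gradient at the anchor point
for _ in range(200):
    w_next = w - lr * v
    i = rng.integers(len(b))
    # Recursive estimator: correct the previous estimate with a one-sample difference,
    # which keeps the variance lower than a plain one-sample gradient.
    v = grad_i(w_next, i) - grad_i(w, i) + v
    w = w_next

final_loss = 0.5 * np.mean((A @ w - b) ** 2)
```

The reduced-variance estimate plays the role that the plain one-sample TD gradient plays in standard deep Q-learning updates.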

Counterfactual VQA: A Cause-Effect Look at Language Bias

1 code implementation CVPR 2021 Yulei Niu, Kaihua Tang, Hanwang Zhang, Zhiwu Lu, Xian-Sheng Hua, Ji-Rong Wen

VQA models may tend to rely on language bias as a shortcut and thus fail to sufficiently learn the multi-modal knowledge from both vision and language.

Counterfactual Inference Question Answering +1

Domain-Adaptive Few-Shot Learning

1 code implementation 19 Mar 2020 An Zhao, Mingyu Ding, Zhiwu Lu, Tao Xiang, Yulei Niu, Jiechao Guan, Ji-Rong Wen, Ping Luo

Existing few-shot learning (FSL) methods make the implicit assumption that the few target class samples are from the same domain as the source class samples.

Domain Adaptation Few-Shot Learning

AdarGCN: Adaptive Aggregation GCN for Few-Shot Learning

no code implementations 28 Feb 2020 Jianhong Zhang, Manli Zhang, Zhiwu Lu, Tao Xiang, Ji-Rong Wen

To address this problem, we propose a graph convolutional network (GCN)-based label denoising (LDN) method to remove the irrelevant images.

Denoising Few-Shot Learning +1

Improving Multi-Turn Response Selection Models with Complementary Last-Utterance Selection by Instance Weighting

no code implementations 18 Feb 2020 Kun Zhou, Wayne Xin Zhao, Yutao Zhu, Ji-Rong Wen, Jingsong Yu

Open-domain retrieval-based dialogue systems require a considerable amount of training data to learn their parameters.

Meta-Learning across Meta-Tasks for Few-Shot Learning

no code implementations 11 Feb 2020 Nanyi Fei, Zhiwu Lu, Yizhao Gao, Jia Tian, Tao Xiang, Ji-Rong Wen

In this paper, we argue that the inter-meta-task relationships should be exploited and those tasks are sampled strategically to assist in meta-learning.

Domain Adaptation Few-Shot Learning +1

Few-Shot Learning as Domain Adaptation: Algorithm and Analysis

no code implementations 6 Feb 2020 Jiechao Guan, Zhiwu Lu, Tao Xiang, Ji-Rong Wen

Specifically, armed with a set transformer based attention module, we construct each episode with two sub-episodes without class overlap on the seen classes to simulate the domain shift between the seen and unseen classes.

Domain Adaptation Few-Shot Image Classification

SetRank: Learning a Permutation-Invariant Ranking Model for Information Retrieval

2 code implementations 12 Dec 2019 Liang Pang, Jun Xu, Qingyao Ai, Yanyan Lan, Xue-Qi Cheng, Ji-Rong Wen

In learning-to-rank for information retrieval, a ranking model is automatically learned from the data and then utilized to rank the sets of retrieved documents.

Information Retrieval Learning-To-Rank
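The permutation-invariance property is easy to illustrate: a self-attention layer with no positional encoding scores each document as a function of the whole retrieved set, so permuting the input permutes the scores identically. A toy sketch (not SetRank's actual architecture):

```python
import numpy as np

def set_scores(docs):
    """docs: (n_docs, dim). One softmax self-attention pass without positional
    encodings, followed by a trivial sum-pooling scoring head."""
    dim = docs.shape[1]
    logits = docs @ docs.T / np.sqrt(dim)
    logits = np.exp(logits - logits.max(axis=1, keepdims=True))
    attn = logits / logits.sum(axis=1, keepdims=True)
    mixed = attn @ docs                   # each doc attends to the whole set
    return mixed.sum(axis=1)              # one score per document

docs = np.random.default_rng(0).normal(size=(5, 8))
perm = np.array([3, 0, 4, 1, 2])
```

Because no positional signal enters the computation, reordering the candidate documents reorders the scores in exactly the same way (permutation equivariance of the score vector).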

Domain Adaptation for Person-Job Fit with Transferable Deep Global Match Network

no code implementations IJCNLP 2019 Shuqing Bian, Wayne Xin Zhao, Yang song, Tao Zhang, Ji-Rong Wen

Furthermore, we extend the match network and implement domain adaptation in three levels, sentence-level representation, sentence-level match, and global match.

Domain Adaptation

Mobile Video Action Recognition

no code implementations 27 Aug 2019 Yuqi Huo, Xiaoli Xu, Yao Lu, Yulei Niu, Zhiwu Lu, Ji-Rong Wen

In addition to motion vectors, we also provide a temporal fusion method to explicitly induce the temporal context.

Action Recognition Computer Vision

Personalizing Search Results Using Hierarchical RNN with Query-aware Attention

no code implementations 20 Aug 2019 Songwei Ge, Zhicheng Dou, Zhengbao Jiang, Jian-Yun Nie, Ji-Rong Wen

Our analysis reveals that the attention model is able to attribute higher weights to more related past sessions after fine training.

Unsupervised Adversarial Attacks on Deep Feature-based Retrieval with GAN

no code implementations 12 Jul 2019 Guoping Zhao, Mingyu Zhang, Jiajun Liu, Ji-Rong Wen

Such tendency indicates that the model indeed learned how to toy with both image retrieval systems and human eyes.

Image Classification Image Retrieval

Generating Long and Informative Reviews with Aspect-Aware Coarse-to-Fine Decoding

1 code implementation ACL 2019 Junyi Li, Wayne Xin Zhao, Ji-Rong Wen, Yang song

In this paper, we propose a novel review generation model by characterizing an elaborately designed aspect-aware coarse-to-fine generation process.

Review Generation Text Generation

A Long-Short Demands-Aware Model for Next-Item Recommendation

no code implementations 12 Feb 2019 Ting Bai, Pan Du, Wayne Xin Zhao, Ji-Rong Wen, Jian-Yun Nie

Recommending the right products is the central problem in recommender systems, but the right products should also be recommended at the right time to meet the demands of users, so as to maximize their values.

Recommendation Systems

Zero-Shot Learning with Sparse Attribute Propagation

no code implementations 11 Dec 2018 Nanyi Fei, Jiechao Guan, Zhiwu Lu, Tao Xiang, Ji-Rong Wen

The standard approach to ZSL requires a set of training images annotated with seen class labels and a semantic descriptor for seen/unseen classes (attribute vector is the most widely used).

Image Retrieval Zero-Shot Learning

Recursive Visual Attention in Visual Dialog

1 code implementation CVPR 2019 Yulei Niu, Hanwang Zhang, Manli Zhang, Jianhong Zhang, Zhiwu Lu, Ji-Rong Wen

Visual dialog is a challenging vision-language task, which requires the agent to answer multi-round questions about an image.

Question Answering Visual Dialog +1

Zero and Few Shot Learning with Semantic Feature Synthesis and Competitive Learning

no code implementations 19 Oct 2018 Zhiwu Lu, Jiechao Guan, Aoxue Li, Tao Xiang, An Zhao, Ji-Rong Wen

Specifically, we assume that each synthesised data point can belong to any unseen class; and the most likely two class candidates are exploited to learn a robust projection function in a competitive fashion.

Few-Shot Learning Zero-Shot Learning

Transferrable Feature and Projection Learning with Class Hierarchy for Zero-Shot Learning

no code implementations 19 Oct 2018 Aoxue Li, Zhiwu Lu, Jiechao Guan, Tao Xiang, Li-Wei Wang, Ji-Rong Wen

Inspired by the fact that an unseen class is not exactly 'unseen' if it belongs to the same superclass as a seen class, we propose a novel inductive ZSL model that leverages superclasses as the bridge between seen and unseen classes to narrow the domain gap.

Few-Shot Learning Zero-Shot Learning

KB4Rec: A Dataset for Linking Knowledge Bases with Recommender Systems

1 code implementation 30 Jul 2018 Wayne Xin Zhao, Gaole He, Hongjian Dou, Jin Huang, Siqi Ouyang, Ji-Rong Wen

Based on our linked dataset, we first perform some interesting qualitative analysis experiments, in which we discuss the effect of two important factors (i.e., popularity and recency) on whether a RS item can be linked to a KB entity.

Knowledge-Aware Recommendation

RUM: network Representation learning throUgh Multi-level structural information preservation

no code implementations 8 Oct 2017 Yanlei Yu, Zhiwu Lu, Jiajun Liu, Guoping Zhao, Ji-Rong Wen, Kai Zheng

We propose a novel network representation learning framework called RUM (network Representation learning throUgh Multi-level structural information preservation).

Representation Learning

Multi-Modal Multi-Scale Deep Learning for Large-Scale Image Annotation

no code implementations 5 Sep 2017 Yulei Niu, Zhiwu Lu, Ji-Rong Wen, Tao Xiang, Shih-Fu Chang

In this paper, we address two main issues in large-scale image annotation: 1) how to learn a rich feature representation suitable for predicting a diverse set of visual concepts ranging from object, scene to abstract concept; 2) how to annotate an image with the optimal number of class labels.

Zero-Shot Fine-Grained Classification by Deep Feature Learning with Semantics

no code implementations 4 Jul 2017 Aoxue Li, Zhiwu Lu, Li-Wei Wang, Tao Xiang, Xinqi Li, Ji-Rong Wen

In this paper, to address the two issues, we propose a two-phase framework for recognizing images from unseen fine-grained classes, i.e., zero-shot fine-grained classification.

Domain Adaptation Fine-Grained Image Classification +2

Temporal Embedding in Convolutional Neural Networks for Robust Learning of Abstract Snippets

no code implementations 18 Feb 2015 Jiajun Liu, Kun Zhao, Brano Kusy, Ji-Rong Wen, Raja Jurdak

The prediction of periodical time-series remains challenging due to various types of data distortions and misalignments.

Time Series

A General SIMD-based Approach to Accelerating Compression Algorithms

1 code implementation 6 Feb 2015 Wayne Xin Zhao, Xu-Dong Zhang, Daniel Lemire, Dongdong Shan, Jian-Yun Nie, Hongfei Yan, Ji-Rong Wen

Compression algorithms are important for data oriented tasks, especially in the era of Big Data.
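For readers unfamiliar with the integer codecs involved, here is a scalar (non-SIMD) sketch of variable-byte coding, one family of formats that such SIMD approaches accelerate; the byte layout below is one common convention, not necessarily the paper's.

```python
def varbyte_encode(nums):
    """7 payload bits per byte, least-significant group first;
    the high bit marks the final byte of each integer."""
    out = bytearray()
    for n in nums:
        while n >= 128:
            out.append(n & 0x7F)
            n >>= 7
        out.append(n | 0x80)
    return bytes(out)

def varbyte_decode(data):
    nums, value, shift = [], 0, 0
    for byte in data:
        if byte & 0x80:                                # terminator byte
            nums.append(value | ((byte & 0x7F) << shift))
            value, shift = 0, 0
        else:
            value |= byte << shift
            shift += 7
    return nums
```

The byte-at-a-time branching in the decoder is exactly the bottleneck that SIMD-based decoders replace with wide shuffle/mask instructions.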

Image classification by visual bag-of-words refinement and reduction

no code implementations 18 Jan 2015 Zhiwu Lu, Li-Wei Wang, Ji-Rong Wen

This paper presents a new framework for visual bag-of-words (BOW) refinement and reduction to overcome the drawbacks associated with the visual BOW model which has been widely used for image classification.

Classification General Classification +1

Can Image-Level Labels Replace Pixel-Level Labels for Image Parsing

no code implementations 7 Mar 2014 Zhiwu Lu, Zhen-Yong Fu, Tao Xiang, Li-Wei Wang, Ji-Rong Wen

By oversegmenting all the images into regions, we formulate noisily tagged image parsing as a weakly supervised sparse learning problem over all the regions, where the initial labels of each region are inferred from image-level labels.

Sparse Learning
