Search Results for author: Ruobing Xie

Found 70 papers, 39 papers with code

Enhanced Generative Recommendation via Content and Collaboration Integration

no code implementations • 27 Mar 2024 • Yidan Wang, Zhaochun Ren, Weiwei Sun, Jiyuan Yang, Zhixiang Liang, Xin Chen, Ruobing Xie, Su Yan, Xu Zhang, Pengjie Ren, Zhumin Chen, Xin Xin

However, existing generative recommendation approaches still encounter challenges in (i) effectively integrating user-item collaborative signals and item content information within a unified generative framework, and (ii) executing an efficient alignment between content information and collaborative signals.

Collaborative Filtering • Language Modelling • +1

PhD: A Prompted Visual Hallucination Evaluation Dataset

no code implementations • 17 Mar 2024 • Jiazhen Liu, Yuhan Fu, Ruobing Xie, Runquan Xie, Xingwu Sun, Fengzong Lian, Zhanhui Kang, Xirong Li

The rapid growth of Large Language Models (LLMs) has driven the development of Large Vision-Language Models (LVLMs).

Attribute • Common Sense Reasoning • +2

Mastering Text, Code and Math Simultaneously via Fusing Highly Specialized Language Models

no code implementations • 13 Mar 2024 • Ning Ding, Yulin Chen, Ganqu Cui, Xingtai Lv, Weilin Zhao, Ruobing Xie, Bowen Zhou, Zhiyuan Liu, Maosong Sun

Underlying data distributions of natural language, programming code, and mathematical symbols vary vastly, presenting a complex challenge for large language models (LLMs) that strive to achieve high performance across all three domains simultaneously.

Math

Controllable Preference Optimization: Toward Controllable Multi-Objective Alignment

no code implementations • 29 Feb 2024 • Yiju Guo, Ganqu Cui, Lifan Yuan, Ning Ding, Jiexin Wang, Huimin Chen, Bowen Sun, Ruobing Xie, Jie Zhou, Yankai Lin, Zhiyuan Liu, Maosong Sun

In practice, the multifaceted nature of human preferences inadvertently introduces what is known as the "alignment tax": a compromise where enhancements in alignment on one objective (e.g., harmlessness) can diminish performance on others (e.g., helpfulness).

Navigate

Beyond Natural Language: LLMs Leveraging Alternative Formats for Enhanced Reasoning and Communication

1 code implementation • 28 Feb 2024 • Weize Chen, Chenfei Yuan, Jiarui Yuan, Yusheng Su, Chen Qian, Cheng Yang, Ruobing Xie, Zhiyuan Liu, Maosong Sun

Natural language (NL) has long been the predominant format for human cognition and communication, and by extension, has been similarly pivotal in the development and application of Large Language Models (LLMs).

Plug-in Diffusion Model for Sequential Recommendation

1 code implementation • 5 Jan 2024 • Haokai Ma, Ruobing Xie, Lei Meng, Xin Chen, Xu Zhang, Leyu Lin, Zhanhui Kang

To address this issue, this paper presents a novel Plug-in Diffusion Model for Recommendation (PDRec) framework, which employs the diffusion model as a flexible plugin and jointly takes full advantage of the diffusion-generated user preferences on all items.

Image Generation • Model Optimization • +1
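As a rough illustration of the plug-in idea above, the sketch below treats a pre-trained diffusion model simply as a source of per-item preference scores that reweight observed clicks and surface a few unobserved items as soft positives. The function name, weighting scheme, and `top_k` are assumptions for illustration, not the released PDRec code.

```python
# Hypothetical use of diffusion-generated preference scores as a plug-in:
# reweight observed clicks and surface top unobserved items as soft positives.
def plugin_reweight(diffusion_scores, observed, top_k=2):
    # diffusion_scores: per-item scores from a trained diffusion model (list)
    # observed: set of item ids the user has interacted with
    weights = {i: diffusion_scores[i] for i in observed}
    unobserved = [i for i in range(len(diffusion_scores)) if i not in observed]
    soft_positives = sorted(unobserved, key=lambda i: -diffusion_scores[i])[:top_k]
    return weights, soft_positives

weights, soft = plugin_reweight([0.1, 0.8, 0.3, 0.9], observed={1}, top_k=1)
print(weights, soft)  # {1: 0.8} [3]
```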

MAVEN-Arg: Completing the Puzzle of All-in-One Event Understanding Dataset with Event Argument Annotation

1 code implementation • 15 Nov 2023 • Xiaozhi Wang, Hao Peng, Yong Guan, Kaisheng Zeng, Jianhui Chen, Lei Hou, Xu Han, Yankai Lin, Zhiyuan Liu, Ruobing Xie, Jie Zhou, Juanzi Li

Understanding events in texts is a core objective of natural language understanding, which requires detecting event occurrences, extracting event arguments, and analyzing inter-event relationships.

Event Argument Extraction • Event Detection • +3

Universal Multi-modal Multi-domain Pre-trained Recommendation

no code implementations • 3 Nov 2023 • Wenqi Sun, Ruobing Xie, Shuqing Bian, Wayne Xin Zhao, Jie Zhou

There is rapidly growing research interest in modeling user preferences via pre-training on multi-domain interactions for recommender systems.

Recommendation Systems

Variator: Accelerating Pre-trained Models with Plug-and-Play Compression Modules

1 code implementation • 24 Oct 2023 • Chaojun Xiao, Yuqi Luo, Wenbin Zhang, Pengle Zhang, Xu Han, Yankai Lin, Zhengyan Zhang, Ruobing Xie, Zhiyuan Liu, Maosong Sun, Jie Zhou

Pre-trained language models (PLMs) have achieved remarkable results on NLP tasks but at the expense of huge parameter sizes and the consequent computational costs.

Computational Efficiency

Thoroughly Modeling Multi-domain Pre-trained Recommendation as Language

no code implementations • 20 Oct 2023 • Zekai Qu, Ruobing Xie, Chaojun Xiao, Yuan Yao, Zhiyuan Liu, Fengzong Lian, Zhanhui Kang, Jie Zhou

With pre-trained language models (PLMs) widely verified in various NLP tasks, pioneering efforts have attempted to explore the possible cooperation of the general textual information in PLMs with the personalized behavioral information in user historical behavior sequences to enhance sequential recommendation (SR).

Informativeness • Language Modelling • +1

Boosting Inference Efficiency: Unleashing the Power of Parameter-Shared Pre-trained Language Models

no code implementations • 19 Oct 2023 • Weize Chen, Xiaoyue Xu, Xu Han, Yankai Lin, Ruobing Xie, Zhiyuan Liu, Maosong Sun, Jie Zhou

Parameter-shared pre-trained language models (PLMs) have emerged as a successful approach in resource-constrained environments, enabling substantial reductions in model storage and memory costs without significant performance compromise.

AgentCF: Collaborative Learning with Autonomous Language Agents for Recommender Systems

no code implementations • 13 Oct 2023 • Junjie Zhang, Yupeng Hou, Ruobing Xie, Wenqi Sun, Julian McAuley, Wayne Xin Zhao, Leyu Lin, Ji-Rong Wen

The optimized agents can also propagate their preferences to other agents in subsequent interactions, implicitly capturing the collaborative filtering idea.

Collaborative Filtering • Decision Making • +2

AgentVerse: Facilitating Multi-Agent Collaboration and Exploring Emergent Behaviors

1 code implementation • 21 Aug 2023 • Weize Chen, Yusheng Su, Jingwei Zuo, Cheng Yang, Chenfei Yuan, Chi-Min Chan, Heyang Yu, Yaxi Lu, Yi-Hsin Hung, Chen Qian, Yujia Qin, Xin Cong, Ruobing Xie, Zhiyuan Liu, Maosong Sun, Jie Zhou

Autonomous agents empowered by Large Language Models (LLMs) have undergone significant improvements, enabling them to generalize across a broad spectrum of tasks.

Learning from All Sides: Diversified Positive Augmentation via Self-distillation in Recommendation

no code implementations • 15 Aug 2023 • Chong Liu, Xiaoyang Liu, Ruobing Xie, Lixin Zhang, Feng Xia, Leyu Lin

Powerful positive item augmentation helps address the sparsity issue, yet few works jointly consider both the accuracy and the diversity of these augmented training labels.

Recommendation Systems • Retrieval

Emergent Modularity in Pre-trained Transformers

1 code implementation • 28 May 2023 • Zhengyan Zhang, Zhiyuan Zeng, Yankai Lin, Chaojun Xiao, Xiaozhi Wang, Xu Han, Zhiyuan Liu, Ruobing Xie, Maosong Sun, Jie Zhou

In analogy to human brains, we consider two main characteristics of modularity: (1) functional specialization of neurons: we evaluate whether each neuron is mainly specialized in a certain function, and find that the answer is yes.
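To make the notion of functional specialization concrete, here is a toy probe (our assumption, not the paper's protocol): a neuron counts as specialized when one function category dominates its mean activation.

```python
import numpy as np

def specialization_scores(acts: np.ndarray, labels: np.ndarray) -> np.ndarray:
    # acts: (n_samples, n_neurons) activations; labels: (n_samples,) function ids
    means = np.abs(np.stack([acts[labels == c].mean(axis=0)
                             for c in np.unique(labels)]))
    # share of the dominant function per neuron; 1.0 = fully specialized
    return means.max(axis=0) / (means.sum(axis=0) + 1e-9)

rng = np.random.default_rng(0)
print(specialization_scores(rng.random((100, 8)), rng.integers(0, 3, 100)).round(2))
```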

Recyclable Tuning for Continual Pre-training

1 code implementation • 15 May 2023 • Yujia Qin, Cheng Qian, Xu Han, Yankai Lin, Huadong Wang, Ruobing Xie, Zhiyuan Liu, Maosong Sun, Jie Zhou

In pilot studies, we find that after continual pre-training, the upgraded PLM remains compatible with the outdated adapted weights to some extent.

Large Language Models are Zero-Shot Rankers for Recommender Systems

1 code implementation • 15 May 2023 • Yupeng Hou, Junjie Zhang, Zihan Lin, Hongyu Lu, Ruobing Xie, Julian McAuley, Wayne Xin Zhao

Recently, large language models (LLMs) (e.g., GPT-4) have demonstrated impressive general-purpose task-solving abilities, including the potential to approach recommendation tasks.

Recommendation Systems
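A minimal sketch of how such zero-shot ranking can be framed: the user's history and the candidate set are rendered into a single instruction prompt for the LLM. The template below is our assumption, not the paper's exact prompt.

```python
# Hypothetical prompt template for zero-shot LLM ranking; the wording and
# letter-keyed candidates are illustrative assumptions.
def build_ranking_prompt(history, candidates):
    hist = "\n".join(f"{i + 1}. {title}" for i, title in enumerate(history))
    cand = "\n".join(f"[{chr(65 + i)}] {title}" for i, title in enumerate(candidates))
    return (
        "I've watched the following movies, in order:\n"
        f"{hist}\n\n"
        f"There are {len(candidates)} candidate movies:\n"
        f"{cand}\n\n"
        "Rank all candidates from most to least likely to be watched next, "
        "answering with the bracketed letters only."
    )

print(build_ranking_prompt(["Heat", "Ronin"], ["Collateral", "Se7en"]))
```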

Recommendation as Instruction Following: A Large Language Model Empowered Recommendation Approach

no code implementations • 11 May 2023 • Junjie Zhang, Ruobing Xie, Yupeng Hou, Wayne Xin Zhao, Leyu Lin, Ji-Rong Wen

Inspired by the recent progress on large language models (LLMs), we take a different approach to developing the recommendation models, considering recommendation as instruction following by LLMs.

Instruction Following • Language Modelling • +2

Attacking Pre-trained Recommendation

1 code implementation • 6 May 2023 • Yiqing Wu, Ruobing Xie, Zhao Zhang, Yongchun Zhu, Fuzhen Zhuang, Jie Zhou, Yongjun Xu, Qing He

Recently, a series of pioneer studies have shown the potency of pre-trained models in sequential recommendation, illuminating the path of building an omniscient unified pre-trained recommendation model for different downstream recommendation tasks.

Sequential Recommendation

Triple Sequence Learning for Cross-domain Recommendation

no code implementations • 11 Apr 2023 • Haokai Ma, Ruobing Xie, Lei Meng, Xin Chen, Xu Zhang, Leyu Lin, Jie Zhou

To address this issue, we present a novel framework, termed triple sequence learning for cross-domain recommendation (Tri-CDR), which jointly models the source, target, and mixed behavior sequences to highlight the global and target preference and precisely model the triple correlation in CDR.

Contrastive Learning

A Survey on Causal Inference for Recommendation

no code implementations • 21 Mar 2023 • Huishi Luo, Fuzhen Zhuang, Ruobing Xie, Hengshu Zhu, Deqing Wang

Recently, causal inference has attracted increasing attention from researchers of recommender systems (RS), which analyzes the relationship between a cause and its effect and has a wide range of real-world applications in multiple fields.

Causal Inference • Counterfactual • +2

Adversarial Learning Data Augmentation for Graph Contrastive Learning in Recommendation

1 code implementation • 5 Feb 2023 • Junjie Huang, Qi Cao, Ruobing Xie, Shaoliang Zhang, Feng Xia, Huawei Shen, Xueqi Cheng

To reduce the influence of data sparsity, Graph Contrastive Learning (GCL) is adopted in GNN-based CF methods for enhancing performance.

Contrastive Learning • Data Augmentation

Visually Grounded Commonsense Knowledge Acquisition

1 code implementation • 22 Nov 2022 • Yuan Yao, Tianyu Yu, Ao Zhang, Mengdi Li, Ruobing Xie, Cornelius Weber, Zhiyuan Liu, Hai-Tao Zheng, Stefan Wermter, Tat-Seng Chua, Maosong Sun

In this work, we present CLEVER, which formulates CKE as a distantly supervised multi-instance learning problem, where models learn to summarize commonsense relations from a bag of images about an entity pair without any human annotation on image instances.

Language Modelling

Pruning Pre-trained Language Models Without Fine-Tuning

1 code implementation • 12 Oct 2022 • Ting Jiang, Deqing Wang, Fuzhen Zhuang, Ruobing Xie, Feng Xia

These methods, such as movement pruning, use first-order information to prune PLMs while fine-tuning the remaining weights.
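For context, movement pruning's first-order importance score accumulates the product of each weight and its gradient during fine-tuning, so weights moving toward zero score low. The PyTorch sketch below (illustrative hyperparameters) shows the score update and a simple threshold mask; it is not this paper's method, which prunes without fine-tuning.

```python
import torch

def update_movement_scores(weight: torch.Tensor, scores: torch.Tensor) -> None:
    # Call after loss.backward(): S <- S - (dL/dW) * W, accumulated each step;
    # weights moving toward zero receive low scores.
    scores -= weight.grad * weight.data

def prune_by_scores(weight: torch.Tensor, scores: torch.Tensor, sparsity=0.5):
    # Zero the weights whose accumulated scores fall in the bottom `sparsity`.
    k = max(1, int(scores.numel() * sparsity))
    threshold = scores.flatten().kthvalue(k).values
    weight.data *= (scores > threshold).float()

w = torch.nn.Parameter(torch.randn(4, 4))
w.grad = torch.randn(4, 4)  # stand-in for a real backward pass
s = torch.zeros_like(w)
update_movement_scores(w, s)
prune_by_scores(w, s)
```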

Better Pre-Training by Reducing Representation Confusion

no code implementations • 9 Oct 2022 • Haojie Zhang, Mingfei Liang, Ruobing Xie, Zhenlong Sun, Bo Zhang, Leyu Lin

Motivated by the above investigation, we propose two novel techniques to improve pre-trained language models: Decoupled Directional Relative Position (DDRP) encoding and MTH pre-training objective.

Language Modelling • Position • +1

Reweighting Clicks with Dwell Time in Recommendation

no code implementations • 19 Sep 2022 • Ruobing Xie, Lin Ma, Shaoliang Zhang, Feng Xia, Leyu Lin

Precisely, we first define a new behavior named valid read, which helps to select high-quality click instances for different users and items via dwell time.

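A toy version of the valid-read idea (the thresholding rule, the 0.5 factor, and the weight cap are our assumptions, not the paper's formulation): a click counts only when its dwell time beats a per-user/per-item expectation, and surviving clicks are reweighted by dwell time.

```python
# Hypothetical dwell-time filter selecting "valid read" clicks.
def label_valid_reads(clicks, user_avg, item_avg, alpha=0.5, cap=3.0):
    # clicks: iterable of (user, item, dwell_seconds)
    # user_avg/item_avg: expected dwell time per user / per item
    labeled = []
    for u, i, dwell in clicks:
        threshold = alpha * (user_avg[u] + item_avg[i]) / 2
        if dwell >= threshold:  # click qualifies as a "valid read"
            weight = min(dwell / max(threshold, 1e-6), cap)
            labeled.append((u, i, weight))
    return labeled

print(label_valid_reads([("u1", "i1", 30.0)], {"u1": 40.0}, {"i1": 20.0}))
```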

Multi-granularity Item-based Contrastive Recommendation

no code implementations • 4 Jul 2022 • Ruobing Xie, Zhijie Qiu, Bo Zhang, Leyu Lin

Specifically, we build three item-based CL tasks as a set of plug-and-play auxiliary objectives to capture item correlations in feature, semantic and session levels.

Contrastive Learning • Recommendation Systems • +1
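The plug-and-play auxiliary objectives can be read as instances of a standard InfoNCE loss, where the three granularities would differ only in how positive item pairs are constructed (feature-, semantic-, or session-level). Below is a generic InfoNCE sketch in PyTorch; the temperature is an assumed hyperparameter.

```python
import torch
import torch.nn.functional as F

def info_nce(anchor, positive, temperature=0.1):
    # anchor/positive: (batch, dim) paired item representations
    a = F.normalize(anchor, dim=-1)
    p = F.normalize(positive, dim=-1)
    logits = a @ p.t() / temperature   # (batch, batch) similarity matrix
    targets = torch.arange(a.size(0))  # diagonal entries are the positives
    return F.cross_entropy(logits, targets)

loss = info_nce(torch.randn(16, 64), torch.randn(16, 64))
```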

Customized Conversational Recommender Systems

no code implementations • 30 Jun 2022 • Shuokai Li, Yongchun Zhu, Ruobing Xie, Zhenwei Tang, Zhao Zhang, Fuzhen Zhuang, Qing He, Hui Xiong

In this paper, we propose two key points for CRS to improve the user experience: (1) speaking like a human: humans can speak in different styles according to the current dialogue context.

Meta-Learning • Recommendation Systems

Prompt Tuning for Discriminative Pre-trained Language Models

1 code implementation • Findings (ACL) 2022 • Yuan Yao, Bowen Dong, Ao Zhang, Zhengyan Zhang, Ruobing Xie, Zhiyuan Liu, Leyu Lin, Maosong Sun, Jianyong Wang

Recent works have shown promising results of prompt tuning in stimulating pre-trained language models (PLMs) for natural language processing (NLP) tasks.

Language Modelling • Question Answering • +2

Personalized Prompt for Sequential Recommendation

no code implementations • 19 May 2022 • Yiqing Wu, Ruobing Xie, Yongchun Zhu, Fuzhen Zhuang, Xu Zhang, Leyu Lin, Qing He

Specifically, we build the personalized soft prefix prompt via a prompt generator based on user profiles and enable a sufficient training of prompts via a prompt-oriented contrastive learning with both prompt- and behavior-based augmentations.

Contrastive Learning • Sequential Recommendation
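As a concrete (and simplified, with assumed dimensions) reading of the personalized soft prefix prompt, a small generator can map user-profile features to a sequence of prefix embeddings prepended to the behavior-sequence embeddings:

```python
import torch
import torch.nn as nn

class PromptGenerator(nn.Module):
    # Illustrative sketch, not the paper's released architecture.
    def __init__(self, profile_dim=24, prompt_len=4, hidden=64, emb_dim=32):
        super().__init__()
        self.prompt_len, self.emb_dim = prompt_len, emb_dim
        self.net = nn.Sequential(
            nn.Linear(profile_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, prompt_len * emb_dim),
        )

    def forward(self, profile, behavior_emb):
        # profile: (batch, profile_dim); behavior_emb: (batch, seq_len, emb_dim)
        prefix = self.net(profile).view(-1, self.prompt_len, self.emb_dim)
        return torch.cat([prefix, behavior_emb], dim=1)  # prepend soft prompt

gen = PromptGenerator()
out = gen(torch.randn(8, 24), torch.randn(8, 10, 32))  # (8, 14, 32)
```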

Selective Fairness in Recommendation via Prompts

1 code implementation • 10 May 2022 • Yiqing Wu, Ruobing Xie, Yongchun Zhu, Fuzhen Zhuang, Xiang Ao, Xu Zhang, Leyu Lin, Qing He

In this work, we define the selective fairness task, where users can flexibly choose the sensitive attributes on which the recommendation model should be bias-free.

Attribute • Fairness • +1

User-Centric Conversational Recommendation with Multi-Aspect User Modeling

1 code implementation • 20 Apr 2022 • Shuokai Li, Ruobing Xie, Yongchun Zhu, Xiang Ao, Fuzhen Zhuang, Qing He

In this work, we highlight that the user's historical dialogue sessions and look-alike users are essential sources of user preferences besides the current dialogue session in CRS.

Dialogue Generation • Dialogue Understanding • +1

Multi-view Multi-behavior Contrastive Learning in Recommendation

1 code implementation • 20 Mar 2022 • Yiqing Wu, Ruobing Xie, Yongchun Zhu, Xiang Ao, Xin Chen, Xu Zhang, Fuzhen Zhuang, Leyu Lin, Qing He

We argue that MBR models should: (1) model the coarse-grained commonalities between different behaviors of a user, (2) consider both individual sequence view and global graph view in multi-behavior modeling, and (3) capture the fine-grained differences between multiple behaviors of a user.

Contrastive Learning

Contrastive Cross-domain Recommendation in Matching

1 code implementation • 2 Dec 2021 • Ruobing Xie, Qi Liu, Liangdong Wang, Shukai Liu, Bo Zhang, Leyu Lin

Cross-domain recommendation (CDR) aims to provide better recommendation results in the target domain with the help of the source domain, which is widely used and explored in real-world systems.

Contrastive Learning • Representation Learning • +1

Curriculum Disentangled Recommendation with Noisy Multi-feedback

1 code implementation • NeurIPS 2021 • Hong Chen, Yudong Chen, Xin Wang, Ruobing Xie, Rui Wang, Feng Xia, Wenwu Zhu

However, learning such disentangled representations from multi-feedback data is challenging because i) multi-feedback is complex: there exist complex relations among different types of feedback (e.g., click, unclick, and dislike) as well as various user intentions, and ii) multi-feedback is noisy: there exists noisy (useless) information in both features and labels, which may deteriorate the recommendation performance.

Denoising • Representation Learning

MIC: Model-agnostic Integrated Cross-channel Recommenders

no code implementations • 22 Oct 2021 • Yujie Lu, Ping Nie, Shengyu Zhang, Ming Zhao, Ruobing Xie, William Yang Wang, Yi Ren

However, existing work is primarily built upon pre-defined retrieval channels, including User-CF (U2U), Item-CF (I2I), and Embedding-based Retrieval (U2I), and thus captures only the limited user-item correlations that derive from partial information of latent interactions.

Recommendation Systems • Retrieval • +2

Personalized Transfer of User Preferences for Cross-domain Recommendation

1 code implementation • 21 Oct 2021 • Yongchun Zhu, Zhenwei Tang, Yudan Liu, Fuzhen Zhuang, Ruobing Xie, Xu Zhang, Leyu Lin, Qing He

Specifically, a meta network fed with users' characteristic embeddings is learned to generate personalized bridge functions to achieve personalized transfer of preferences for each user.

Recommendation Systems
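The personalized bridge function can be pictured as follows: a meta network consumes a user's characteristic embedding and emits the entries of a user-specific linear map from source-domain to target-domain embeddings. Dimensions and layer sizes below are illustrative assumptions, not the released implementation.

```python
import torch
import torch.nn as nn

class MetaBridge(nn.Module):
    # Sketch of a meta network generating per-user bridge functions.
    def __init__(self, char_dim=32, emb_dim=16, hidden=64):
        super().__init__()
        self.emb_dim = emb_dim
        # emits the entries of a per-user (emb_dim x emb_dim) mapping matrix
        self.meta = nn.Sequential(
            nn.Linear(char_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, emb_dim * emb_dim),
        )

    def forward(self, char_emb, src_user_emb):
        # char_emb: (batch, char_dim); src_user_emb: (batch, emb_dim)
        W = self.meta(char_emb).view(-1, self.emb_dim, self.emb_dim)
        # personalized transfer: target estimate = W_u @ source embedding
        return torch.bmm(W, src_user_emb.unsqueeze(-1)).squeeze(-1)

bridge = MetaBridge()
tgt = bridge(torch.randn(8, 32), torch.randn(8, 16))  # (8, 16)
```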

Open Hierarchical Relation Extraction

1 code implementation • NAACL 2021 • Kai Zhang, Yuan Yao, Ruobing Xie, Xu Han, Zhiyuan Liu, Fen Lin, Leyu Lin, Maosong Sun

To establish the bidirectional connections between OpenRE and relation hierarchy, we propose the task of open hierarchical relation extraction and present a novel OHRE framework for the task.

Clustering • Relation • +1

Learning to Expand Audience via Meta Hybrid Experts and Critics for Recommendation and Advertising

3 code implementations • 31 May 2021 • Yongchun Zhu, Yudan Liu, Ruobing Xie, Fuzhen Zhuang, Xiaobo Hao, Kaikai Ge, Xu Zhang, Leyu Lin, Juan Cao

Besides, MetaHeac has been successfully deployed in WeChat for the promotion of both contents and advertisements, leading to great improvement in the quality of marketing.

Marketing • Meta-Learning • +1

Transfer-Meta Framework for Cross-domain Recommendation to Cold-Start Users

no code implementations • 11 May 2021 • Yongchun Zhu, Kaikai Ge, Fuzhen Zhuang, Ruobing Xie, Dongbo Xi, Xu Zhang, Leyu Lin, Qing He

With the advantage of meta learning, which has good generalization ability to novel tasks, we propose a transfer-meta framework for CDR (TMCDR) which has a transfer stage and a meta stage.

Meta-Learning • Recommendation Systems

Long Short-Term Temporal Meta-learning in Online Recommendation

no code implementations • 8 May 2021 • Ruobing Xie, Yalong Wang, Rui Wang, Yuanfu Lu, Yuanhang Zou, Feng Xia, Leyu Lin

An effective online recommendation system should jointly capture users' long-term and short-term preferences in both users' internal behaviors (from the target recommendation task) and external behaviors (from other tasks).

Meta-Learning

Understanding WeChat User Preferences and "Wow" Diffusion

1 code implementation • 4 Mar 2021 • Fanjin Zhang, Jie Tang, Xueyi Liu, Zhenyu Hou, Yuxiao Dong, Jing Zhang, Xiao Liu, Ruobing Xie, Kai Zhuang, Xu Zhang, Leyu Lin, Philip S. Yu

"Top Stories" is a novel friend-enhanced recommendation engine in WeChat, in which users can read articles based on preferences of both their own and their friends.

Graph Representation Learning • Social and Information Networks

UPRec: User-Aware Pre-training for Recommender Systems

no code implementations • 22 Feb 2021 • Chaojun Xiao, Ruobing Xie, Yuan Yao, Zhiyuan Liu, Maosong Sun, Xu Zhang, Leyu Lin

Existing sequential recommendation methods rely on large amounts of training data and usually suffer from the data sparsity problem.

Self-Supervised Learning • Sequential Recommendation

Improving Accuracy and Diversity in Matching of Recommendation with Diversified Preference Network

no code implementations • 7 Feb 2021 • Ruobing Xie, Qi Liu, Shukai Liu, Ziwei Zhang, Peng Cui, Bo Zhang, Leyu Lin

In this paper, we propose a novel Heterogeneous graph neural network framework for diversified recommendation (GraphDR) in matching to improve both recommendation accuracy and diversity.

Graph Attention • Recommendation Systems

Denoising Relation Extraction from Document-level Distant Supervision

1 code implementation • EMNLP 2020 • Chaojun Xiao, Yuan Yao, Ruobing Xie, Xu Han, Zhiyuan Liu, Maosong Sun, Fen Lin, Leyu Lin

Distant supervision (DS) has been widely used to generate auto-labeled data for sentence-level relation extraction (RE), which improves RE performance.

Denoising • Document-level Relation Extraction • +2

Knowledge Transfer via Pre-training for Recommendation: A Review and Prospect

no code implementations • 19 Sep 2020 • Zheni Zeng, Chaojun Xiao, Yuan Yao, Ruobing Xie, Zhiyuan Liu, Fen Lin, Leyu Lin, Maosong Sun

Recommender systems aim to provide item recommendations for users, and usually face the data sparsity problem (e.g., cold start) in real-world scenarios.

Recommendation Systems • Transfer Learning

Connecting Embeddings for Knowledge Graph Entity Typing

1 code implementation • ACL 2020 • Yu Zhao, Anxiang Zhang, Ruobing Xie, Kang Liu, Xiaojie Wang

In this paper, we propose a novel approach for KG entity typing which is trained by jointly utilizing local typing knowledge from existing entity type assertions and global triple knowledge from KGs.

Entity Typing • Knowledge Graph Completion • +1

FAQ-based Question Answering via Knowledge Anchors

no code implementations • 14 Nov 2019 • Ruobing Xie, Yanan Lu, Fen Lin, Leyu Lin

In this paper, we propose a novel Knowledge Anchor based Question Answering (KAQA) framework for FAQ-based QA to better understand questions and retrieve more appropriate answers.

Graph Construction • Knowledge Graphs • +2

Neural Snowball for Few-Shot Relation Learning

1 code implementation • 29 Aug 2019 • Tianyu Gao, Xu Han, Ruobing Xie, Zhiyuan Liu, Fen Lin, Leyu Lin, Maosong Sun

To address new relations with few-shot instances, we propose a novel bootstrapping approach, Neural Snowball, to learn new relations by transferring semantic knowledge about existing relations.

Knowledge Graphs • Relation • +1

Knowledge Representation Learning: A Quantitative Review

2 code implementations • 28 Dec 2018 • Yankai Lin, Xu Han, Ruobing Xie, Zhiyuan Liu, Maosong Sun

Knowledge representation learning (KRL) aims to represent entities and relations of a knowledge graph in a low-dimensional semantic space, and has been widely used in massive knowledge-driven tasks.

General Classification • Information Retrieval • +7

Language Modeling with Sparse Product of Sememe Experts

1 code implementation • EMNLP 2018 • Yihong Gu, Jun Yan, Hao Zhu, Zhiyuan Liu, Ruobing Xie, Maosong Sun, Fen Lin, Leyu Lin

Most language modeling methods rely on large-scale data to statistically learn the sequential patterns of words.

Language Modelling

Cross-lingual Lexical Sememe Prediction

1 code implementation • EMNLP 2018 • Fanchao Qi, Yankai Lin, Maosong Sun, Hao Zhu, Ruobing Xie, Zhiyuan Liu

We propose a novel framework to model correlations between sememes and multi-lingual words in low-dimensional semantic space for sememe prediction.

Learning Word Embeddings • Multilingual Word Embeddings

Incorporating Chinese Characters of Words for Lexical Sememe Prediction

1 code implementation • ACL 2018 • Huiming Jin, Hao Zhu, Zhiyuan Liu, Ruobing Xie, Maosong Sun, Fen Lin, Leyu Lin

However, existing methods of lexical sememe prediction typically rely on the external context of words to represent the meaning, which usually fails to deal with low-frequency and out-of-vocabulary words.

Common Sense Reasoning

Improved Word Representation Learning with Sememes

1 code implementation • ACL 2017 • Yilin Niu, Ruobing Xie, Zhiyuan Liu, Maosong Sun

The key idea is to utilize word sememes to capture exact meanings of a word within specific contexts accurately.

Common Sense Reasoning • Language Modelling • +6

Does William Shakespeare REALLY Write Hamlet? Knowledge Representation Learning with Confidence

1 code implementation • 9 May 2017 • Ruobing Xie, Zhiyuan Liu, Fen Lin, Leyu Lin

Experimental results demonstrate that our confidence-aware models achieve significant and consistent improvements on all tasks, which confirms the capability of CKRL modeling confidence with structural information in both KG noise detection and knowledge representation learning.

Representation Learning • Triple Classification
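A compact way to see confidence-aware KRL: a standard TransE margin loss scaled per triple by a confidence value, so suspected-noisy triples contribute less. This sketch takes the confidences as given and omits the paper's confidence-update machinery.

```python
import torch

def transe_score(h, r, t):
    # lower is better: plausibility of triple (h, r, t) under TransE
    return torch.norm(h + r - t, p=1, dim=-1)

def ckrl_loss(pos, neg, confidence, margin=1.0):
    # pos/neg: tuples of (h, r, t) embedding batches; confidence: (batch,)
    pos_s, neg_s = transe_score(*pos), transe_score(*neg)
    return (confidence * torch.clamp(margin + pos_s - neg_s, min=0.0)).mean()

d = 50
pos = tuple(torch.randn(32, d) for _ in range(3))
neg = tuple(torch.randn(32, d) for _ in range(3))
loss = ckrl_loss(pos, neg, confidence=torch.rand(32))
```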

Neural Emoji Recommendation in Dialogue Systems

no code implementations • 14 Dec 2016 • Ruobing Xie, Zhiyuan Liu, Rui Yan, Maosong Sun

It indicates that our method could well capture the contextual information and emotion flow in dialogues, which is significant for emoji recommendation.

General Classification

Knowledge Representation via Joint Learning of Sequential Text and Knowledge Graphs

no code implementations • 22 Sep 2016 • Jiawei Wu, Ruobing Xie, Zhiyuan Liu, Maosong Sun

There are two main challenges for constructing knowledge representations from plain texts: (1) how to take full advantage of the sequential contexts of entities in plain texts for KRL.

Informativeness • Knowledge Graphs • +4

Image-embodied Knowledge Representation Learning

1 code implementation • 22 Sep 2016 • Ruobing Xie, Zhiyuan Liu, Huanbo Luan, Maosong Sun

More specifically, we first construct representations for all images of an entity with a neural image encoder.

General Classification • Representation Learning • +1
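The image-side construction can be sketched as follows (the encoder output size and the attention form are assumptions): each image feature of an entity is projected into the entity embedding space, and an attention over images yields one aggregated image-based representation.

```python
import torch
import torch.nn as nn

class ImageAggregator(nn.Module):
    # Illustrative sketch of aggregating an entity's image embeddings.
    def __init__(self, img_dim=512, emb_dim=50):
        super().__init__()
        self.proj = nn.Linear(img_dim, emb_dim)  # project image features

    def forward(self, img_feats, entity_emb):
        # img_feats: (n_images, img_dim); entity_emb: (emb_dim,)
        projected = self.proj(img_feats)                     # (n_images, emb_dim)
        attn = torch.softmax(projected @ entity_emb, dim=0)  # attention over images
        return (attn.unsqueeze(-1) * projected).sum(dim=0)   # aggregated image rep

agg = ImageAggregator()
rep = agg(torch.randn(5, 512), torch.randn(50))  # (50,)
```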
