Search Results for author: Bowen Jin

Found 17 papers, 9 papers with code

Graph Chain-of-Thought: Augmenting Large Language Models by Reasoning on Graphs

1 code implementation • 10 Apr 2024 • Bowen Jin, Chulin Xie, Jiawei Zhang, Kashob Kumar Roy, Yu Zhang, Suhang Wang, Yu Meng, Jiawei Han

Then, we propose a simple and effective framework called Graph Chain-of-thought (Graph-CoT) to augment LLMs with graphs by encouraging LLMs to reason on the graph iteratively.
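The snippet above describes an iterative loop: the LLM alternates between reasoning and interacting with the graph until it can answer. A minimal sketch of that loop, with a hard-coded stand-in for the LLM call and a toy citation graph (all names here are hypothetical, not from the paper's code):

```python
# Toy sketch of a Graph-CoT style loop: the model repeatedly proposes a
# graph action, the action is executed, and the result is appended to the
# context until the model decides to finish. GRAPH and ask_llm are
# illustrative stand-ins, not the paper's actual interface.

GRAPH = {  # toy citation graph: paper -> cited papers
    "PaperA": ["PaperB", "PaperC"],
    "PaperB": ["PaperC"],
    "PaperC": [],
}

def ask_llm(context):
    """Stand-in for an LLM call that decides the next graph action.
    Here the 'policy' is hard-coded purely for illustration."""
    last = context[-1]
    if last.startswith("neighbors:"):
        # Once neighbor information is in context, answer with it.
        return ("finish", last.split(":", 1)[1])
    return ("get_neighbors", "PaperA")

def graph_cot(question, max_steps=5):
    """Iterate: reason -> act on the graph -> observe -> repeat."""
    context = [question]
    for _ in range(max_steps):
        action, arg = ask_llm(context)
        if action == "finish":
            return arg
        if action == "get_neighbors":
            context.append("neighbors:" + ",".join(GRAPH[arg]))
    return None
```

The point of the loop is that graph traversal happens outside the model, one step at a time, rather than serializing the whole graph into the prompt.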

Towards Unified Multi-Modal Personalization: Large Vision-Language Models for Generative Recommendation and Beyond

no code implementations • 15 Mar 2024 • Tianxin Wei, Bowen Jin, Ruirui Li, Hansi Zeng, Zhengyang Wang, Jianhui Sun, Qingyu Yin, Hanqing Lu, Suhang Wang, Jingrui He, Xianfeng Tang

Developing a universal model that can effectively harness heterogeneous resources and respond to a wide range of personalized needs has been a longstanding community aspiration.

Explanation Generation • Image Generation

Improving Retrieval in Theme-specific Applications using a Corpus Topical Taxonomy

1 code implementation • 7 Mar 2024 • SeongKu Kang, Shivam Agarwal, Bowen Jin, Dongha Lee, Hwanjo Yu, Jiawei Han

Document retrieval has greatly benefited from the advancements of large-scale pre-trained language models (PLMs).

Retrieval

RAM-EHR: Retrieval Augmentation Meets Clinical Predictions on Electronic Health Records

no code implementations • 25 Feb 2024 • Ran Xu, Wenqi Shi, Yue Yu, Yuchen Zhuang, Bowen Jin, May D. Wang, Joyce C. Ho, Carl Yang

We present RAM-EHR, a Retrieval AugMentation pipeline to improve clinical predictions on Electronic Health Records (EHRs).

Retrieval

Grasping the Essentials: Tailoring Large Language Models for Zero-Shot Relation Extraction

no code implementations • 17 Feb 2024 • Sizhe Zhou, Yu Meng, Bowen Jin, Jiawei Han

(2) We fine-tune a bidirectional Small Language Model (SLM) using these initial seeds to learn the relations for the target domain.

Few-Shot Learning • Language Modelling +3

Large Language Models on Graphs: A Comprehensive Survey

1 code implementation • 5 Dec 2023 • Bowen Jin, Gang Liu, Chi Han, Meng Jiang, Heng Ji, Jiawei Han

Moreover, although LLMs have demonstrated pure text-based reasoning ability, it remains underexplored whether this ability generalizes to graphs (i.e., graph-based reasoning).

Language Modelling

Scalable and Effective Generative Information Retrieval

1 code implementation • 15 Nov 2023 • Hansi Zeng, Chen Luo, Bowen Jin, Sheikh Muhammad Sarwar, Tianxin Wei, Hamed Zamani

This paper represents an important milestone in generative retrieval research by showing, for the first time, that generative retrieval models can be trained to perform effectively on large-scale standard retrieval benchmarks.

Information Retrieval • Retrieval

"Why Should I Review This Paper?" Unifying Semantic, Topic, and Citation Factors for Paper-Reviewer Matching

no code implementations • 23 Oct 2023 • Yu Zhang, Yanzhen Shen, Xiusi Chen, Bowen Jin, Jiawei Han

As many academic conferences are overwhelmed by a rapidly increasing number of paper submissions, automatically finding appropriate reviewers for each submission has become more urgent than ever.

Information Retrieval • Language Modelling +1

Language Models As Semantic Indexers

no code implementations • 11 Oct 2023 • Bowen Jin, Hansi Zeng, Guoyin Wang, Xiusi Chen, Tianxin Wei, Ruirui Li, Zhengyang Wang, Zheng Li, Yang Li, Hanqing Lu, Suhang Wang, Jiawei Han, Xianfeng Tang

A semantic identifier (ID) is an important concept in information retrieval that aims to preserve the semantics of objects, such as documents and items, inside their IDs.

Contrastive Learning • Information Retrieval +2

Learning Multiplex Embeddings on Text-rich Networks with One Text Encoder

no code implementations • 10 Oct 2023 • Bowen Jin, Wentao Zhang, Yu Zhang, Yu Meng, Han Zhao, Jiawei Han

Mainstream text representation learning methods use pretrained language models (PLMs) to generate one embedding for each text unit, expecting that all types of relations between texts can be captured by these single-view embeddings.

Representation Learning

Weakly Supervised Multi-Label Classification of Full-Text Scientific Papers

1 code implementation • 24 Jun 2023 • Yu Zhang, Bowen Jin, Xiusi Chen, Yanzhen Shen, Yunyi Zhang, Yu Meng, Jiawei Han

Instead of relying on human-annotated training samples to build a classifier, weakly supervised scientific paper classification aims to classify papers only using category descriptions (e.g., category names, category-indicative keywords).

Multi-Label Classification

Patton: Language Model Pretraining on Text-Rich Networks

no code implementations • 20 May 2023 • Bowen Jin, Wentao Zhang, Yu Zhang, Yu Meng, Xinyang Zhang, Qi Zhu, Jiawei Han

A real-world text corpus sometimes comprises not only text documents but also semantic links between them (e.g., academic papers in a bibliographic network are linked by citations and co-authorships).

Language Modelling • Masked Language Modeling +1

Edgeformers: Graph-Empowered Transformers for Representation Learning on Textual-Edge Networks

1 code implementation • 21 Feb 2023 • Bowen Jin, Yu Zhang, Yu Meng, Jiawei Han

Edges in many real-world social/information networks are associated with rich text information (e.g., user-user communications or user-product reviews).

Edge Classification • Link Prediction +1

The Effect of Metadata on Scientific Literature Tagging: A Cross-Field Cross-Model Study

1 code implementation • 7 Feb 2023 • Yu Zhang, Bowen Jin, Qi Zhu, Yu Meng, Jiawei Han

Due to the exponential growth of scientific publications on the Web, there is a pressing need to tag each paper with fine-grained topics so that researchers can follow the fields they are interested in rather than drown in the whole literature.

Language Modelling • Multi Label Text Classification +3

Heterformer: Transformer-based Deep Node Representation Learning on Heterogeneous Text-Rich Networks

1 code implementation • 20 May 2022 • Bowen Jin, Yu Zhang, Qi Zhu, Jiawei Han

In heterogeneous text-rich networks, this task is more challenging due to (1) presence or absence of text: Some nodes are associated with rich textual information, while others are not; (2) diversity of types: Nodes and edges of multiple types form a heterogeneous network structure.

Clustering • Graph Attention +5

Hybrid Encoder: Towards Efficient and Precise Native Ads Recommendation via Hybrid Transformer Encoding Networks

no code implementations • 22 Apr 2021 • Junhan Yang, Zheng Liu, Bowen Jin, Jianxun Lian, Defu Lian, Akshay Soni, Eun Yong Kang, Yajun Wang, Guangzhong Sun, Xing Xie

For the sake of efficient recommendation, conventional methods generate user and advertisement embeddings independently with a siamese transformer encoder, so that approximate nearest neighbour search (ANN) can be leveraged.
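The two-tower setup described above can be sketched in a few lines: user and ad embeddings are produced independently by a shared encoder, so ad vectors can be pre-computed and searched at serving time. This toy uses a bag-of-characters "encoder" and brute-force cosine search standing in for a real transformer and a real ANN index; `encode` and `nearest_ad` are illustrative names, not from the paper:

```python
import math

# Toy two-tower (siamese) illustration: the same encoder embeds users and
# ads independently, so ad vectors are index-able ahead of time and lookup
# reduces to a nearest-neighbour search over pre-computed vectors.

def encode(text):
    """Stand-in encoder: a tiny L2-normalized bag-of-characters embedding."""
    vec = [0.0] * 26
    for ch in text.lower():
        if ch.isalpha():
            vec[ord(ch) - ord("a")] += 1.0
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

def nearest_ad(user_text, ad_texts):
    """Brute-force cosine search, standing in for an ANN index."""
    u = encode(user_text)
    ad_vecs = [(ad, encode(ad)) for ad in ad_texts]  # pre-computable offline
    return max(ad_vecs, key=lambda av: sum(a * b for a, b in zip(u, av[1])))[0]
```

The design trade-off the abstract alludes to: encoding the two sides independently enables this pre-computation and fast lookup, but sacrifices the user-ad interaction signals a joint (cross) encoder would capture, which is what the paper's hybrid encoding targets.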

Retrieval
