Search Results for author: Liyi Chen

Found 15 papers, 11 papers with code

Weakly Supervised Semantic Segmentation with Boundary Exploration

1 code implementation · ECCV 2020 · Liyi Chen, Weiwei Wu, Chenchen Fu, Xiao Han, Yuntao Zhang

Weakly supervised semantic segmentation with image-level labels has attracted a lot of attention recently because these labels are already available in most datasets.

Segmentation · Weakly Supervised Semantic Segmentation +1

SimCMF: A Simple Cross-modal Fine-tuning Strategy from Vision Foundation Models to Any Imaging Modality

1 code implementation · 27 Nov 2024 · Chenyang Lei, Liyi Chen, Jun Cen, Xiao Chen, Zhen Lei, Felix Heide, Qifeng Chen, Zhaoxiang Zhang

To this end, this work presents a simple and effective framework, SimCMF, to study an important problem: cross-modal fine-tuning from vision foundation models trained on natural RGB images to other imaging modalities of different physical properties (e.g., polarization).

cross-modal alignment
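One common starting point for this kind of cross-modal transfer is adapting the patch-embedding layer of an RGB-pretrained backbone to an input with a different channel count. The sketch below shows one simple channel-averaging variant in numpy; it is an illustrative assumption, not the paper's exact method, and all names are hypothetical.

```python
# Minimal sketch: adapt an RGB-pretrained patch-embedding kernel to a
# modality with a different number of input channels (e.g., 1-channel
# polarization-derived maps). Illustrative only, not SimCMF's method.
import numpy as np

def adapt_patch_embed(rgb_weight, new_channels):
    """rgb_weight: (out_dim, 3, k, k) conv kernel from an RGB-pretrained model.
    Returns an (out_dim, new_channels, k, k) kernel, rescaled so activation
    magnitudes on channel-replicated inputs roughly match the original."""
    mean_w = rgb_weight.mean(axis=1, keepdims=True)   # collapse the RGB channels
    new_w = np.repeat(mean_w, new_channels, axis=1)   # replicate to the new modality
    return new_w * (3.0 / new_channels)               # preserve activation scale

w_rgb = np.random.randn(8, 3, 4, 4)
w_pol = adapt_patch_embed(w_rgb, 1)   # hypothetical 1-channel target modality
print(w_pol.shape)  # (8, 1, 4, 4)
```

The rescaling keeps the summed kernel mass constant, which is one standard heuristic when "inflating" pretrained weights to a new channel count.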

TokenSelect: Efficient Long-Context Inference and Length Extrapolation for LLMs via Dynamic Token-Level KV Cache Selection

no code implementations · 5 Nov 2024 · Wei Wu, Zhuoshi Pan, Chao Wang, Liyi Chen, Yunchu Bai, Kun Fu, Zheng Wang, Hui Xiong

With the development of large language models (LLMs), the ability to handle longer contexts has become a key capability for Web applications such as cross-document understanding and LLM-powered search systems.

document understanding
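The paper's title describes dynamic token-level KV cache selection: at each decoding step, attend only over the cached tokens most relevant to the current query. The numpy sketch below shows the basic top-k selection idea; the scoring rule and function names are assumptions for illustration, not TokenSelect's actual algorithm.

```python
# Illustrative sketch of token-level KV cache selection: keep only the k
# cached (key, value) pairs whose keys score highest against the current
# query, shrinking per-step attention cost for long contexts.
# The dot-product scoring rule here is an assumption, not the paper's method.
import numpy as np

def select_kv(query, keys, values, k):
    """query: (d,); keys/values: (n, d). Returns the k highest-scoring
    cached pairs, kept in their original sequence order."""
    scores = keys @ query                     # relevance of each cached token
    top = np.sort(np.argsort(scores)[-k:])    # top-k indices, in sequence order
    return keys[top], values[top]

rng = np.random.default_rng(0)
keys = rng.standard_normal((1000, 64))
values = rng.standard_normal((1000, 64))
query = rng.standard_normal(64)
k_sel, v_sel = select_kv(query, keys, values, k=32)
print(k_sel.shape)  # (32, 64)
```

Attention is then computed over only the 32 selected pairs instead of all 1000 cached tokens.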

Plan-on-Graph: Self-Correcting Adaptive Planning of Large Language Model on Knowledge Graphs

2 code implementations · 31 Oct 2024 · Liyi Chen, Panrong Tong, Zhongming Jin, Ying Sun, Jieping Ye, Hui Xiong

To address these limitations, we propose a novel self-correcting adaptive planning paradigm for KG-augmented LLM named Plan-on-Graph (PoG), which first decomposes the question into several sub-objectives and then repeats the process of adaptively exploring reasoning paths, updating memory, and reflecting on the need to self-correct erroneous reasoning paths until arriving at the answer.

Knowledge Graphs · Language Modelling
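The abstract's loop of exploring paths, updating memory, and reflecting in order to self-correct can be sketched as a toy search over a dict-based KG. The "LLM" decisions are replaced by simple heuristics here; everything below is an illustrative assumption in the spirit of Plan-on-Graph, not the authors' implementation.

```python
# Toy sketch of a self-correcting adaptive exploration loop over a KG,
# in the spirit of Plan-on-Graph (PoG). LLM calls are replaced by simple
# heuristics; this is illustrative, not the paper's code.

def plan_on_graph(question_entity, target_relation, kg, max_steps=5):
    """Explore reasoning paths from question_entity until one traverses
    target_relation. 'memory' records judged (entity, relation) pairs so
    erroneous paths are pruned (self-correction) rather than revisited."""
    memory = set()                      # explored (entity, relation) pairs
    frontier = [[question_entity]]      # candidate reasoning paths
    for _ in range(max_steps):
        next_frontier = []
        for path in frontier:
            head = path[-1]
            for relation, tail in kg.get(head, []):
                if (head, relation) in memory:   # reflect: already explored
                    continue
                memory.add((head, relation))
                if relation == target_relation:  # sub-objective satisfied
                    return path + [tail]
                next_frontier.append(path + [tail])
        frontier = next_frontier or frontier     # fall back if at a dead end
    return None                                  # no path found within budget

kg = {
    "Paris": [("capital_of", "France"), ("located_in", "Europe")],
    "France": [("currency", "Euro")],
}
print(plan_on_graph("Paris", "currency", kg))  # ['Paris', 'France', 'Euro']
```

The real paradigm additionally decomposes the question into sub-objectives with the LLM and lets it decide adaptively which paths to extend or abandon.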

Structure-Enhanced Protein Instruction Tuning: Towards General-Purpose Protein Understanding

no code implementations · 4 Oct 2024 · Wei Wu, Chao Wang, Liyi Chen, Mingze Yin, Yiheng Zhu, Kun Fu, Jieping Ye, Hui Xiong, Zheng Wang

Recent development of protein language models (pLMs) with supervised fine-tuning provides a promising solution to this problem.

SimMAT: Exploring Transferability from Vision Foundation Models to Any Image Modality

1 code implementation · 12 Sep 2024 · Chenyang Lei, Liyi Chen, Jun Cen, Xiao Chen, Zhen Lei, Felix Heide, Ziwei Liu, Qifeng Chen, Zhaoxiang Zhang

To this end, this work presents a simple and effective framework, SimMAT, to study an open problem: the transferability from vision foundation models trained on natural RGB images to other image modalities of different physical properties (e.g., polarization).

Transfer Learning

AFDGCF: Adaptive Feature De-correlation Graph Collaborative Filtering for Recommendations

1 code implementation · 26 Mar 2024 · Wei Wu, Chao Wang, Dazhong Shen, Chuan Qin, Liyi Chen, Hui Xiong

Collaborative filtering methods based on graph neural networks (GNNs) have witnessed significant success in recommender systems (RS), capitalizing on their ability to capture collaborative signals within intricate user-item relationships via message-passing mechanisms.

Collaborative Filtering · Recommendation Systems
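The message-passing mechanism the abstract refers to can be sketched as LightGCN-style propagation on the user-item bipartite graph: each layer aggregates degree-normalized neighbor embeddings, and the final embedding averages all layers. The de-correlation penalty that is AFDGCF's contribution is omitted; the code below is an illustrative assumption, not the paper's implementation.

```python
# LightGCN-style message passing on a user-item graph, the backbone that
# graph collaborative filtering methods such as AFDGCF build on.
# Illustrative sketch only; AFDGCF's feature de-correlation term is omitted.
import numpy as np

def propagate(user_emb, item_emb, interactions, layers=2):
    """interactions: binary (num_users, num_items) matrix.
    Each layer aggregates neighbors with symmetric degree normalization."""
    d_u = np.maximum(interactions.sum(1, keepdims=True), 1)  # user degrees
    d_i = np.maximum(interactions.sum(0, keepdims=True), 1)  # item degrees
    norm = interactions / np.sqrt(d_u) / np.sqrt(d_i)        # normalized adjacency
    u_layers, i_layers = [user_emb], [item_emb]
    for _ in range(layers):
        u_layers.append(norm @ i_layers[-1])      # users gather item messages
        i_layers.append(norm.T @ u_layers[-2])    # items gather user messages
    # final embeddings: mean over all propagation layers, as in LightGCN
    return np.mean(u_layers, axis=0), np.mean(i_layers, axis=0)

R = np.array([[1, 0, 1], [0, 1, 1]], dtype=float)   # 2 users x 3 items
u, i = propagate(np.eye(2), np.ones((3, 2)), R)
print(u.shape, i.shape)  # (2, 2) (3, 2)
```

Scores for recommendation are then inner products `u @ i.T` between the propagated user and item embeddings.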

McQueen: a Benchmark for Multimodal Conversational Query Rewrite

1 code implementation · 23 Oct 2022 · Yifei Yuan, Chen Shi, Runze Wang, Liyi Chen, Feijun Jiang, Yuan You, Wai Lam

In this paper, we propose the task of multimodal conversational query rewrite (McQR), which performs query rewrite under the multimodal visual conversation setting.

Multi-modal Siamese Network for Entity Alignment

1 code implementation · KDD 2022 · Liyi Chen, Zhi Li, Tong Xu, Han Wu, Zhefeng Wang, Nicholas Jing Yuan, Enhong Chen

To deal with that problem, in this paper, we propose a novel Multi-modal Siamese Network for Entity Alignment (MSNEA) to align entities across different MMKGs, in which multi-modal knowledge can be comprehensively leveraged by exploiting inter-modal effects.

Ranked #7 on Multi-modal Entity Alignment on UMVM-oea-d-w-v1 (using extra training data)

Attribute · Contrastive Learning +3
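At its core, a Siamese approach to entity alignment embeds each entity's modalities with shared weights, fuses them, and scores candidate pairs across KGs by similarity. The sketch below uses averaging fusion and cosine scoring as illustrative assumptions; it is not MSNEA's architecture.

```python
# Toy sketch of Siamese-style multi-modal entity alignment: shared
# projection weights for both KGs, averaged fusion across modalities,
# cosine similarity as the alignment score. Illustrative only.
import numpy as np

def fuse(modal_feats, W):
    """Project each modality's feature with the shared weight W, then average."""
    return np.mean([f @ W for f in modal_feats], axis=0)

def align_score(entity_a, entity_b, W):
    """Cosine similarity between two entities' fused embeddings."""
    a, b = fuse(entity_a, W), fuse(entity_b, W)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(1)
W = rng.standard_normal((16, 8))                          # shared ("Siamese") weights
e1 = [rng.standard_normal(16), rng.standard_normal(16)]   # e.g., structure + image features
print(round(align_score(e1, e1, W), 3))  # 1.0 for identical entities
```

Training would pull aligned cross-KG pairs together and push non-aligned pairs apart, e.g., with a contrastive loss.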

MMEA: Entity Alignment for Multi-Modal Knowledge Graphs

1 code implementation · 20 Aug 2020 · Liyi Chen, Zhi Li, Yijun Wang, Tong Xu, Zhefeng Wang, Enhong Chen

To that end, in this paper, we propose a novel solution called Multi-Modal Entity Alignment (MMEA) to address the problem of entity alignment in a multi-modal view.

Knowledge Graphs · Multimodal Deep Learning +1
