Search Results for author: Peiyan Zhang

Found 13 papers, 5 papers with code

High-Frequency-aware Hierarchical Contrastive Selective Coding for Representation Learning on Text-attributed Graphs

no code implementations • 26 Feb 2024 • Peiyan Zhang, Chaozhuo Li, Liying Kang, Feiran Huang, Senzhang Wang, Xing Xie, Sunghun Kim

Moreover, we show that the existing contrastive objective learns the low-frequency component of the augmentation graph, and we propose a high-frequency component (HFC)-aware contrastive learning objective that makes the learned embeddings more distinctive.
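The snippet above can be illustrated with a small sketch. This is not the paper's actual objective; it only shows the general idea under one plausible reading: the normalized graph Laplacian acts as a high-pass filter, so adding an InfoNCE term on the Laplacian-filtered embeddings penalizes losing the high-frequency component. All function names and the weighting `lam` are assumptions for illustration.

```python
import numpy as np

def laplacian(adj):
    """Symmetric normalized Laplacian L = I - D^{-1/2} A D^{-1/2},
    which acts as a high-pass filter on graph signals."""
    deg = adj.sum(axis=1)
    d_inv_sqrt = np.where(deg > 0, deg ** -0.5, 0.0)
    return np.eye(len(adj)) - d_inv_sqrt[:, None] * adj * d_inv_sqrt[None, :]

def info_nce(z1, z2, tau=0.5):
    """Standard InfoNCE between two views of node embeddings (rows)."""
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    sim = z1 @ z2.T / tau
    sim = sim - sim.max(axis=1, keepdims=True)  # numerical stability
    log_prob = np.diag(sim) - np.log(np.exp(sim).sum(axis=1))
    return -log_prob.mean()

def hfc_aware_loss(z1, z2, adj, lam=0.5):
    """Illustrative HFC-aware objective: the usual contrastive term plus a
    term on the high-pass-filtered embeddings L @ Z, so the high-frequency
    component of the representation also stays discriminative."""
    L = laplacian(adj)
    return info_nce(z1, z2) + lam * info_nce(L @ z1, L @ z2)
```

The second term is the only part specific to the sketch; dropping it (`lam=0`) recovers a plain contrastive objective that, per the abstract, emphasizes the low-frequency component.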

Contrastive Learning • Representation Learning

Inductive Graph Alignment Prompt: Bridging the Gap between Graph Pre-training and Inductive Fine-tuning From Spectral Perspective

no code implementations • 21 Feb 2024 • Yuchen Yan, Peiyan Zhang, Zheng Fang, Qingqing Long

Based on the insight of graph pre-training, we propose to bridge the graph signal gap and the graph structure gap with learnable prompts in the spectral space.

General Knowledge • Graph Classification

Beyond Pixels: Exploring Human-Readable SVG Generation for Simple Images with Vision Language Models

no code implementations • 27 Nov 2023 • Tong Zhang, Haoyang Liu, Peiyan Zhang, Yuxuan Cheng, Haohan Wang

Our method focuses on producing SVGs that are both accurate and simple, aligning with human readability and understanding.

Vector Graphics

Exploring Recommendation Capabilities of GPT-4V(ision): A Preliminary Case Study

no code implementations • 7 Nov 2023 • Peilin Zhou, Meng Cao, You-Liang Huang, Qichen Ye, Peiyan Zhang, Junling Liu, Yueqi Xie, Yining Hua, Jaeboum Kim

Large Multimodal Models (LMMs) have demonstrated impressive performance across various vision and language tasks, yet their potential applications in recommendation tasks with visual assistance remain unexplored.

General Knowledge • Reading Comprehension

Foundation Model-oriented Robustness: Robust Image Model Evaluation with Pretrained Models

no code implementations • 21 Aug 2023 • Peiyan Zhang, Haoyang Liu, Chaozhuo Li, Xing Xie, Sunghun Kim, Haohan Wang

Machine learning has demonstrated remarkable performance over finite datasets, yet whether the scores over the fixed benchmarks can sufficiently indicate the model's performance in the real world is still in discussion.

Image Classification

Continual Learning on Dynamic Graphs via Parameter Isolation

1 code implementation • 23 May 2023 • Peiyan Zhang, Yuchen Yan, Chaozhuo Li, Senzhang Wang, Xing Xie, Guojie Song, Sunghun Kim

Dynamic graph learning methods commonly suffer from the catastrophic forgetting problem, where knowledge learned for previous graphs is overwritten by updates for new graphs.
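The mechanism named in the title can be sketched in miniature. This is not the paper's architecture, only an illustration of the parameter-isolation idea under an assumed setup: blocks trained on earlier graph snapshots are frozen, and each new snapshot gets a fresh trainable block, so updates for new graphs cannot overwrite old knowledge. The class and method names are hypothetical.

```python
import numpy as np

class IsolatedParams:
    """Toy parameter isolation for continual learning on dynamic graphs:
    past snapshots' parameter blocks become read-only, and only the block
    allocated for the current snapshot would receive gradient updates."""

    def __init__(self, dim, rng):
        self.dim, self.rng = dim, rng
        self.frozen = []                          # blocks from past snapshots
        self.active = rng.standard_normal(dim)    # trainable block

    def start_snapshot(self):
        """Freeze the current block and allocate a new one for the next graph."""
        self.frozen.append(self.active)
        self.active = self.rng.standard_normal(self.dim)

    def score(self, x):
        """Combine every block at inference; only `active` is ever trained."""
        blocks = self.frozen + [self.active]
        return float(np.mean([x @ w for w in blocks]))
```

Because `frozen` is never written after a snapshot ends, knowledge for previous graphs cannot be overwritten, which is precisely the failure mode the abstract describes.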

Continual Learning • Graph Learning

A Survey on Incremental Update for Neural Recommender Systems

no code implementations • 6 Mar 2023 • Peiyan Zhang, Sunghun Kim

In this article, we offer a systematic survey of incremental update for neural recommender systems.

Recommendation Systems

Efficiently Leveraging Multi-level User Intent for Session-based Recommendation via Atten-Mixer Network

1 code implementation • 26 Jun 2022 • Peiyan Zhang, Jiayan Guo, Chaozhuo Li, Yueqi Xie, Jaeboum Kim, Yan Zhang, Xing Xie, Haohan Wang, Sunghun Kim

Based on this observation, we intuitively propose to remove the GNN propagation part, while the readout module will take on more responsibility in the model reasoning process.
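A minimal sketch of a propagation-free, readout-only session encoder in the spirit of the snippet above (this is not the Atten-Mixer implementation; the level set and pooling choices are assumptions): queries built from the last k items at several granularities attend directly over the raw item embeddings, with no GNN message passing.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def multi_level_readout(session, levels=(1, 2, 3)):
    """Attention-based readout without any GNN propagation (illustrative).
    `session` is an (items, dim) array of item embeddings in order.
    For each level k, the mean of the last k items is a multi-level intent
    query that attends over the whole session; results are averaged."""
    outs = []
    for k in levels:
        q = session[-k:].mean(axis=0)    # intent at granularity k
        attn = softmax(session @ q)      # attention over all session items
        outs.append(attn @ session)      # attended session summary
    return np.mean(outs, axis=0)         # final session representation
```

All model capacity sits in the readout, mirroring the abstract's point that the readout module takes on more of the reasoning once propagation is removed.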

Session-Based Recommendations

Improving Sequential Recommendations via Bidirectional Temporal Data Augmentation with Pre-training

1 code implementation • 13 Dec 2021 • Juyong Jiang, Peiyan Zhang, Yingtao Luo, Chaozhuo Li, Jaeboum Kim, Kai Zhang, Senzhang Wang, Sunghun Kim

Our approach leverages bidirectional temporal augmentation and knowledge-enhanced fine-tuning to synthesize authentic pseudo-prior items that retain user preferences and capture deeper item semantic correlations, thus boosting the model's expressive power.
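The pseudo-prior idea can be sketched as follows. This is a hedged illustration, not the paper's pipeline: `reverse_predict` stands in for a trained reverse-direction model and is just an arbitrary callable here, and the function name is hypothetical.

```python
def prepend_pseudo_priors(seq, reverse_predict, k=2):
    """Sketch of bidirectional temporal augmentation: a reverse-direction
    predictor synthesizes items that plausibly preceded the observed
    history, and they are prepended to extend short sequences backwards."""
    out = list(seq)
    for _ in range(k):
        out.insert(0, reverse_predict(out))  # predict the item before the first
    return out

# Toy usage with a stand-in predictor that extrapolates item IDs backwards:
# prepend_pseudo_priors([3, 4, 5], lambda s: s[0] - 1)  ->  [1, 2, 3, 4, 5]
```

A real system would replace the lambda with a sequence model trained right-to-left; the augmentation itself is just this backwards extension of the interaction history.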

Data Augmentation • Self-Knowledge Distillation • +1

Word Shape Matters: Robust Machine Translation with Visual Embedding

no code implementations • 20 Oct 2020 • Haohan Wang, Peiyan Zhang, Eric P. Xing

Neural machine translation has achieved remarkable empirical performance over standard benchmark datasets, yet recent evidence suggests that the models can still fail easily when dealing with substandard inputs such as misspelled words. To overcome this issue, we introduce a new encoding heuristic for the input symbols of character-level NLP models: it encodes the shape of each character through images depicting the letters when printed.
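A toy version of the shape-based encoding makes the intuition concrete. The paper renders real printed glyphs; the tiny hand-drawn bitmaps below are purely illustrative stand-ins, and every name here is an assumption. The point is only that when the embedding is the picture of the symbol, look-alike characters get similar vectors.

```python
import numpy as np

# Toy 3x5 bitmaps standing in for rendered glyph images (illustrative only).
GLYPHS = {
    "l": [".#.", ".#.", ".#.", ".#.", ".#."],
    "1": [".#.", "##.", ".#.", ".#.", "###"],
    "o": ["###", "#.#", "#.#", "#.#", "###"],
}

def visual_embedding(ch):
    """Flatten the character's bitmap into a 0/1 vector, so the embedding
    encodes what the printed symbol looks like rather than its identity."""
    return np.array([pix == "#" for row in GLYPHS[ch] for pix in row], float)

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
```

With these bitmaps the confusable pair "l"/"1" comes out closer than "l"/"o", which is the robustness property the heuristic targets for misspelled or visually substituted characters.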

Machine Translation • Translation
