Search Results for author: Ge Yu

Found 23 papers, 11 papers with code

Y Chromosome of Aisin Gioro, the Imperial House of Qing Dynasty

3 code implementations · 19 Dec 2014 · Shi Yan, Harumasa Tachibana, Lan-Hai Wei, Ge Yu, Shao-Qing Wen, Chuan-Chao Wang

We therefore conclude that this haplotype is the Y chromosome of the House of Aisin Gioro.

Populations and Evolution

Cleaner Pretraining Corpus Curation with Neural Web Scraping

1 code implementation · 22 Feb 2024 · Zhipeng Xu, Zhenghao Liu, Yukun Yan, Zhiyuan Liu, Chenyan Xiong, Ge Yu

The web contains large-scale, diverse, and abundant information to satisfy the information-seeking needs of humans.

Language Modelling

Structure-Aware Language Model Pretraining Improves Dense Retrieval on Structured Data

1 code implementation · 31 May 2023 · Xinze Li, Zhenghao Liu, Chenyan Xiong, Shi Yu, Yu Gu, Zhiyuan Liu, Ge Yu

SANTA proposes two pretraining methods to make language models structure-aware and learn effective representations for structured data: 1) Structured Data Alignment, which utilizes the natural alignment relations between structured data and unstructured data for structure-aware pretraining; 2) Masked Entity Prediction, which designs an entity-oriented masking strategy and asks language models to fill in the masked entities.
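
As a rough illustration of the Structured Data Alignment idea, here is a minimal in-batch contrastive loss in plain Python. The `info_nce_loss` helper and the toy `texts`/`records` vectors are illustrative stand-ins under assumed encoder outputs, not the paper's implementation:

```python
import math
import random

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def info_nce_loss(query_vecs, doc_vecs, temperature=0.05):
    """In-batch contrastive loss: query_vecs[i] should match doc_vecs[i];
    every other document in the batch acts as a negative."""
    loss = 0.0
    for i, q in enumerate(query_vecs):
        scores = [dot(q, d) / temperature for d in doc_vecs]
        log_denom = math.log(sum(math.exp(s) for s in scores))
        loss += -(scores[i] - log_denom)
    return loss / len(query_vecs)

# Toy batch: each unstructured text (query) is aligned with one
# structured record; the vectors stand in for encoder outputs.
random.seed(0)
texts = [[random.gauss(0, 1) for _ in range(8)] for _ in range(4)]
records = [[t_i + random.gauss(0, 0.1) for t_i in t] for t in texts]
print(f"alignment loss: {info_nce_loss(texts, records):.4f}")
```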

Code Search · Language Modelling +1

ActiveRAG: Revealing the Treasures of Knowledge via Active Learning

1 code implementation · 21 Feb 2024 · Zhipeng Xu, Zhenghao Liu, Yibin Liu, Chenyan Xiong, Yukun Yan, Shuo Wang, Shi Yu, Zhiyuan Liu, Ge Yu

Retrieval Augmented Generation (RAG) has introduced a new paradigm for Large Language Models (LLMs), aiding in the resolution of knowledge-intensive tasks.

Active Learning · Position +2

Universal Vision-Language Dense Retrieval: Learning A Unified Representation Space for Multi-Modal Retrieval

1 code implementation · 1 Sep 2022 · Zhenghao Liu, Chenyan Xiong, Yuanhuiyi Lv, Zhiyuan Liu, Ge Yu

To learn a unified embedding space for multi-modal retrieval, UniVL-DR proposes two techniques: 1) Universal embedding optimization strategy, which contrastively optimizes the embedding space using the modality-balanced hard negatives; 2) Image verbalization method, which bridges the modality gap between images and texts in the raw data space.
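
The modality-balanced hard-negative idea can be sketched as sampling an equal number of negatives per modality, assuming candidates arrive tagged with their modality; `modality_balanced_negatives` and the toy candidate pool are hypothetical names for illustration:

```python
import random

def modality_balanced_negatives(candidates, k_per_modality=2, seed=0):
    """Sample an equal number of hard negatives from each modality so
    neither text nor image documents dominate the contrastive updates."""
    rng = random.Random(seed)
    by_modality = {}
    for doc_id, modality in candidates:
        by_modality.setdefault(modality, []).append(doc_id)
    negatives = []
    for modality, docs in sorted(by_modality.items()):
        negatives.extend(rng.sample(docs, min(k_per_modality, len(docs))))
    return negatives

# Toy candidate pool: (doc_id, modality) pairs ranked as hard negatives.
pool = [("t1", "text"), ("t2", "text"), ("t3", "text"),
        ("i1", "image"), ("i2", "image")]
print(modality_balanced_negatives(pool))
```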

Image Retrieval · Open-Domain Question Answering +2

Text Matching Improves Sequential Recommendation by Reducing Popularity Biases

1 code implementation · 27 Aug 2023 · Zhenghao Liu, Sen Mei, Chenyan Xiong, Xiaohua LI, Shi Yu, Zhiyuan Liu, Yu Gu, Ge Yu

TASTE alleviates the cold start problem by representing long-tail items using full-text modeling and bringing the benefits of pretrained language models to recommendation systems.
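
A minimal sketch of full-text item modeling, assuming items come as attribute dictionaries; `verbalize_item` and `verbalize_history` are illustrative helpers, not TASTE's actual code:

```python
def verbalize_item(item):
    """Render an item's full-text attributes into a single string, so a
    pretrained language model can encode it instead of a bare item ID."""
    fields = [f"{k}: {v}" for k, v in item.items() if v]
    return ", ".join(fields)

def verbalize_history(items, max_items=10):
    """Concatenate the user's most recent items into one text sequence,
    newest first."""
    recent = items[-max_items:]
    return " ; ".join(verbalize_item(it) for it in reversed(recent))

history = [
    {"title": "The Hobbit", "category": "Fantasy"},
    {"title": "Dune", "category": "Science Fiction"},
]
print(verbalize_history(history))
# -> "title: Dune, category: Science Fiction ; title: The Hobbit, ..."
```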

Sequential Recommendation · Text Matching

P^3 Ranker: Mitigating the Gaps between Pre-training and Ranking Fine-tuning with Prompt-based Learning and Pre-finetuning

1 code implementation · 4 May 2022 · Xiaomeng Hu, Shi Yu, Chenyan Xiong, Zhenghao Liu, Zhiyuan Liu, Ge Yu

In this paper, we identify and study the two mismatches between pre-training and ranking fine-tuning: the training schema gap regarding the differences in training objectives and model architectures, and the task knowledge gap considering the discrepancy between the knowledge needed in ranking and that learned during pre-training.
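
One way to picture closing the training-schema gap is to cast ranking as a cloze-style prompt, so fine-tuning resembles masked-language-model pretraining. The template below and the `<mask>`-scoring convention are assumptions for illustration, not the paper's exact prompt:

```python
def to_ranking_prompt(query, passage):
    """Cast a ranking example as a cloze-style prompt; the model's
    probability of filling the blank with "yes" would serve as the
    relevance score."""
    return (f"Query: {query} Passage: {passage} "
            f"Does the passage answer the query? Answer: <mask>")

print(to_ranking_prompt("what causes tides", "Tides are caused by ..."))
```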

INTERVENOR: Prompting the Coding Ability of Large Language Models with the Interactive Chain of Repair

1 code implementation · 16 Nov 2023 · Hanbin Wang, Zhenghao Liu, Shuo Wang, Ganqu Cui, Ning Ding, Zhiyuan Liu, Ge Yu

INTERVENOR prompts Large Language Models (LLMs) to play distinct roles during the code repair process, functioning as both a Code Learner and a Code Teacher.
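
A minimal sketch of such an interactive chain of repair, with both roles stubbed out as plain functions instead of LLM calls; `run_tests`, `chain_of_repair`, and the toy bug are all hypothetical:

```python
def run_tests(code):
    """Execute the candidate code against a tiny test; an empty string
    means all tests pass."""
    env = {}
    try:
        exec(code, env)
        assert env["add"](2, 3) == 5
        return ""
    except Exception as exc:
        return repr(exc)

def chain_of_repair(learner, teacher, code, max_rounds=3):
    """Alternate a Code Learner (proposes fixes) and a Code Teacher
    (turns test errors into repair instructions) until the tests pass."""
    for _ in range(max_rounds):
        report = run_tests(code)
        if not report:
            return code
        instruction = teacher(code, report)
        code = learner(code, instruction)
    return code

# Stub roles standing in for the two LLM prompts.
teacher = lambda code, report: f"Tests failed with {report}; fix the operator."
learner = lambda code, instruction: code.replace("a - b", "a + b")
fixed = chain_of_repair(learner, teacher, "def add(a, b):\n    return a - b")
print(fixed)
```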

Code Repair · Code Translation

MARVEL: Unlocking the Multi-Modal Capability of Dense Retrieval via Visual Module Plugin

1 code implementation · 21 Oct 2023 · Tianshuo Zhou, Sen Mei, Xinze Li, Zhenghao Liu, Chenyan Xiong, Zhiyuan Liu, Yu Gu, Ge Yu

To facilitate multi-modal retrieval tasks, we build the ClueWeb22-MM dataset based on the ClueWeb22 dataset, which regards anchor texts as queries and extracts the related text and image documents from anchor-linked web pages.
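
Conceptually, the construction pairs each anchor text with documents from the page it links to; the `build_pairs` helper and its input schema below are assumptions for illustration:

```python
def build_pairs(pages):
    """Turn each anchor into a query and pair it with the text and image
    documents extracted from the page the anchor links to."""
    pairs = []
    for page in pages:
        for anchor in page["anchors"]:
            target = anchor["target_page"]
            for doc in target.get("texts", []):
                pairs.append({"query": anchor["text"], "doc": doc,
                              "modality": "text"})
            for img in target.get("images", []):
                pairs.append({"query": anchor["text"], "doc": img,
                              "modality": "image"})
    return pairs

pages = [{"anchors": [{"text": "golden gate bridge",
                       "target_page": {"texts": ["The Golden Gate Bridge ..."],
                                       "images": ["bridge.jpg"]}}]}]
for pair in build_pairs(pages):
    print(pair)
```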

Language Modelling · Retrieval +1

Say More with Less: Understanding Prompt Learning Behaviors through Gist Compression

1 code implementation · 25 Feb 2024 · Xinze Li, Zhenghao Liu, Chenyan Xiong, Shi Yu, Yukun Yan, Shuo Wang, Ge Yu

The method finetunes a compression plugin module and uses the representations of gist tokens to emulate the raw prompts in the vanilla language model.
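
A rough sketch of the compression idea, assuming gist representations are obtained by pooling chunks of prompt-token embeddings; the real plugin is learned rather than mean-pooled, and `compress_to_gists` is a hypothetical helper:

```python
def compress_to_gists(token_embeddings, n_gists=2):
    """Pool consecutive chunks of prompt-token embeddings into a handful
    of gist vectors that stand in for the raw prompt downstream."""
    chunk = max(1, len(token_embeddings) // n_gists)
    gists = []
    for start in range(0, len(token_embeddings), chunk):
        window = token_embeddings[start:start + chunk]
        dim = len(window[0])
        gists.append([sum(v[i] for v in window) / len(window)
                      for i in range(dim)])
    return gists[:n_gists]

# Eight 4-d "token embeddings" compressed into two gist vectors.
prompt = [[float(i + j) for j in range(4)] for i in range(8)]
print(compress_to_gists(prompt, n_gists=2))
```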

Language Modelling

When coding meets ranking: A joint framework based on local learning

no code implementations · 8 Sep 2014 · Jim Jing-Yan Wang, Xuefeng Cui, Ge Yu, Lili Guo, Xin Gao

In this paper, we try to answer these questions by developing the first joint sparse coding and ranking score learning algorithm.

Retrieval

Collecting and Analyzing Multidimensional Data with Local Differential Privacy

no code implementations · 28 Jun 2019 · Ning Wang, Xiaokui Xiao, Yin Yang, Jun Zhao, Siu Cheung Hui, Hyejin Shin, Junbum Shin, Ge Yu

Motivated by this, we first propose novel LDP mechanisms for collecting a numeric attribute, whose accuracy is no worse (and usually better) than that of existing solutions in terms of worst-case noise variance.
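
For context, a classic baseline such mechanisms are measured against is Duchi et al.'s randomized response for a numeric value in [-1, 1]; the sketch below implements that baseline, not the paper's improved mechanism:

```python
import math
import random

def duchi_ldp(value, epsilon, rng=random):
    """Duchi et al.'s LDP mechanism for one numeric value in [-1, 1]:
    report one of two extreme points, with probabilities chosen so the
    expectation equals `value` while each single report satisfies
    epsilon-local differential privacy."""
    c = (math.exp(epsilon) + 1) / (math.exp(epsilon) - 1)
    p = value * (math.exp(epsilon) - 1) / (2 * (math.exp(epsilon) + 1)) + 0.5
    return c if rng.random() < p else -c

rng = random.Random(0)
reports = [duchi_ldp(0.4, epsilon=1.0, rng=rng) for _ in range(100_000)]
print(f"true mean 0.4, estimated mean {sum(reports) / len(reports):.3f}")
```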

Attribute

Learning Rate Perturbation: A Generic Plugin of Learning Rate Schedule towards Flatter Local Minima

no code implementations · 25 Aug 2022 · Hengyu Liu, Qiang Fu, Lun Du, Tiancheng Zhang, Ge Yu, Shi Han, Dongmei Zhang

The learning rate is one of the most important hyper-parameters and has a significant influence on neural network training.
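
A minimal sketch of the plugin idea, assuming a multiplicative random perturbation layered on a conventional cosine schedule; the perturbation form and scale are illustrative assumptions, not the paper's method:

```python
import math
import random

def cosine_schedule(step, total_steps, base_lr=0.1):
    """A conventional base schedule (cosine decay)."""
    return 0.5 * base_lr * (1 + math.cos(math.pi * step / total_steps))

def perturbed_lr(step, total_steps, scale=0.05, rng=random):
    """Generic plugin: multiply the scheduled rate by a small random
    perturbation, nudging updates toward exploring flatter minima."""
    noise = 1.0 + rng.uniform(-scale, scale)
    return cosine_schedule(step, total_steps) * noise

rng = random.Random(0)
for step in (0, 500, 999):
    print(step, round(perturbed_lr(step, 1000, rng=rng), 5))
```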

A Probabilistic Generative Model for Tracking Multi-Knowledge Concept Mastery Probability

no code implementations · 17 Feb 2023 · Hengyu Liu, Tiancheng Zhang, Fan Li, Minghe Yu, Ge Yu

To better model students' exercise responses, we propose a log-linear model with three interactive strategies, which captures the relationships among students' knowledge status, knowledge concepts, and problems.
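
One hedged reading of such a log-linear response model: the log-odds of a correct answer combine weighted mastery of the problem's knowledge concepts with the problem's difficulty. The field names and weights below are invented for illustration:

```python
import math

def response_probability(mastery, concept_weights, difficulty, bias=0.0):
    """Log-linear response model: the log-odds of answering correctly is
    a weighted sum of mastery over the problem's knowledge concepts,
    minus the problem's difficulty."""
    logit = bias - difficulty
    for concept, weight in concept_weights.items():
        logit += weight * mastery.get(concept, 0.0)
    return 1 / (1 + math.exp(-logit))

mastery = {"fractions": 0.9, "decimals": 0.3}
p = response_probability(mastery, {"fractions": 2.0, "decimals": 1.0},
                         difficulty=1.0)
print(f"P(correct) = {p:.3f}")
```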

Knowledge Tracing

CHGNN: A Semi-Supervised Contrastive Hypergraph Learning Network

no code implementations · 10 Mar 2023 · Yumeng Song, Yu Gu, Tianyi Li, Jianzhong Qi, Zhenghao Liu, Christian S. Jensen, Ge Yu

However, recent studies on hypergraph learning that extend graph convolutional networks to hypergraphs cannot learn effectively from features of unlabeled data.

Contrastive Learning · Node Classification

Modeling User Viewing Flow Using Large Language Models for Article Recommendation

no code implementations · 12 Nov 2023 · Zhenghao Liu, Zulong Chen, Moufeng Zhang, Shaoyang Duan, Hong Wen, Liangyue Li, Nan Li, Yu Gu, Ge Yu

This paper proposes the User Viewing Flow Modeling (SINGLE) method for the article recommendation task, which models users' constant preferences and instant interests from user-clicked articles.

Comprehensive Evaluation of GNN Training Systems: A Data Management Perspective

no code implementations · 22 Nov 2023 · Hao Yuan, Yajiong Liu, Yanfeng Zhang, Xin Ai, Qiange Wang, Chaoyi Chen, Yu Gu, Ge Yu

Many Graph Neural Network (GNN) training systems have emerged recently to support efficient GNN training.

Management

NeutronOrch: Rethinking Sample-based GNN Training under CPU-GPU Heterogeneous Environments

no code implementations · 22 Nov 2023 · Xin Ai, Qiange Wang, Chunyu Cao, Yanfeng Zhang, Chaoyi Chen, Hao Yuan, Yu Gu, Ge Yu

After extensive experiments and analysis, we find that existing task orchestrating methods fail to fully utilize the heterogeneous resources, limited by inefficient CPU processing or GPU resource contention.

NeutronStream: A Dynamic GNN Training Framework with Sliding Window for Graph Streams

no code implementations · 5 Dec 2023 · Chaoyi Chen, Dechao Gao, Yanfeng Zhang, Qiange Wang, Zhenbo Fu, Xuecang Zhang, Junhua Zhu, Yu Gu, Ge Yu

Though many dynamic GNN models have emerged to learn from evolving graphs, the training process of these dynamic GNNs differs dramatically from that of traditional GNNs, in that it captures both the spatial and temporal dependencies of graph updates.
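
The sliding-window idea can be sketched as batching a time-ordered event stream into overlapping windows; `sliding_windows` and the toy stream are illustrative, not NeutronStream's API:

```python
def sliding_windows(events, window_size, stride):
    """Group a time-ordered stream of graph updates into overlapping
    sliding windows; each window becomes one incremental training batch
    that preserves the temporal order of events."""
    for start in range(0, max(1, len(events) - window_size + 1), stride):
        yield events[start:start + window_size]

# Toy stream of (timestamp, src, dst) edge insertions.
stream = [(t, t % 3, (t + 1) % 3) for t in range(8)]
for batch in sliding_windows(stream, window_size=4, stride=2):
    print(batch)
```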

LR-CNN: Lightweight Row-centric Convolutional Neural Network Training for Memory Reduction

no code implementations · 21 Jan 2024 · Zhigang Wang, Hangyu Yang, Ning Wang, Chuanfei Xu, Jie Nie, Zhiqiang Wei, Yu Gu, Ge Yu

However, training its complex network is very space-consuming, since a large amount of intermediate data is preserved across layers, especially when processing high-dimensional inputs with a large batch size.

LegalDuet: Learning Effective Representations for Legal Judgment Prediction through a Dual-View Legal Clue Reasoning

no code implementations · 27 Jan 2024 · Pengjie Liu, Zhenghao Liu, Xiaoyuan Yi, Liner Yang, Shuo Wang, Yu Gu, Ge Yu, Xing Xie, Shuang-Hua Yang

It proposes a dual-view legal clue reasoning mechanism, which derives from two reasoning chains of judges: 1) Law Case Reasoning, which makes legal judgments according to the judgment experiences learned from analogous/confusing legal cases; 2) Legal Ground Reasoning, which matches the legal clues between criminal cases and legal decisions.
