Search Results for author: Hongye Jin

Found 9 papers, 6 papers with code

Harnessing the Power of LLMs in Practice: A Survey on ChatGPT and Beyond

1 code implementation • 26 Apr 2023 • Jingfeng Yang, Hongye Jin, Ruixiang Tang, Xiaotian Han, Qizhang Feng, Haoming Jiang, Bing Yin, Xia Hu

This paper presents a comprehensive and practical guide for practitioners and end-users working with Large Language Models (LLMs) in their downstream natural language processing (NLP) tasks.

Language Modelling Natural Language Understanding +1

LLM Maybe LongLM: Self-Extend LLM Context Window Without Tuning

2 code implementations • 2 Jan 2024 • Hongye Jin, Xiaotian Han, Jingfeng Yang, Zhimeng Jiang, Zirui Liu, Chia-Yuan Chang, Huiyuan Chen, Xia Hu

To achieve this goal, we propose SelfExtend to extend the context window of LLMs by constructing bi-level attention information: the grouped attention and the neighbor attention.
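The bi-level idea can be illustrated with a small position-remapping sketch. This is not the authors' implementation; the function name and the specific shift used to align the two attention levels are illustrative assumptions based on the snippet's description (exact relative positions inside a neighbor window, floor-divided "grouped" positions outside it):

```python
def self_extend_rel_pos(q_pos, k_pos, group_size, neighbor_window):
    """Sketch of a SelfExtend-style relative position for a (query, key) pair.

    Close-by tokens use the exact relative position (neighbor attention);
    distant tokens fall back to coarse, floor-divided positions (grouped
    attention), so no position ever exceeds what the model saw in training.
    """
    rel = q_pos - k_pos
    if rel < neighbor_window:
        return rel  # neighbor attention: exact relative position
    # grouped attention: floor-divide both positions, then shift so the
    # grouped range continues where the neighbor window leaves off
    shift = neighbor_window - neighbor_window // group_size
    return q_pos // group_size - k_pos // group_size + shift


# a distant key pair reuses a much smaller relative position than q_pos - k_pos
print(self_extend_rel_pos(10, 5, group_size=4, neighbor_window=8))   # 5
print(self_extend_rel_pos(100, 0, group_size=4, neighbor_window=8))  # 31
```

Note how the second call maps a raw distance of 100 down to 31, which is the mechanism that lets a fixed training-time context window cover longer sequences without tuning.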

Disentangled Graph Collaborative Filtering

2 code implementations • 3 Jul 2020 • Xiang Wang, Hongye Jin, An Zhang, Xiangnan He, Tong Xu, Tat-Seng Chua

Such a uniform approach to modeling user interests easily results in suboptimal representations, failing to capture diverse relationships and to disentangle user intents in the representations.

Collaborative Filtering Disentanglement

Retiring $\Delta$DP: New Distribution-Level Metrics for Demographic Parity

1 code implementation • 31 Jan 2023 • Xiaotian Han, Zhimeng Jiang, Hongye Jin, Zirui Liu, Na Zou, Qifan Wang, Xia Hu

Unfortunately, in this paper, we reveal that the fairness metric $\Delta DP$ cannot precisely measure the violation of demographic parity, because it inherently has the following drawbacks: i) a zero-value $\Delta DP$ does not guarantee zero violation of demographic parity, and ii) $\Delta DP$ values can vary with different classification thresholds.

Fairness
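Both drawbacks are easy to reproduce numerically. The following toy sketch (my own illustration, not the paper's code; the score values are made up) computes the classic threshold-based $\Delta DP$ and shows that the same two score distributions yield a large gap at one threshold and a zero gap at another:

```python
import numpy as np

def delta_dp(scores, groups, threshold):
    """Classic demographic-parity gap: |P(pred=1 | g=0) - P(pred=1 | g=1)|
    at a fixed decision threshold."""
    preds = scores >= threshold
    rates = [preds[groups == g].mean() for g in np.unique(groups)]
    return abs(rates[0] - rates[1])

# toy scores for two demographic groups with different score distributions
scores = np.array([0.2, 0.4, 0.6, 0.8, 0.1, 0.5, 0.7, 0.9])
groups = np.array([0, 0, 0, 0, 1, 1, 1, 1])

print(delta_dp(scores, groups, 0.5))   # 0.25 -> parity looks violated
print(delta_dp(scores, groups, 0.75))  # 0.0  -> parity looks satisfied
```

The zero value at threshold 0.75 coexists with clearly different score distributions, which is exactly why the paper argues for distribution-level metrics instead of a single-threshold gap.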

Was my Model Stolen? Feature Sharing for Robust and Transferable Watermarks

no code implementations • 29 Sep 2021 • Ruixiang Tang, Hongye Jin, Curtis Wigington, Mengnan Du, Rajiv Jain, Xia Hu

The main idea is to insert a watermark known only to the defender into the protected model; the watermark is then transferred into all stolen models.

Model extraction

GrowLength: Accelerating LLMs Pretraining by Progressively Growing Training Length

no code implementations • 1 Oct 2023 • Hongye Jin, Xiaotian Han, Jingfeng Yang, Zhimeng Jiang, Chia-Yuan Chang, Xia Hu

Our method progressively increases the training length throughout the pretraining phase, thereby mitigating computational costs and enhancing efficiency.
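A progressive-length curriculum like this reduces to a simple schedule over training steps. The sketch below is a hypothetical staged doubling schedule; the function name, stage count, and length bounds are illustrative assumptions, not the paper's actual hyperparameters:

```python
def growlength_schedule(step, total_steps, min_len=512, max_len=4096, stages=4):
    """Illustrative staged schedule: the training sequence length doubles at
    evenly spaced points in pretraining, so early steps are cheap (short
    sequences) and later steps reach the full target length."""
    stage = min(stages - 1, step * stages // total_steps)
    return min(max_len, min_len * (2 ** stage))


# early, middle, and final steps of a 1000-step run
print(growlength_schedule(0, 1000))    # 512
print(growlength_schedule(500, 1000))  # 2048
print(growlength_schedule(999, 1000))  # 4096
```

Since attention cost grows quadratically in sequence length, spending most early steps at short lengths is where the claimed efficiency gain comes from.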

Chasing Fairness Under Distribution Shift: A Model Weight Perturbation Approach

1 code implementation • NeurIPS 2023 • Zhimeng Jiang, Xiaotian Han, Hongye Jin, Guanchu Wang, Rui Chen, Na Zou, Xia Hu

Motivated by these sufficient conditions, we propose robust fairness regularization (RFR) by considering the worst case within the model weight perturbation ball for each sensitive attribute group.

Attribute Fairness

Towards Mitigating Dimensional Collapse of Representations in Collaborative Filtering

no code implementations • 29 Dec 2023 • Huiyuan Chen, Vivian Lai, Hongye Jin, Zhimeng Jiang, Mahashweta Das, Xia Hu

Here we propose a non-contrastive learning objective, named nCL, which explicitly mitigates dimensional collapse of representations in collaborative filtering.

Collaborative Filtering Contrastive Learning +1
