Search Results for author: Huan Gui

Found 10 papers, 2 papers with code

LEVI: Generalizable Fine-tuning via Layer-wise Ensemble of Different Views

no code implementations 7 Feb 2024 Yuji Roh, Qingyun Liu, Huan Gui, Zhe Yuan, Yujin Tang, Steven Euijong Whang, Liang Liu, Shuchao Bi, Lichan Hong, Ed H. Chi, Zhe Zhao

By combining two complementary models, LEVI effectively suppresses problematic features in both the fine-tuning data and the pre-trained model, while preserving useful features for new tasks.
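
No code is linked for this paper; the sketch below is only an assumed illustration of the general idea of a layer-wise ensemble of two models, where a small prediction head is attached to every block of each tower and the per-layer logits are averaged. The block structure, heads, and simple averaging are illustrative assumptions, not LEVI's actual design.

import torch
import torch.nn as nn

class LayerwiseEnsemble(nn.Module):
    """Toy layer-wise ensemble of two models (not the authors' implementation)."""

    def __init__(self, blocks_a, blocks_b, hidden_dim, num_classes):
        super().__init__()
        self.blocks_a = nn.ModuleList(blocks_a)
        self.blocks_b = nn.ModuleList(blocks_b)
        self.heads_a = nn.ModuleList([nn.Linear(hidden_dim, num_classes) for _ in blocks_a])
        self.heads_b = nn.ModuleList([nn.Linear(hidden_dim, num_classes) for _ in blocks_b])

    def forward(self, x):
        logits = []
        h = x
        for block, head in zip(self.blocks_a, self.heads_a):
            h = block(h)
            logits.append(head(h))  # per-layer prediction from the first view
        h = x
        for block, head in zip(self.blocks_b, self.heads_b):
            h = block(h)
            logits.append(head(h))  # per-layer prediction from the second view
        return torch.stack(logits, dim=0).mean(dim=0)  # average all layer-wise views

def make_blocks():
    return [nn.Sequential(nn.Linear(16, 16), nn.ReLU()) for _ in range(3)]

model = LayerwiseEnsemble(make_blocks(), make_blocks(), hidden_dim=16, num_classes=4)
print(model(torch.randn(2, 16)).shape)  # torch.Size([2, 4])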

Hiformer: Heterogeneous Feature Interactions Learning with Transformers for Recommender Systems

no code implementations 10 Nov 2023 Huan Gui, Ruoxi Wang, Ke Yin, Long Jin, Maciej Kula, Taibai Xu, Lichan Hong, Ed H. Chi

We identify two key challenges in applying the vanilla Transformer architecture to web-scale recommender systems: (1) the self-attention layer fails to capture heterogeneous feature interactions; (2) the serving latency of the Transformer architecture may be too high for deployment in web-scale recommender systems.

Recommendation Systems
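
No code is linked here; as an assumed illustration of challenge (1) only, the sketch below treats each feature embedding as a token, applies vanilla self-attention, and then shows one possible heterogeneity-aware variant with per-feature query/key projections. The feature count, dimensions, and per-feature projections are illustrative assumptions, not the Hiformer architecture itself.

import torch
import torch.nn as nn

num_features, d = 4, 32                   # e.g. user, item, context, device features
feats = torch.randn(8, num_features, d)   # a batch of 8 examples

# Vanilla self-attention shares one set of projections across all features,
# so every feature pair is scored the same way regardless of feature type.
vanilla = nn.MultiheadAttention(embed_dim=d, num_heads=4, batch_first=True)
out, _ = vanilla(feats, feats, feats)

# A heterogeneity-aware variant: each feature gets its own query/key maps, so
# the attention score for a pair of features depends on both feature types.
Wq = nn.ModuleList([nn.Linear(d, d) for _ in range(num_features)])
Wk = nn.ModuleList([nn.Linear(d, d) for _ in range(num_features)])
q = torch.stack([Wq[i](feats[:, i]) for i in range(num_features)], dim=1)
k = torch.stack([Wk[i](feats[:, i]) for i in range(num_features)], dim=1)
scores = torch.softmax(q @ k.transpose(1, 2) / d ** 0.5, dim=-1)
interactions = scores @ feats             # (8, num_features, d) interaction-aware outputs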

Talking Models: Distill Pre-trained Knowledge to Downstream Models via Interactive Communication

no code implementations 4 Oct 2023 Zhe Zhao, Qingyun Liu, Huan Gui, Bang An, Lichan Hong, Ed H. Chi

In this paper, we extend KD with an interactive communication process to help student models on downstream tasks learn effectively from pre-trained foundation models.

Knowledge Distillation · Transfer Learning

Expert Finding in Heterogeneous Bibliographic Networks with Locally-trained Embeddings

no code implementations 9 Mar 2018 Huan Gui, Qi Zhu, Liyuan Liu, Aston Zhang, Jiawei Han

We study the task of expert finding in heterogeneous bibliographic networks from two aspects: textual content analysis and authority ranking.

AspEm: Embedding Learning by Aspects in Heterogeneous Information Networks

no code implementations 5 Mar 2018 Yu Shi, Huan Gui, Qi Zhu, Lance Kaplan, Jiawei Han

Therefore, we are motivated to propose a novel embedding learning framework, AspEm, to preserve the semantic information in HINs based on multiple aspects.

Link Prediction · Network Embedding

Empower Sequence Labeling with Task-Aware Neural Language Model

3 code implementations 13 Sep 2017 Liyuan Liu, Jingbo Shang, Frank F. Xu, Xiang Ren, Huan Gui, Jian Peng, Jiawei Han

In this study, we develop a novel neural framework to extract abundant knowledge hidden in raw texts to empower the sequence labeling task.

Language Modelling · Named Entity Recognition +5
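
Three code implementations are linked for this paper; the snippet below is not one of them, only an assumed, heavily simplified sketch of the general idea of sharing a sentence encoder between the labeling task and an auxiliary language-modelling objective (the published LM-LSTM-CRF uses character-level language models and a CRF layer, which are omitted here).

import torch
import torch.nn as nn

class TaggerWithLMAuxiliary(nn.Module):
    """Shared BiLSTM feeding a tag classifier and an auxiliary next-word LM head."""

    def __init__(self, vocab_size, num_tags, emb_dim=64, hidden=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.encoder = nn.LSTM(emb_dim, hidden, bidirectional=True, batch_first=True)
        self.tag_head = nn.Linear(2 * hidden, num_tags)   # sequence-labeling logits
        self.lm_head = nn.Linear(hidden, vocab_size)      # auxiliary LM logits

    def forward(self, token_ids):
        h, _ = self.encoder(self.embed(token_ids))  # (B, T, 2 * hidden)
        fwd = h[..., : h.size(-1) // 2]             # forward-direction states only,
                                                    # so the LM head never sees the future
        return self.tag_head(h), self.lm_head(fwd)

model = TaggerWithLMAuxiliary(vocab_size=1000, num_tags=9)
tag_logits, lm_logits = model(torch.randint(0, 1000, (2, 12)))
print(tag_logits.shape, lm_logits.shape)  # (2, 12, 9) (2, 12, 1000)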

Heterogeneous Supervision for Relation Extraction: A Representation Learning Approach

1 code implementation EMNLP 2017 Liyuan Liu, Xiang Ren, Qi Zhu, Shi Zhi, Huan Gui, Heng Ji, Jiawei Han

These annotations, referred to as heterogeneous supervision, often conflict with each other, which brings a new challenge to the original relation extraction task: how to infer the true label from noisy labels for a given instance.

Relation · Relation Extraction +1
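
Code for the paper is linked above; as a deliberately simple, assumed baseline for the stated challenge (it is not the paper's representation-learning approach), one can aggregate conflicting annotations with a reliability-weighted vote:

from collections import defaultdict

def infer_labels(annotations, weights):
    """Toy reliability-weighted vote over conflicting annotations.

    annotations: {instance_id: [(source_id, label), ...]}
    weights:     {source_id: estimated reliability in (0, 1]}
    """
    inferred = {}
    for instance, votes in annotations.items():
        score = defaultdict(float)
        for source, label in votes:
            score[label] += weights.get(source, 1.0)    # each source votes with its weight
        inferred[instance] = max(score, key=score.get)  # keep the highest-weighted label
    return inferred

anns = {"s1": [("kb", "born_in"), ("pattern_3", "lives_in"), ("pattern_7", "born_in")]}
print(infer_labels(anns, {"kb": 0.9, "pattern_3": 0.4, "pattern_7": 0.6}))
# {'s1': 'born_in'}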

PReP: Path-Based Relevance from a Probabilistic Perspective in Heterogeneous Information Networks

no code implementations 5 Jun 2017 Yu Shi, Po-Wei Chan, Honglei Zhuang, Huan Gui, Jiawei Han

From real-world data, we also identify, and propose to model, cross-meta-path synergy, a characteristic that is important for defining path-based HIN relevance and has not been modeled by existing methods.

Towards Faster Rates and Oracle Property for Low-Rank Matrix Estimation

no code implementations 18 May 2015 Huan Gui, Quanquan Gu

Moreover, we rigorously show that, under a certain condition on the magnitude of the nonzero singular values, the proposed estimator enjoys the oracle property (i.e., it exactly recovers the true rank of the matrix), in addition to attaining a faster rate.

Matrix Completion
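
As a hedged illustration of the class of estimators such results are usually stated for (the paper's exact penalty and conditions are not reproduced in this listing), a singular-value-penalized least-squares estimator takes the form

\[
\hat{\Theta} \in \operatorname*{arg\,min}_{\Theta \in \mathbb{R}^{d_1 \times d_2}}
\frac{1}{2n} \sum_{i=1}^{n} \bigl( y_i - \langle X_i, \Theta \rangle \bigr)^2
+ \sum_{j=1}^{\min(d_1, d_2)} p_{\lambda}\bigl( \sigma_j(\Theta) \bigr),
\]

where \(\sigma_j(\Theta)\) are the singular values of \(\Theta\) and \(p_{\lambda}\) is a (possibly nonconvex) penalty. The oracle property then says that, once the nonzero singular values are large enough, \(\hat{\Theta}\) has exactly the true rank.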

Robust Tensor Decomposition with Gross Corruption

no code implementations NeurIPS 2014 Quanquan Gu, Huan Gui, Jiawei Han

In this paper, we study the statistical performance of robust tensor decomposition with gross corruption.

Tensor Decomposition
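
As an assumed sketch of the standard formulation behind "gross corruption" (the paper's precise model and estimator may differ), the observed tensor is a low-rank part plus sparse corruption and noise, recovered by a regularized fit:

\[
\mathcal{Y} = \mathcal{L}^{*} + \mathcal{S}^{*} + \mathcal{E},
\qquad
(\hat{\mathcal{L}}, \hat{\mathcal{S}}) \in
\operatorname*{arg\,min}_{\mathcal{L},\, \mathcal{S}}
\tfrac{1}{2} \lVert \mathcal{Y} - \mathcal{L} - \mathcal{S} \rVert_F^2
+ \lambda\, \mathcal{R}(\mathcal{L})
+ \mu\, \lVert \mathcal{S} \rVert_1,
\]

where \(\mathcal{L}^{*}\) is low-rank, \(\mathcal{S}^{*}\) collects the sparse gross corruptions, and \(\mathcal{R}\) is a low-rank-inducing regularizer such as a sum of nuclear norms of the tensor's unfoldings.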
