Search Results for author: Hongyi Liu

Found 14 papers, 6 papers with code

Explaining Context Length Scaling and Bounds for Language Models

1 code implementation • 3 Feb 2025 • Jingzhe Shi, Qinwei Ma, Hongyi Liu, Hang Zhao, Jenq-Neng Hwang, Serge Belongie, Lei LI

In this work, we (1) propose a clean and effective theoretical framework for explaining the impact of context length on Language Modeling, from an Intrinsic Space perspective; and (2) conduct experiments on natural language and synthetic data, validating our proposed theoretical assumptions and deductions.

A review on vision-based motion estimation

no code implementations • 19 Jul 2024 • Hongyi Liu, Haifeng Wang

In addition to reviewing the development of each branch of vision-based motion measurement methods, this paper also discusses the advantages and disadvantages of existing methods.

Motion Estimation

LoRA-as-an-Attack! Piercing LLM Safety Under The Share-and-Play Scenario

no code implementations • 29 Feb 2024 • Hongyi Liu, Zirui Liu, Ruixiang Tang, Jiayi Yuan, Shaochen Zhong, Yu-Neng Chuang, Li Li, Rui Chen, Xia Hu

Our aim is to raise awareness of the potential risks under the emerging share-and-play scenario, so as to proactively prevent potential consequences caused by LoRA-as-an-Attack.

Named Entity Recognition Under Domain Shift via Metric Learning for Life Sciences

1 code implementation • 19 Jan 2024 • Hongyi Liu, Qingyun Wang, Payam Karisani, Heng Ji

In our experiments, we observed that such a model is prone to mislabeling the source entities, which can often appear in the text, as the target entities.

Contrastive Learning · Few-Shot Learning · +4

Open-Domain Text Evaluation via Contrastive Distribution Methods

1 code implementation • 20 Jun 2023 • Sidi Lu, Hongyi Liu, Asli Celikyilmaz, Tianlu Wang, Nanyun Peng

We investigate CDM for open-domain text generation evaluation under two paradigms: 1) _Generative_ CDM, which harnesses the contrast of two language models' distributions to generate synthetic examples for training discriminator-based metrics; 2) _Discriminative_ CDM, which directly uses distribution disparities between two language models for evaluation.

Abstractive Text Summarization · Coherence Evaluation · +1
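The Discriminative CDM idea above — scoring text directly from the disparity between two language models' distributions — can be illustrated with a toy sketch. Everything here is hypothetical: the two "models" are hand-made unigram probability tables standing in for a strong and a weak LM, and the score is a simple sum of log-probability gaps, not the paper's exact metric.

```python
import math

# Toy stand-ins for two language models of different capacity.
# All probabilities are made up for illustration.
strong_lm = {"the": 0.20, "cat": 0.10, "sat": 0.08, "zxq": 0.001}
weak_lm   = {"the": 0.22, "cat": 0.05, "sat": 0.03, "zxq": 0.01}

def contrastive_score(tokens, p_strong, p_weak):
    """Sum of per-token log-probability gaps between the two models.

    Text the stronger model prefers far more than the weaker one scores
    higher -- the core intuition behind contrasting two distributions.
    """
    return sum(math.log(p_strong[t]) - math.log(p_weak[t]) for t in tokens)

fluent   = contrastive_score(["the", "cat", "sat"], strong_lm, weak_lm)
degenerate = contrastive_score(["zxq", "zxq", "zxq"], strong_lm, weak_lm)
```

Under these made-up tables, the fluent sequence gets a positive score (the strong model assigns it relatively more mass) while the degenerate one gets a negative score, which is the ranking behavior an evaluation metric needs.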

Go-tuning: Improving Zero-shot Learning Abilities of Smaller Language Models

no code implementations • 20 Dec 2022 • Jingjing Xu, Qingxiu Dong, Hongyi Liu, Lei LI

With increasing scale, large language models demonstrate both quantitative improvement and new qualitative capabilities, especially as zero-shot learners, like GPT-3.

Language Modeling · Language Modelling · +3

Efficient Chemical Space Exploration Using Active Learning Based on Marginalized Graph Kernel: an Application for Predicting the Thermodynamic Properties of Alkanes with Molecular Simulation

1 code implementation • 1 Sep 2022 • Yan Xiang, Yu-Hang Tang, Zheng Gong, Hongyi Liu, Liang Wu, Guang Lin, Huai Sun

We introduce an explorative active learning (AL) algorithm based on Gaussian process regression and marginalized graph kernel (GPR-MGK) to explore chemical space with minimum cost.

Active Learning · GPR · +3
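The explorative active-learning loop described above — fit a Gaussian process, then query the candidate with the highest predictive uncertainty — can be sketched minimally. This is a hypothetical illustration only: a pure-Python GP with an RBF kernel on a 1-D toy function stands in for the paper's marginalized graph kernel on molecular graphs, and `oracle` is a made-up stand-in for the expensive molecular simulation.

```python
import math

def rbf(a, b, ls=1.0):
    """Squared-exponential kernel on scalars (toy stand-in for a graph kernel)."""
    return math.exp(-(a - b) ** 2 / (2 * ls ** 2))

def solve(A, b):
    """Gaussian elimination with partial pivoting (small systems only)."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def gp_posterior(xs, ys, xq, noise=1e-6):
    """GP posterior mean and variance at a query point xq."""
    n = len(xs)
    K = [[rbf(xs[i], xs[j]) + (noise if i == j else 0.0) for j in range(n)]
         for i in range(n)]
    k_star = [rbf(x, xq) for x in xs]
    alpha = solve(K, ys)                      # K^{-1} y
    mean = sum(k_star[i] * alpha[i] for i in range(n))
    v = solve(K, k_star)                      # K^{-1} k_*
    var = rbf(xq, xq) - sum(k_star[i] * v[i] for i in range(n))
    return mean, max(var, 0.0)

def oracle(x):
    """Hypothetical stand-in for the expensive molecular simulation."""
    return math.sin(x)

# Explorative AL: repeatedly label the pool point with highest predictive variance.
pool = [i * 0.5 for i in range(13)]           # toy 1-D "chemical space"
train_x, train_y = [pool.pop(0)], [oracle(0.0)]
for _ in range(5):
    variances = [gp_posterior(train_x, train_y, x)[1] for x in pool]
    i = max(range(len(pool)), key=lambda j: variances[j])
    x_new = pool.pop(i)                       # most uncertain candidate
    train_x.append(x_new)
    train_y.append(oracle(x_new))             # run the "simulation" only here
```

The design point of the acquisition rule is cost: the oracle is only called on the handful of points the GP is most uncertain about, rather than on the whole pool.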

Multi-grained Attention Networks for Single Image Super-Resolution

no code implementations • 26 Sep 2019 • Huapeng Wu, Zhengxia Zou, Jie Gui, Wen-Jun Zeng, Jieping Ye, Jun Zhang, Hongyi Liu, Zhihui Wei

In this paper, we thoroughly investigate the attention mechanisms in an SR model and shed light on how simple and effective improvements on these ideas advance the state of the art.

Feature Importance · Image Super-Resolution

Multilingual Visual Sentiment Concept Matching

no code implementations • 7 Jun 2016 • Nikolaos Pappas, Miriam Redi, Mercan Topkara, Brendan Jou, Hongyi Liu, Tao Chen, Shih-Fu Chang

The impact of culture in visual emotion perception has recently captured the attention of multimedia research.

16k · Clustering · +2

EventNet Version 1.1 Technical Report

no code implementations • 24 May 2016 • Dongang Wang, Zheng Shou, Hongyi Liu, Shih-Fu Chang

Finally, EventNet version 1.1 contains 67,641 videos, 500 events, and 5,028 event-specific concepts.
