Search Results for author: Hongyin Luo

Found 23 papers, 17 papers with code

HALC: Object Hallucination Reduction via Adaptive Focal-Contrast Decoding

1 code implementation • 1 Mar 2024 • Zhaorun Chen, Zhuokai Zhao, Hongyin Luo, Huaxiu Yao, Bo Li, Jiawei Zhou

While large vision-language models (LVLMs) have demonstrated impressive capabilities in interpreting multi-modal contexts, they invariably suffer from object hallucinations (OH).

Hallucination • Object • +1

Self-Specialization: Uncovering Latent Expertise within Large Language Models

no code implementations • 29 Sep 2023 • Junmo Kang, Hongyin Luo, Yada Zhu, James Glass, David Cox, Alan Ritter, Rogerio Feris, Leonid Karlinsky

Recent works have demonstrated the effectiveness of self-alignment in which a large language model is, by itself, aligned to follow general instructions through the automatic generation of instructional data using a handful of human-written seeds.

Hallucination • Instruction Following • +2

Joint Audio and Speech Understanding

1 code implementation • 25 Sep 2023 • Yuan Gong, Alexander H. Liu, Hongyin Luo, Leonid Karlinsky, James Glass

Humans are surrounded by audio signals that include both speech and non-speech sounds.

DoLa: Decoding by Contrasting Layers Improves Factuality in Large Language Models

2 code implementations • 7 Sep 2023 • Yung-Sung Chuang, Yujia Xie, Hongyin Luo, Yoon Kim, James Glass, Pengcheng He

Despite their impressive capabilities, large language models (LLMs) are prone to hallucinations, i.e., generating content that deviates from facts seen during pretraining.
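
The title names the core trick: re-score next-token candidates by contrasting the final (mature) layer's distribution with that of an earlier (premature) layer. As a rough illustration of that layer-contrast idea, not necessarily the paper's exact algorithm, here is a minimal PyTorch sketch; the function name, the fixed early layer, and the `alpha` plausibility cutoff are all illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def contrast_layers(final_logits: torch.Tensor,
                    early_logits: torch.Tensor,
                    alpha: float = 0.1) -> torch.Tensor:
    """Score next tokens by (final-layer log-prob) - (early-layer log-prob).

    Both inputs are [vocab_size] logits obtained by applying the same LM
    head to the hidden states of two different transformer layers.
    """
    log_p_final = F.log_softmax(final_logits, dim=-1)
    log_p_early = F.log_softmax(early_logits, dim=-1)

    # Only contrast tokens the final layer already finds plausible, so the
    # subtraction cannot promote tokens the model never really considered.
    cutoff = log_p_final.max() + torch.log(torch.tensor(alpha))
    scores = log_p_final - log_p_early
    scores[log_p_final < cutoff] = float("-inf")
    return scores.argmax()  # greedy choice under the contrasted scores
```

The intuition is that factual knowledge tends to sharpen in later layers, so tokens whose probability grows between the early and final layers are up-weighted, while tokens the model was already guessing early on are damped.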

Entailment as Robust Self-Learner

1 code implementation • 26 May 2023 • Jiaxin Ge, Hongyin Luo, Yoon Kim, James Glass

Experiments on binary and multi-class classification tasks show that SimPLE leads to more robust self-training results, indicating that the self-trained entailment models are more efficient and trustworthy than large language models on language understanding tasks.

Multi-class Classification • Natural Language Understanding • +1
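
The recipe underlying this entailment line of work is to cast classification as natural language inference: verbalize each label as a hypothesis and pick the label the input entails most strongly. Below is a minimal sketch using an off-the-shelf NLI checkpoint; the model choice and the hypothesis template are assumptions, and this is the generic recipe rather than the paper's SimPLE self-training procedure.

```python
# pip install transformers torch
from transformers import pipeline

# Any NLI-finetuned encoder works here; roberta-large-mnli is one option.
nli = pipeline("zero-shot-classification", model="roberta-large-mnli")

result = nli(
    "The battery died after twenty minutes.",
    candidate_labels=["positive review", "negative review"],
    hypothesis_template="This example is a {}.",
)
print(result["labels"][0])  # label with the highest entailment score
```

Self-training then loops: pseudo-label unlabeled text with such a model, keep confident pseudo-labels, and fine-tune the entailment model on them; SimPLE's contribution is making that pseudo-labeling step more robust.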

SAIL: Search-Augmented Instruction Learning

no code implementations • 24 May 2023 • Hongyin Luo, Yung-Sung Chuang, Yuan Gong, Tianhua Zhang, Yoon Kim, Xixin Wu, Danny Fox, Helen Meng, James Glass

Large language models (LLMs) have been significantly improved by instruction fine-tuning, but still lack transparency and the ability to utilize up-to-date knowledge and information.

Denoising • Fact Checking • +3

Listen, Think, and Understand

1 code implementation • 18 May 2023 • Yuan Gong, Hongyin Luo, Alexander H. Liu, Leonid Karlinsky, James Glass

On the other hand, modern large language models (LLMs) exhibit emergent reasoning ability, but they lack audio perception capabilities.

Ranked #3 on Music Question Answering on MusicQA (using extra training data)

Language Modelling • Large Language Model • +1

Chain of Thought Prompt Tuning in Vision Language Models

no code implementations • 16 Apr 2023 • Jiaxin Ge, Hongyin Luo, Siyuan Qian, Yulu Gan, Jie Fu, Shanghang Zhang

Chain of Thought is a simple and effective approximation of the human reasoning process and has proven useful for natural language processing (NLP) tasks.

Domain Generalization • Image Classification • +4

Interpretable Unified Language Checking

1 code implementation • 7 Apr 2023 • Tianhua Zhang, Hongyin Luo, Yung-Sung Chuang, Wei Fang, Luc Gaitskell, Thomas Hartvigsen, Xixin Wu, Danny Fox, Helen Meng, James Glass

Despite recent concerns about undesirable behaviors generated by large language models (LLMs), including non-factual, biased, and hateful language, we find that LLMs are inherently multi-task language checkers, based on their latent representations of natural and social knowledge.

Fact Checking • Fairness • +2

Logic Against Bias: Textual Entailment Mitigates Stereotypical Sentence Reasoning

1 code implementation • 10 Mar 2023 • Hongyin Luo, James Glass

Due to their similarity-based learning objectives, pretrained sentence encoders often internalize stereotypical assumptions that reflect the social biases that exist within their training corpora.

Natural Language Inference • Sentence • +1

Meta-learning for downstream aware and agnostic pretraining

no code implementations • 6 Jun 2021 • Hongyin Luo, Shuyan Dong, Yung-Sung Chuang, Shang-Wen Li

Neural network pretraining is gaining attention due to its outstanding performance in natural language processing applications.

Meta-Learning

Cooperative Self-training of Machine Reading Comprehension

1 code implementation • NAACL 2022 • Hongyin Luo, Shang-Wen Li, Mingye Gao, Seunghak Yu, James Glass

Pretrained language models have significantly improved the performance of downstream language understanding tasks, including extractive question answering, by providing high-quality contextualized word embeddings.

Extractive Question-Answering • Machine Reading Comprehension • +6

Knowledge Grounded Conversational Symptom Detection with Graph Memory Networks

1 code implementation • EMNLP (ClinicalNLP) 2020 • Hongyin Luo, Shang-Wen Li, James Glass

Given a set of explicit symptoms provided by the patient to initiate a diagnostic dialog, the system is trained to collect implicit symptoms by asking questions, in order to gather more information for making an accurate diagnosis.

Goal-Oriented Dialog

Prototypical Q Networks for Automatic Conversational Diagnosis and Few-Shot New Disease Adaption

no code implementations • 19 May 2020 • Hongyin Luo, Shang-Wen Li, James Glass

Experiments showed that ProtoQN significantly outperformed the baseline DQN model in both supervised and few-shot learning scenarios, achieving state-of-the-art few-shot learning performance.

Few-Shot Learning

Language Modeling with Graph Temporal Convolutional Networks

no code implementations • ICLR 2019 • Hongyin Luo, Yichen Li, Jie Fu, James Glass

Recently, there have been some attempts to use non-recurrent neural models for language modeling.

Language Modelling

Learning Word Representations with Cross-Sentence Dependency for End-to-End Co-reference Resolution

1 code implementation • EMNLP 2018 • Hongyin Luo, Jim Glass

In this work, we present a word embedding model that learns cross-sentence dependency for improving end-to-end co-reference resolution (E2E-CR).

Coreference Resolution • Sentence

Adaptive Bidirectional Backpropagation: Towards Biologically Plausible Error Signal Transmission in Neural Networks

2 code implementations • 23 Feb 2017 • Hongyin Luo, Jie Fu, James Glass

However, it has been argued that this is not biologically plausible, because back-propagating error signals through the exact incoming weights is not considered possible in biological neural systems.
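
The issue described here is the weight-transport problem: exact backprop sends the error back through the transpose of the forward weights. One well-known family of workarounds replaces that transpose with a separate feedback pathway; the NumPy toy below uses a fixed random feedback matrix (feedback-alignment style) purely to make the issue concrete, whereas the paper's method adapts the backward pathway rather than fixing it.

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny one-hidden-layer regression network.
W1 = 0.1 * rng.normal(size=(4, 8))   # input -> hidden
W2 = 0.1 * rng.normal(size=(8, 1))   # hidden -> output
B = 0.1 * rng.normal(size=(1, 8))    # feedback weights, NOT tied to W2.T

x = rng.normal(size=(1, 4))
y = np.array([[1.0]])
lr = 0.1

for _ in range(200):
    h = np.tanh(x @ W1)
    y_hat = h @ W2
    e = y_hat - y                     # output error
    # Exact backprop would propagate e @ W2.T; here the error travels
    # back through the separate matrix B instead.
    dh = (e @ B) * (1.0 - h ** 2)
    W2 -= lr * (h.T @ e)
    W1 -= lr * (x.T @ dh)

print(e.item())  # the error still shrinks even though B != W2.T
```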

DrMAD: Distilling Reverse-Mode Automatic Differentiation for Optimizing Hyperparameters of Deep Neural Networks

1 code implementation • 5 Jan 2016 • Jie Fu, Hongyin Luo, Jiashi Feng, Kian Hsiang Low, Tat-Seng Chua

The performance of deep neural networks is well-known to be sensitive to the setting of their hyperparameters.
