Search Results for author: Buru Chang

Found 20 papers, 11 papers with code

In-Context Learning with Noisy Labels

no code implementations · 29 Nov 2024 · Junyong Kang, Donghyun Son, Hwanjun Song, Buru Chang

In-context learning refers to the emergent ability of large language models (LLMs) to perform a target task without additional training, using demonstrations of the task.

In-Context Learning · Learning with noisy labels
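
The setting above is easy to make concrete: a few-shot prompt is assembled from labeled demonstrations, some of which may be mislabeled. The sketch below shows only the prompt format; the demonstration texts and the sentiment task are invented for illustration, and the paper's noise-handling method is not reproduced.

```python
# Minimal sketch of the in-context learning setup the paper studies:
# a few-shot prompt built from (input, label) demonstrations, one of
# which carries a noisy label. Illustrative only.

demonstrations = [
    ("The movie was wonderful.", "positive"),
    ("I hated every minute.", "negative"),
    ("A beautiful, moving film.", "negative"),  # noisy label
]

def build_prompt(demos, query):
    """Format demonstrations and the query as a few-shot prompt."""
    lines = [f"Review: {x}\nSentiment: {y}" for x, y in demos]
    lines.append(f"Review: {query}\nSentiment:")
    return "\n\n".join(lines)

print(build_prompt(demonstrations, "Surprisingly good."))
```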

Is 'Right' Right? Enhancing Object Orientation Understanding in Multimodal Language Models through Egocentric Instruction Tuning

1 code implementation · 24 Nov 2024 · Ji Hyeok Jung, Eun Tae Kim, Seo Yeon Kim, Joo Ho Lee, Bumsoo Kim, Buru Chang

Multimodal large language models (MLLMs) act as essential interfaces, connecting humans with AI technologies in multimodal applications.

SHARE: Shared Memory-Aware Open-Domain Long-Term Dialogue Dataset Constructed from Movie Script

no code implementations · 28 Oct 2024 · Eunwon Kim, Chanho Park, Buru Chang

We also introduce EPISODE, a long-term dialogue framework based on SHARE that utilizes shared experiences between individuals.

CUPID: A Real-Time Session-Based Reciprocal Recommendation System for a One-on-One Social Discovery Platform

no code implementations · 8 Oct 2024 · Beomsu Kim, SangBum Kim, Minchan Kim, Joonyoung Yi, Sungjoo Ha, Suhyun Lee, Youngsoo Lee, Gihun Yeom, Buru Chang, Gihun Lee

However, conventional session-based approaches struggle with high latency due to the demands of modeling sequential user behavior for each recommendation process.

Recommendation Systems

ConVis: Contrastive Decoding with Hallucination Visualization for Mitigating Hallucinations in Multimodal Large Language Models

1 code implementation · 25 Aug 2024 · Yeji Park, Deokyeong Lee, Junsuk Choe, Buru Chang

Hallucinations in Multimodal Large Language Models (MLLMs), where generated responses fail to accurately reflect the given image, pose a significant challenge to their reliability.

Hallucination
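
ConVis is a contrastive decoding method. As a rough illustration of the generic contrastive-decoding step (not ConVis's specific contrast signal, which comes from hallucination visualization), next-token logits from a clean pass can be contrasted against logits from a hallucination-prone pass; `alpha` is an assumed hyperparameter.

```python
import numpy as np

# Generic contrastive decoding: down-weight tokens the
# hallucination-prone pass favors. Logits are random stand-ins.
def contrastive_logits(logits_clean, logits_halluc, alpha=1.0):
    return (1 + alpha) * logits_clean - alpha * logits_halluc

rng = np.random.default_rng(0)
lo = rng.normal(size=8)            # logits from the clean decoding pass
lh = rng.normal(size=8)            # logits from the hallucination-prone pass
adj = contrastive_logits(lo, lh)
probs = np.exp(adj - adj.max())    # numerically stable softmax
probs /= probs.sum()
print("next-token distribution:", np.round(probs, 3))
```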

Review-driven Personalized Preference Reasoning with Large Language Models for Recommendation

1 code implementation · 12 Aug 2024 · Jieyong Kim, Hyunseo Kim, Hyunjin Cho, SeongKu Kang, Buru Chang, Jinyoung Yeo, Dongha Lee

Recent advancements in Large Language Models (LLMs) have demonstrated exceptional performance across a wide range of tasks, generating significant interest in their application to recommendation systems.

Prediction · Recommendation Systems

ESREAL: Exploiting Semantic Reconstruction to Mitigate Hallucinations in Vision-Language Models

no code implementations · 24 Mar 2024 · Minchan Kim, Minyeong Kim, Junik Bae, Suhwan Choi, Sungkyung Kim, Buru Chang

Subsequently, ESREAL computes token-level hallucination scores by assessing the semantic similarity of aligned regions based on the type of hallucination.

Hallucination · Semantic Similarity +1
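
A minimal sketch of the token-level scoring idea described above, assuming cosine similarity between embeddings of aligned image regions and their reconstructions. The embeddings here are random stand-ins for a real vision encoder, and ESREAL's type-dependent scoring is not reproduced.

```python
import numpy as np

# Score each generated token's aligned region by how far its
# reconstruction drifts semantically: 1 - cosine similarity,
# so higher means more likely hallucinated.
def hallucination_scores(region_emb, recon_emb):
    a = region_emb / np.linalg.norm(region_emb, axis=1, keepdims=True)
    b = recon_emb / np.linalg.norm(recon_emb, axis=1, keepdims=True)
    return 1.0 - (a * b).sum(axis=1)

rng = np.random.default_rng(0)
regions = rng.normal(size=(4, 16))                        # aligned-region embeddings
recons = regions + rng.normal(scale=0.5, size=(4, 16))    # reconstructed regions
print(np.round(hallucination_scores(regions, recons), 3))
```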

Gradient Estimation for Unseen Domain Risk Minimization with Pre-Trained Models

no code implementations · 3 Feb 2023 · Byounggyu Lew, Donghyun Son, Buru Chang

Although task-specific knowledge can be learned from source domains by fine-tuning, this hurts the generalization power of pre-trained models due to gradient bias toward the source domains.

Domain Generalization · Model Optimization
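
As a simple illustration of countering gradient bias toward source domains (a standard proximal regularizer, not the paper's gradient-estimation method), fine-tuning can be penalized for drifting from the pre-trained weights:

```python
import numpy as np

# Toy fine-tuning loop where the update combines the task gradient
# with the gradient of lam/2 * ||w - w_pretrained||^2, limiting how
# far source-domain gradients can drag the weights.
def regularized_grad(task_grad, w, w_pretrained, lam=0.1):
    return task_grad + lam * (w - w_pretrained)

w0 = np.array([1.0, -2.0, 0.5])    # pre-trained weights
w = w0.copy()
for _ in range(100):
    # invented quadratic task loss pulling toward a source-domain optimum
    task_grad = 2 * (w - np.array([3.0, 0.0, 0.0]))
    w -= 0.05 * regularized_grad(task_grad, w, w0)
print("fine-tuned weights:", np.round(w, 3))
```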

TiDAL: Learning Training Dynamics for Active Learning

1 code implementation · ICCV 2023 · Seong Min Kye, Kwanghee Choi, Hyeongmin Byun, Buru Chang

Active learning (AL) aims to select the most useful data samples from an unlabeled data pool and annotate them to expand the labeled dataset under a limited budget.

Active Learning
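
The selection step of the AL loop described above can be sketched with the standard uncertainty-sampling baseline: score each unlabeled sample by predictive entropy and annotate the top-k under the budget. TiDAL's training-dynamics-based scoring is not reproduced here.

```python
import numpy as np

# Entropy-based acquisition: higher predictive entropy = more uncertain.
def entropy(p, eps=1e-12):
    return -(p * np.log(p + eps)).sum(axis=1)

def select_for_labeling(probs, budget):
    """Return indices of the `budget` most uncertain pool samples."""
    return np.argsort(-entropy(probs))[:budget]

rng = np.random.default_rng(0)
logits = rng.normal(size=(10, 3))                       # unlabeled pool, 3 classes
probs = np.exp(logits) / np.exp(logits).sum(1, keepdims=True)
print("annotate:", select_for_labeling(probs, budget=3))
```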

Measuring and Improving Semantic Diversity of Dialogue Generation

1 code implementation · 11 Oct 2022 · Seungju Han, Beomsu Kim, Buru Chang

In this paper, we introduce a new automatic evaluation metric to measure the semantic diversity of generated responses.

Dialogue Generation · Diversity
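
As a hedged illustration of what an embedding-based semantic-diversity score can look like (not the paper's metric), one can average the pairwise cosine distances between sentence embeddings of the generated responses:

```python
import numpy as np

# Mean pairwise cosine distance over the upper triangle of the
# similarity matrix; higher = more semantically diverse. The
# embeddings are random stand-ins for a real sentence encoder.
def semantic_diversity(embs):
    e = embs / np.linalg.norm(embs, axis=1, keepdims=True)
    sims = e @ e.T
    iu = np.triu_indices(len(e), k=1)
    return float((1.0 - sims[iu]).mean())

rng = np.random.default_rng(0)
responses = rng.normal(size=(5, 32))    # embeddings of 5 generated responses
print(f"diversity: {semantic_diversity(responses):.3f}")
```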

Reliable Decision from Multiple Subtasks through Threshold Optimization: Content Moderation in the Wild

1 code implementation · 16 Aug 2022 · Donghyun Son, Byounggyu Lew, Kwanghee Choi, Yongsu Baek, Seungwoo Choi, Beomjun Shin, Sungjoo Ha, Buru Chang

In this study, we formulate real-world scenarios of content moderation and introduce a simple yet effective threshold optimization method that searches for the optimal thresholds of the multiple subtasks to make reliable moderation decisions in a cost-effective way.
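
A minimal sketch of that threshold-search idea: grid-search one threshold per subtask on a validation set and keep the combination with the lowest moderation cost. The cost weights and the OR-style decision rule below are assumptions for illustration, not the paper's formulation.

```python
import numpy as np
from itertools import product

# Flag content if any subtask score crosses its threshold, then
# charge assumed costs for false positives and false negatives.
def moderation_cost(scores, labels, thresholds, c_fp=1.0, c_fn=5.0):
    decisions = (scores >= thresholds).any(axis=1)
    fp = (decisions & ~labels).sum()
    fn = (~decisions & labels).sum()
    return c_fp * fp + c_fn * fn

rng = np.random.default_rng(0)
scores = rng.uniform(size=(200, 2))          # two subtask scores per item
labels = rng.uniform(size=200) < 0.3         # ground truth "should moderate"
grid = np.linspace(0.1, 0.9, 9)
best = min(product(grid, grid),
           key=lambda t: moderation_cost(scores, labels, np.array(t)))
print("best thresholds:", best)
```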

Meet Your Favorite Character: Open-domain Chatbot Mimicking Fictional Characters with only a Few Utterances

1 code implementation · NAACL 2022 · Seungju Han, Beomsu Kim, Jin Yong Yoo, Seokjun Seo, SangBum Kim, Enkhbayar Erdenee, Buru Chang

To better reflect the style of the character, PDP builds the prompts in the form of dialog that includes the character's utterances as dialog history.

Chatbot · Retrieval
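
The prompt format described above is straightforward to sketch: the character's few utterances are arranged as dialog history so the model continues in that style. PDP's pseudo-context matching is simplified here to a placeholder user turn, and the utterances are invented examples.

```python
# Build a dialog-form prompt from a handful of character utterances,
# ending with the user's message and an open "Character:" turn.

character_utterances = [
    "By my troth, a merry heart goes all the day!",
    "I would not wish any companion in the world but you.",
]

def build_dialog_prompt(utterances, user_message):
    turns = []
    for u in utterances:
        turns.append("User: ...")          # simplified pseudo-context turn
        turns.append(f"Character: {u}")
    turns.append(f"User: {user_message}")
    turns.append("Character:")
    return "\n".join(turns)

print(build_dialog_prompt(character_utterances, "How are you today?"))
```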

Understanding and Improving the Exemplar-based Generation for Open-domain Conversation

1 code implementation · NLP4ConvAI (ACL) 2022 · Seungju Han, Beomsu Kim, Seokjun Seo, Enkhbayar Erdenee, Buru Chang

Extensive experiments demonstrate that our proposed training method alleviates the drawbacks of the existing exemplar-based generative models and significantly improves the performance in terms of appropriateness and informativeness.

Informativeness · Retrieval

Temporal Knowledge Distillation for On-device Audio Classification

no code implementations · 27 Oct 2021 · Kwanghee Choi, Martin Kersner, Jacob Morton, Buru Chang

Improving the performance of on-device audio classification models remains a challenge given the computational limits of the mobile environment.

Audio Classification · Event Detection +2

Distilling the Knowledge of Large-scale Generative Models into Retrieval Models for Efficient Open-domain Conversation

1 code implementation · Findings (EMNLP) 2021 · Beomsu Kim, Seokjun Seo, Seungju Han, Enkhbayar Erdenee, Buru Chang

G2R consists of two distinct distillation techniques: data-level G2R augments the dialogue dataset with additional responses generated by the large-scale generative model, and model-level G2R transfers the response quality score assessed by the generative model to the retrieval model's score through a knowledge distillation loss.

Knowledge Distillation · Retrieval
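
The model-level part of G2R can be sketched as a regression-style distillation: the retrieval model's score is trained to match the quality score the generative model assigns. An MSE loss and a toy gradient-descent loop are assumed here for illustration; the paper's exact loss may differ.

```python
import numpy as np

# Distill generator-assessed quality scores (teacher) into the
# retrieval model's scores (student) by minimizing MSE.
def kd_loss(student_scores, teacher_scores):
    return float(((student_scores - teacher_scores) ** 2).mean())

def kd_grad(student_scores, teacher_scores):
    return 2 * (student_scores - teacher_scores) / len(student_scores)

rng = np.random.default_rng(0)
teacher = rng.uniform(size=8)       # generator-assessed quality scores
student = np.zeros(8)               # toy stand-in for retrieval scores
for _ in range(200):                # gradient descent on the KD loss
    student -= 0.5 * kd_grad(student, teacher)
print(f"final KD loss: {kd_loss(student, teacher):.6f}")
```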

Efficient Click-Through Rate Prediction for Developing Countries via Tabular Learning

no code implementations · 15 Apr 2021 · Joonyoung Yi, Buru Chang

Despite the rapid growth of online advertising in developing countries, existing highly over-parameterized Click-Through Rate (CTR) prediction models are difficult to deploy because of limited computing resources.

Click-Through Rate Prediction · Prediction

"Killing Me" Is Not a Spoiler: Spoiler Detection Model using Graph Neural Networks with Dependency Relation-Aware Attention Mechanism

no code implementations · 15 Jan 2021 · Buru Chang, Inggeol Lee, Hyunjae Kim, Jaewoo Kang

Several machine learning-based spoiler detection models have been proposed recently to protect users from spoilers on review websites.

BIG-bench Machine Learning

Disentangling Label Distribution for Long-tailed Visual Recognition

2 code implementations · CVPR 2021 · Youngkyu Hong, Seungju Han, Kwanghee Choi, Seokjun Seo, Beomsu Kim, Buru Chang

Although this method surpasses state-of-the-art methods on benchmark datasets, it can be further improved by directly disentangling the source label distribution from the model prediction in the training phase.

Image Classification · Long-tail Learning +1
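
A common way to make "disentangling the source label distribution" concrete is post-hoc logit adjustment: subtract the log of the long-tailed training prior and add the log of the target prior. The paper's training-phase formulation goes further; this sketch shows only the post-hoc view, with invented priors and logits.

```python
import numpy as np

# Remove the source (training) label prior from the logits and
# substitute the target (e.g. uniform) prior.
def adjust_logits(logits, source_prior, target_prior):
    return logits - np.log(source_prior) + np.log(target_prior)

source_prior = np.array([0.7, 0.2, 0.1])      # long-tailed training prior
target_prior = np.full(3, 1 / 3)              # balanced test-time prior
logits = np.array([1.2, 1.0, 0.9])            # model output for one sample
adj = adjust_logits(logits, source_prior, target_prior)
print("argmax before:", logits.argmax(), "after:", adj.argmax())
```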
