Search Results for author: Jinheon Baek

Found 23 papers, 15 papers with code

ResearchAgent: Iterative Research Idea Generation over Scientific Literature with Large Language Models

no code implementations · 11 Apr 2024 · Jinheon Baek, Sujay Kumar Jauhar, Silviu Cucerzan, Sung Ju Hwang

Scientific research, vital for improving human life, is hindered by its inherent complexity, slow pace, and the need for specialized experts.

Language Modelling Large Language Model +1

Adaptive-RAG: Learning to Adapt Retrieval-Augmented Large Language Models through Question Complexity

1 code implementation · 21 Mar 2024 · Soyeong Jeong, Jinheon Baek, Sukmin Cho, Sung Ju Hwang, Jong C. Park

Retrieval-Augmented Large Language Models (LLMs), which incorporate the non-parametric knowledge from external knowledge bases into LLMs, have emerged as a promising approach to enhancing response accuracy in several tasks, such as Question-Answering (QA).

Question Answering Retrieval
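The adaptive routing idea in this abstract can be illustrated with a minimal sketch. The function names and thresholds below are hypothetical (the paper learns a classifier over question complexity; a fixed score cutoff merely stands in for it):

```python
# Hypothetical sketch of complexity-based routing: a predicted complexity
# score selects among no retrieval, single-step retrieval, and multi-step
# retrieval. Thresholds and names are illustrative, not the paper's.

def route_query(complexity_score: float) -> str:
    """Map a question-complexity score in [0, 1] to a retrieval strategy."""
    if complexity_score < 0.33:
        return "no_retrieval"    # simple question: answer from the LLM alone
    elif complexity_score < 0.66:
        return "single_step"     # moderate: one retrieve-then-read pass
    return "multi_step"          # complex: iterative retrieval and reasoning

strategies = [route_query(s) for s in (0.1, 0.5, 0.9)]
print(strategies)  # ['no_retrieval', 'single_step', 'multi_step']
```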

Retrieval-Augmented Data Augmentation for Low-Resource Domain Tasks

no code implementations · 21 Feb 2024 · Minju Seo, Jinheon Baek, James Thorne, Sung Ju Hwang

Many existing works tackle this problem by generating synthetic data from the training data and then training models on it, recently by using Large Language Models (LLMs).

Data Augmentation Retrieval

Knowledge-Augmented Large Language Models for Personalized Contextual Query Suggestion

no code implementations · 10 Nov 2023 · Jinheon Baek, Nirupama Chandrasekaran, Silviu Cucerzan, Allen Herring, Sujay Kumar Jauhar

Specifically, we construct an entity-centric knowledge store for each user based on their search and browsing activities on the web, which is then leveraged to provide contextually relevant LLM prompt augmentations.

Knowledge Graphs
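The entity-centric knowledge store described in this abstract can be sketched as follows. This is a toy illustration only; the class, its interface, and the frequency-based ranking are assumptions, not the paper's actual implementation:

```python
from collections import defaultdict

class UserKnowledgeStore:
    """Toy entity-centric store: record entities from a user's search and
    browsing activity, then surface the most frequent ones as prompt context.
    Interface and ranking heuristic are illustrative assumptions."""

    def __init__(self):
        self.entity_counts = defaultdict(int)

    def record_activity(self, entities):
        # Tally entities observed in the user's web activity.
        for e in entities:
            self.entity_counts[e] += 1

    def augment_prompt(self, query: str, top_k: int = 2) -> str:
        # Prepend the user's top entities as contextual prompt augmentation.
        top = sorted(self.entity_counts, key=self.entity_counts.get, reverse=True)[:top_k]
        return f"User interests: {', '.join(top)}\nQuery: {query}"

store = UserKnowledgeStore()
store.record_activity(["hiking", "trail running", "hiking"])
print(store.augment_prompt("suggest weekend activities"))
```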

Test-Time Self-Adaptive Small Language Models for Question Answering

1 code implementation · 20 Oct 2023 · Soyeong Jeong, Jinheon Baek, Sukmin Cho, Sung Ju Hwang, Jong C. Park

Moreover, further fine-tuning LMs on labeled datasets is often infeasible when such datasets are unavailable, and it remains questionable whether smaller LMs with limited knowledge can be transferred using only unlabeled test data.

General Knowledge Question Answering

Knowledge-Augmented Language Model Verification

1 code implementation · 19 Oct 2023 · Jinheon Baek, Soyeong Jeong, Minki Kang, Jong C. Park, Sung Ju Hwang

Recent Language Models (LMs) have shown impressive capabilities in generating texts with the knowledge internalized in parameters.

Language Modelling Question Answering +1

Phrase Retrieval for Open-Domain Conversational Question Answering with Conversational Dependency Modeling via Contrastive Learning

1 code implementation · 7 Jun 2023 · Soyeong Jeong, Jinheon Baek, Sung Ju Hwang, Jong C. Park

To address this problem, we further introduce a novel contrastive learning strategy that reflects previous turns when retrieving the phrase for the current context, maximizing the representational similarity of consecutive turns in a conversation while minimizing that of irrelevant conversational contexts.

Contrastive Learning Conversational Question Answering +1
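The contrastive objective described in this abstract can be sketched with an InfoNCE-style loss: the embedding of the consecutive turn acts as the positive, irrelevant contexts as negatives. The vectors, temperature, and similarity choice below are illustrative assumptions, not the paper's training setup:

```python
import numpy as np

def info_nce(anchor, positive, negatives, temperature=0.1):
    """InfoNCE-style loss: pull the consecutive-turn (positive) embedding
    toward the anchor while pushing away irrelevant-context (negative) ones."""
    def cos(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
    # Positive similarity at index 0, negatives after it.
    logits = np.array([cos(anchor, positive)] + [cos(anchor, n) for n in negatives])
    logits /= temperature
    # Softmax cross-entropy with the positive as the target class.
    return float(-logits[0] + np.log(np.exp(logits).sum()))

anchor   = np.array([1.0, 0.0])
positive = np.array([0.9, 0.1])   # embedding of the consecutive turn
negative = np.array([0.0, 1.0])   # embedding of an irrelevant context
loss_aligned  = info_nce(anchor, positive, [negative])
loss_shuffled = info_nce(anchor, negative, [positive])
assert loss_aligned < loss_shuffled  # aligned consecutive turns give lower loss
```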

Knowledge-Augmented Language Model Prompting for Zero-Shot Knowledge Graph Question Answering

no code implementations · 7 Jun 2023 · Jinheon Baek, Alham Fikri Aji, Amir Saffari

We validate the performance of our KAPING framework on the knowledge graph question answering task, which aims to answer the user's question based on facts over a knowledge graph, on which ours outperforms relevant zero-shot baselines by up to 48% on average across multiple LLMs of various sizes.

Graph Question Answering Language Modelling +1
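The prompting scheme sketched in this abstract (retrieved knowledge-graph facts prepended to a zero-shot QA prompt) can be illustrated as follows. The function name, triple verbalization format, and prompt wording are assumptions for illustration:

```python
def kaping_prompt(question: str, triples: list) -> str:
    """Prepend verbalized KG triples to a zero-shot QA prompt, in the spirit
    of knowledge-augmented prompting. Verbalization format is an assumption."""
    facts = "\n".join(f"({s}, {r}, {o})" for s, r, o in triples)
    return (
        "Below are facts that may be relevant to the question.\n"
        f"{facts}\n"
        f"Question: {question}\nAnswer:"
    )

prompt = kaping_prompt(
    "Where was Marie Curie born?",
    [("Marie Curie", "place of birth", "Warsaw")],
)
print(prompt)
```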

Knowledge Graph-Augmented Language Models for Knowledge-Grounded Dialogue Generation

no code implementations · 30 May 2023 · Minki Kang, Jin Myung Kwak, Jinheon Baek, Sung Ju Hwang

To overcome this limitation, we propose SUbgraph Retrieval-augmented GEneration (SURGE), a framework for generating context-relevant and knowledge-grounded dialogues with the KG.

Contrastive Learning Dialogue Generation +3

Knowledge-Augmented Reasoning Distillation for Small Language Models in Knowledge-Intensive Tasks

1 code implementation · NeurIPS 2023 · Minki Kang, Seanie Lee, Jinheon Baek, Kenji Kawaguchi, Sung Ju Hwang

Large Language Models (LLMs) have shown promising performance in knowledge-intensive reasoning tasks that require a compound understanding of knowledge.

Memorization StrategyQA

Direct Fact Retrieval from Knowledge Graphs without Entity Linking

no code implementations · 21 May 2023 · Jinheon Baek, Alham Fikri Aji, Jens Lehmann, Sung Ju Hwang

There has been a surge of interest in utilizing Knowledge Graphs (KGs) for various natural language processing/understanding tasks.

Entity Disambiguation Entity Linking +5

Realistic Conversational Question Answering with Answer Selection based on Calibrated Confidence and Uncertainty Measurement

1 code implementation · 10 Feb 2023 · Soyeong Jeong, Jinheon Baek, Sung Ju Hwang, Jong C. Park

Conversational Question Answering (ConvQA) models aim at answering a question given its relevant paragraph and the question-answer pairs that occurred previously in the conversation.

Answer Selection Conversational Question Answering

Object Detection in Aerial Images with Uncertainty-Aware Graph Network

no code implementations · 23 Aug 2022 · Jongha Kim, Jinheon Baek, Sung Ju Hwang

To achieve this, we first detect objects and then measure their semantic and spatial distances to construct an object graph, which is then processed by a graph neural network (GNN) to refine the objects' visual CNN features.

Object object-detection +1
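The graph-construction step described in this abstract can be sketched minimally: connect detected objects whose bounding-box centers lie within some distance, yielding the graph a GNN would then operate on. The distance measure, threshold, and function name are illustrative assumptions (the paper also uses semantic distances):

```python
import math

def build_object_graph(boxes, threshold=100.0):
    """Connect detected objects whose box centers are within `threshold`
    pixels of each other; a GNN would then refine each object's features
    over this graph. Boxes are (x1, y1, x2, y2) tuples."""
    centers = [((x1 + x2) / 2, (y1 + y2) / 2) for x1, y1, x2, y2 in boxes]
    edges = []
    for i in range(len(centers)):
        for j in range(i + 1, len(centers)):
            if math.dist(centers[i], centers[j]) <= threshold:
                edges.append((i, j))
    return edges

boxes = [(0, 0, 10, 10), (20, 0, 30, 10), (500, 500, 510, 510)]
print(build_object_graph(boxes))  # [(0, 1)]
```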

Personalized Subgraph Federated Learning

1 code implementation · 21 Jun 2022 · Jinheon Baek, Wonyong Jeong, Jiongdao Jin, Jaehong Yoon, Sung Ju Hwang

To this end, we introduce a new subgraph FL problem, personalized subgraph FL, which focuses on the joint improvement of the interrelated local GNNs rather than learning a single global model, and propose a novel framework, FEDerated Personalized sUBgraph learning (FED-PUB), to tackle it.

Federated Learning

KALA: Knowledge-Augmented Language Model Adaptation

1 code implementation · NAACL 2022 · Minki Kang, Jinheon Baek, Sung Ju Hwang

Pre-trained language models (PLMs) have achieved remarkable success on various natural language understanding tasks.

Domain Adaptation General Knowledge +6

Augmenting Document Representations for Dense Retrieval with Interpolation and Perturbation

1 code implementation · ACL 2022 · Soyeong Jeong, Jinheon Baek, Sukmin Cho, Sung Ju Hwang, Jong C. Park

Dense retrieval models, which aim at retrieving the most relevant document for an input query in a dense representation space, have gained considerable attention for their remarkable success.

Data Augmentation Passage Retrieval +1

Graph Self-supervised Learning with Accurate Discrepancy Learning

1 code implementation · 7 Feb 2022 · DongKi Kim, Jinheon Baek, Sung Ju Hwang

While contrastive learning can capture global graph-level similarities, its objective of maximizing the similarity between two differently perturbed graphs may result in representations that cannot discriminate between two similar graphs with different properties.

Contrastive Learning Link Prediction +4

Unsupervised Document Expansion for Information Retrieval with Stochastic Text Generation

1 code implementation · NAACL (sdp) 2021 · Soyeong Jeong, Jinheon Baek, ChaeHun Park, Jong C. Park

In this paper, we propose an Unsupervised Document Expansion with Generation (UDEG) framework with a pre-trained language model, which generates diverse supplementary sentences for the original document without using labels on query-document pairs for training.

Information Retrieval Language Modelling +2
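The expansion step described in this abstract can be sketched as follows: stochastically generate supplementary sentences for a document and append them before indexing. The `generate` interface and the toy stand-in generator are hypothetical; the paper samples from a pre-trained language model:

```python
import random

def expand_document(doc: str, generate, num_sentences: int = 3, seed: int = 0) -> str:
    """Append stochastically generated supplementary sentences to a document.
    `generate` stands in for a pre-trained LM's sampling-based decoder
    (a hypothetical interface, not the paper's actual code)."""
    rng = random.Random(seed)
    extras = [generate(doc, rng) for _ in range(num_sentences)]
    return doc + " " + " ".join(extras)

def toy_generator(doc, rng):
    # Toy stand-in for stochastic LM decoding (e.g. nucleus sampling).
    templates = ["This discusses {}.", "Related topic: {}.", "See also {}."]
    return rng.choice(templates).format(doc.split()[0].lower())

expanded = expand_document("Transformers process sequences in parallel.", toy_generator)
print(expanded)
```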

Task-Adaptive Neural Network Search with Meta-Contrastive Learning

1 code implementation · NeurIPS 2021 · Wonyong Jeong, Hayeon Lee, Gun Park, Eunyoung Hyung, Jinheon Baek, Sung Ju Hwang

To address such limitations, we introduce a novel problem of Neural Network Search (NNS), whose goal is to search for the optimal pretrained network for a novel dataset and constraints (e.g., the number of parameters), from a model zoo.

Contrastive Learning Meta-Learning +1

Accurate Learning of Graph Representations with Graph Multiset Pooling

1 code implementation · ICLR 2021 · Jinheon Baek, Minki Kang, Sung Ju Hwang

Graph neural networks have been widely used on modeling graph data, achieving impressive results on node classification and link prediction tasks.

Graph Classification Graph Clustering +5

Learning to Extrapolate Knowledge: Transductive Few-shot Out-of-Graph Link Prediction

1 code implementation · NeurIPS 2020 · Jinheon Baek, Dong Bok Lee, Sung Ju Hwang

For transductive link prediction, we further propose a stochastic embedding layer to model uncertainty in the link prediction between unseen entities.

graph construction Knowledge Graph Completion +2
