no code implementations • 2 Mar 2025 • Seonghyeon Lee, Heejae Chon, Joonwon Jang, Dongha Lee, Hwanjo Yu
In this work, we highlight the diversity of code generated by LMs as a critical criterion for evaluating their code generation capabilities.
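A minimal sketch of one way to quantify diversity across code samples from an LM. The specific metric below (mean pairwise string dissimilarity over sampled solutions) is an illustrative assumption, not necessarily the measure proposed in the paper.

```python
# Hedged sketch: score how spread-out a set of generated programs is.
from difflib import SequenceMatcher
from itertools import combinations

def pairwise_diversity(solutions: list[str]) -> float:
    """Mean pairwise dissimilarity (1 - similarity ratio) among generated programs."""
    if len(solutions) < 2:
        return 0.0
    scores = [
        1.0 - SequenceMatcher(None, a, b).ratio()
        for a, b in combinations(solutions, 2)
    ]
    return sum(scores) / len(scores)

# Usage: sample several completions for the same prompt, then score their spread.
samples = ["def add(a, b):\n    return a + b", "def add(x, y):\n    return x + y"]
print(pairwise_diversity(samples))
```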
no code implementations • 20 Sep 2024 • Seonghyeon Lee, Suyeon Kim, Joonwon Jang, Heejae Chon, Dongha Lee, Hwanjo Yu
Our experimental results show the effectiveness of combining the base models' ability to utilize auxiliary functions with their instruction-following ability.
1 code implementation • 24 Aug 2024 • Heejae Chon, Seonghyeon Lee, Jinyoung Yeo, Dongha Lee
Language models (LMs) have exhibited impressive abilities in generating code from natural language requirements.
no code implementations • 15 Mar 2024 • Seonghyeon Lee, Sanghwan Jang, Seongbo Jang, Dongha Lee, Hwanjo Yu
However, our analysis also reveals that the models underutilize auxiliary function calls, suggesting a future direction: improving their implementations by eliciting the auxiliary-function-calling ability already encoded in the models.
1 code implementation • 27 Feb 2024 • Seongbo Jang, Seonghyeon Lee, Hwanjo Yu
As language models are often deployed as chatbot assistants, it is desirable for models to engage in conversations in a user's first language.
1 code implementation • 27 Feb 2023 • Su Kim, Dongha Lee, SeongKu Kang, Seonghyeon Lee, Hwanjo Yu
In this paper, motivated by this observation, we propose TopExpert, which leverages topology-specific prediction models (referred to as experts), each of which is responsible for a group of molecules sharing similar topological semantics.
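A minimal sketch of a mixture of topology-specific experts in the spirit of TopExpert. The layer sizes and the soft gating over topology clusters are illustrative assumptions; the paper's actual architecture and clustering procedure may differ.

```python
import torch
import torch.nn as nn

class TopologyMoE(nn.Module):
    def __init__(self, in_dim: int, num_experts: int, num_classes: int):
        super().__init__()
        # One prediction head per group of molecules with similar topology.
        self.experts = nn.ModuleList(
            nn.Linear(in_dim, num_classes) for _ in range(num_experts)
        )
        # Gate softly assigns each molecule embedding to topology clusters.
        self.gate = nn.Linear(in_dim, num_experts)

    def forward(self, mol_emb: torch.Tensor) -> torch.Tensor:
        weights = torch.softmax(self.gate(mol_emb), dim=-1)          # (B, E)
        logits = torch.stack([e(mol_emb) for e in self.experts], 1)  # (B, E, C)
        return (weights.unsqueeze(-1) * logits).sum(dim=1)           # (B, C)
```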
no code implementations • 18 Oct 2022 • Dongha Lee, Jiaming Shen, Seonghyeon Lee, Susik Yoon, Hwanjo Yu, Jiawei Han
Topic taxonomies display hierarchical topic structures of a text corpus and provide topical knowledge to enhance various NLP applications.
1 code implementation • ACL 2022 • Seonghyeon Lee, Dongha Lee, Seongbo Jang, Hwanjo Yu
Finally, we propose CLRCMD, a contrastive learning framework that optimizes the RCMD of sentence pairs, enhancing both the quality of sentence similarity and its interpretation.
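A minimal sketch of a contrastive objective over a sentence-pair similarity, in the spirit of CLRCMD. The token-level best-match similarity below is a simplified stand-in for RCMD, and the in-batch InfoNCE formulation is an illustrative assumption rather than the paper's exact loss.

```python
import torch
import torch.nn.functional as F

def relaxed_similarity(a_tok: torch.Tensor, b_tok: torch.Tensor) -> torch.Tensor:
    """Token embeddings (La, d), (Lb, d) -> scalar: average best-match cosine similarity."""
    a = F.normalize(a_tok, dim=-1)
    b = F.normalize(b_tok, dim=-1)
    sim = a @ b.T                                   # (La, Lb) token-to-token cosine
    return 0.5 * (sim.max(dim=1).values.mean() + sim.max(dim=0).values.mean())

def contrastive_loss(anchors, positives, temperature: float = 0.05) -> torch.Tensor:
    """In-batch contrastive loss: each anchor's positive vs. all other sentences."""
    sims = torch.stack([
        torch.stack([relaxed_similarity(a, p) for p in positives]) for a in anchors
    ]) / temperature                                # (B, B) similarity logits
    labels = torch.arange(len(anchors))
    return F.cross_entropy(sims, labels)
```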
no code implementations • 22 Nov 2021 • Dongha Lee, Su Kim, Seonghyeon Lee, Chanyoung Park, Hwanjo Yu
With the help of a global readout operation that simply aggregates all node (or node-cluster) representations, existing GNN classifiers obtain a graph-level representation of an input graph and predict its class label from that representation.
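A minimal sketch of the global readout described above: node representations from any GNN encoder are aggregated (here by a simple mean) into one graph-level vector, which a classifier maps to a class label. The layer sizes and the choice of mean pooling are illustrative assumptions.

```python
import torch
import torch.nn as nn

class ReadoutClassifier(nn.Module):
    def __init__(self, node_dim: int, num_classes: int):
        super().__init__()
        self.head = nn.Linear(node_dim, num_classes)

    def forward(self, node_reprs: torch.Tensor) -> torch.Tensor:
        # node_reprs: (num_nodes, node_dim) computed by a GNN encoder.
        graph_repr = node_reprs.mean(dim=0)   # global readout: aggregate all nodes
        return self.head(graph_repr)          # class logits for the whole graph
```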
no code implementations • ACL 2021 • Seonghyeon Lee, Dongha Lee, Hwanjo Yu
Recent studies on neural networks with pre-trained weights (i.e., BERT) have mainly focused on a low-dimensional subspace, where the embedding vectors computed from input words (or their contexts) are located.
1 code implementation • 14 May 2021 • Seonghyeon Lee, Dongha Lee, Hwanjo Yu
Recent studies on neural networks with pre-trained weights (i.e., BERT) have mainly focused on a low-dimensional subspace, where the embedding vectors computed from input words (or their contexts) are located.
1 code implementation • 2 Apr 2021 • Dongha Lee, Seonghyeon Lee, Hwanjo Yu
With the increasing availability of time series data, predicting their class labels has been one of the most important challenges in a wide range of disciplines.
no code implementations • 26 Nov 2019 • Seonghyeon Lee, Chanyoung Park, Hwanjo Yu
We view heterogeneous network embedding as simultaneously solving multiple tasks, where each task corresponds to a relation type in the network.
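A minimal sketch of treating heterogeneous network embedding as multi-task learning, with one task (loss term) per relation type over shared node embeddings. The bilinear scorer and binary edge labels are illustrative assumptions, not the paper's exact objective.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiRelationEmbedding(nn.Module):
    def __init__(self, num_nodes: int, dim: int, relation_types: list[str]):
        super().__init__()
        self.node_emb = nn.Embedding(num_nodes, dim)         # shared across all tasks
        # One scoring matrix per relation type (one "task" each).
        self.rel = nn.ParameterDict(
            {r: nn.Parameter(torch.eye(dim)) for r in relation_types}
        )

    def score(self, rel: str, heads: torch.Tensor, tails: torch.Tensor) -> torch.Tensor:
        h, t = self.node_emb(heads), self.node_emb(tails)
        return ((h @ self.rel[rel]) * t).sum(dim=-1)          # edge plausibility

    def loss(self, batches: dict[str, tuple[torch.Tensor, torch.Tensor, torch.Tensor]]):
        # batches: relation type -> (head ids, tail ids, 0/1 edge labels)
        return sum(
            F.binary_cross_entropy_with_logits(self.score(r, h, t), y.float())
            for r, (h, t, y) in batches.items()
        )
```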