Search Results for author: Sungchul Kim

Found 40 papers, 11 papers with code

Few-Shot Class-Incremental Learning for Named Entity Recognition

no code implementations ACL 2022 Rui Wang, Tong Yu, Handong Zhao, Sungchul Kim, Subrata Mitra, Ruiyi Zhang, Ricardo Henao

In this work, we study a more challenging but practical problem, i.e., few-shot class-incremental learning for NER, where an NER model is trained with only a few labeled samples of the new classes, without forgetting knowledge of the old ones.

Few-Shot Class-Incremental Learning Incremental Learning +3

Hallucination Diversity-Aware Active Learning for Text Summarization

no code implementations 2 Apr 2024 Yu Xia, Xu Liu, Tong Yu, Sungchul Kim, Ryan A. Rossi, Anup Rao, Tung Mai, Shuai Li

Large Language Models (LLMs) have shown a propensity to generate hallucinated outputs, i.e., texts that are factually incorrect or unsupported.

Active Learning Hallucination +1

Which LLM to Play? Convergence-Aware Online Model Selection with Time-Increasing Bandits

no code implementations 11 Mar 2024 Yu Xia, Fang Kong, Tong Yu, Liya Guo, Ryan A. Rossi, Sungchul Kim, Shuai Li

In this paper, we propose a time-increasing bandit algorithm TI-UCB, which effectively predicts the increase of model performances due to finetuning and efficiently balances exploration and exploitation in model selection.

Change Detection Model Selection

Learning to Reduce: Optimal Representations of Structured Data in Prompting Large Language Models

no code implementations 22 Feb 2024 Younghun Lee, Sungchul Kim, Tong Yu, Ryan A. Rossi, Xiang Chen

The model learns to reduce the input context using On-Policy Reinforcement Learning and aims to improve the reasoning performance of a fixed LLM.

Language Modelling

Self-Debiasing Large Language Models: Zero-Shot Recognition and Reduction of Stereotypes

no code implementations 3 Feb 2024 Isabel O. Gallegos, Ryan A. Rossi, Joe Barrow, Md Mehrab Tanjim, Tong Yu, Hanieh Deilamsalehy, Ruiyi Zhang, Sungchul Kim, Franck Dernoncourt

Large language models (LLMs) have shown remarkable advances in language generation and understanding but are also prone to exhibiting harmful social biases.

Text Generation Zero-Shot Learning

Augment before You Try: Knowledge-Enhanced Table Question Answering via Table Expansion

1 code implementation 28 Jan 2024 Yujian Liu, Jiabao Ji, Tong Yu, Ryan Rossi, Sungchul Kim, Handong Zhao, Ritwik Sinha, Yang Zhang, Shiyu Chang

Table question answering is a popular task that assesses a model's ability to understand and interact with structured data.

Question Answering

GPT-4 as an Effective Zero-Shot Evaluator for Scientific Figure Captions

no code implementations 23 Oct 2023 Ting-Yao Hsu, Chieh-Yang Huang, Ryan Rossi, Sungchul Kim, C. Lee Giles, Ting-Hao K. Huang

We first constructed SCICAP-EVAL, a human evaluation dataset that contains human judgments for 3,600 scientific figure captions, both original and machine-made, for 600 arXiv figures.

Bias and Fairness in Large Language Models: A Survey

1 code implementation 2 Sep 2023 Isabel O. Gallegos, Ryan A. Rossi, Joe Barrow, Md Mehrab Tanjim, Sungchul Kim, Franck Dernoncourt, Tong Yu, Ruiyi Zhang, Nesreen K. Ahmed

Rapid advancements of large language models (LLMs) have enabled the processing, understanding, and generation of human-like text, with increasing integration into systems that touch our social sphere.

counterfactual Fairness

Structured Dynamic Pricing: Optimal Regret in a Global Shrinkage Model

no code implementations 28 Mar 2023 Rashmi Ranjan Bhuyan, Adel Javanmard, Sungchul Kim, Gourab Mukherjee, Ryan A. Rossi, Tong Yu, Handong Zhao

We consider dynamic pricing strategies in a streamed longitudinal data set-up where the objective is to maximize, over time, the cumulative profit across a large number of customer segments.

Graph Learning with Localized Neighborhood Fairness

no code implementations 22 Dec 2022 April Chen, Ryan Rossi, Nedim Lipka, Jane Hoffswell, Gromit Chan, Shunan Guo, Eunyee Koh, Sungchul Kim, Nesreen K. Ahmed

Learning fair graph representations for downstream applications is becoming increasingly important, but existing work has mostly focused on improving fairness at the global level by modifying either the graph structure or the objective function, without taking into account the local neighborhood of a node.

Fairness Graph Learning +2

Direct Embedding of Temporal Network Edges via Time-Decayed Line Graphs

no code implementations 30 Sep 2022 Sudhanshu Chanpuriya, Ryan A. Rossi, Sungchul Kim, Tong Yu, Jane Hoffswell, Nedim Lipka, Shunan Guo, Cameron Musco

We present a simple method that avoids both shortcomings: construct the line graph of the network, which includes a node for each interaction, and weigh the edges of this graph based on the difference in time between interactions.

Edge Classification Link Prediction
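The line-graph construction described in the snippet above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the exponential decay and the `decay` parameter are assumptions (the paper only says edges are weighted by the time difference between interactions).

```python
import math
from itertools import combinations

def time_decayed_line_graph(interactions, decay=1.0):
    """Build the line graph of a temporal network: one node per
    interaction (u, v, t); two interactions are connected if they
    share an endpoint, with a weight that decays with the time gap.
    The exponential decay form is an illustrative assumption."""
    weights = {}
    for i, j in combinations(range(len(interactions)), 2):
        u1, v1, t1 = interactions[i]
        u2, v2, t2 = interactions[j]
        if {u1, v1} & {u2, v2}:  # interactions share an endpoint
            weights[(i, j)] = math.exp(-decay * abs(t1 - t2))
    return weights

# Interactions 0 and 1 share node "b"; 1 and 2 share node "c".
g = time_decayed_line_graph([("a", "b", 0.0), ("b", "c", 1.0), ("c", "d", 5.0)])
```

Each key in `g` is a pair of interaction indices; static edge-embedding methods can then be applied directly to this weighted graph.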

Bundle MCR: Towards Conversational Bundle Recommendation

1 code implementation 26 Jul 2022 Zhankui He, Handong Zhao, Tong Yu, Sungchul Kim, Fan Du, Julian McAuley

MCR, which uses a conversational paradigm to elicit user interests by asking about user preferences on tags (e.g., categories or attributes) and handling user feedback across multiple rounds, is an emerging recommendation setting to acquire user feedback and narrow down the output space, but it has not been explored in the context of bundle recommendation.

Recommendation Systems

CGC: Contrastive Graph Clustering for Community Detection and Tracking

1 code implementation 5 Apr 2022 Namyong Park, Ryan Rossi, Eunyee Koh, Iftikhar Ahamath Burhanuddin, Sungchul Kim, Fan Du, Nesreen Ahmed, Christos Faloutsos

Especially, deep graph clustering (DGC) methods have successfully extended deep clustering to graph-structured data by learning node representations and cluster assignments in a joint optimization framework.

Clustering Community Detection +4

Automatic Forecasting via Meta-Learning

no code implementations 29 Sep 2021 Mustafa Abdallah, Ryan Rossi, Kanak Mahadik, Sungchul Kim, Handong Zhao, Haoliang Wang, Saurabh Bagchi

In this work, we develop techniques for fast automatic selection of the best forecasting model for a new unseen time-series dataset, without having to first train (or evaluate) all the models on the new time-series data to select the best one.

Meta-Learning Time Series +1

Influence-guided Data Augmentation for Neural Tensor Completion

1 code implementation 23 Aug 2021 Sejoon Oh, Sungchul Kim, Ryan A. Rossi, Srijan Kumar

In this paper, we propose DAIN, a general data augmentation framework that enhances the prediction accuracy of neural tensor completion methods.

Data Augmentation Imputation +2

Personalized Visualization Recommendation

no code implementations 12 Feb 2021 Xin Qian, Ryan A. Rossi, Fan Du, Sungchul Kim, Eunyee Koh, Sana Malik, Tak Yeon Lee, Nesreen K. Ahmed

Visualization recommendation work has focused solely on scoring visualizations based on the underlying dataset and not the actual user and their past visualization feedback.

Learning Contextualized Knowledge Graph Structures for Commonsense Reasoning

no code implementations 1 Jan 2021 Jun Yan, Mrigank Raman, Tianyu Zhang, Ryan Rossi, Handong Zhao, Sungchul Kim, Nedim Lipka, Xiang Ren

Recently, neural-symbolic architectures have achieved success on commonsense reasoning through effectively encoding relational structures retrieved from external knowledge graphs (KGs) and have obtained state-of-the-art results in tasks such as (commonsense) question answering and natural language inference.

Knowledge Graphs Natural Language Inference +1

Heterogeneous Graphlets

no code implementations 23 Oct 2020 Ryan A. Rossi, Nesreen K. Ahmed, Aldo Carranza, David Arbour, Anup Rao, Sungchul Kim, Eunyee Koh

Notably, since typed graphlets are more general than colored graphlets (and untyped graphlets), the counts of various typed graphlets can be combined to obtain the counts of the much simpler notion of colored graphlets.

Graph Deep Factors for Forecasting

no code implementations 14 Oct 2020 Hongjie Chen, Ryan A. Rossi, Kanak Mahadik, Sungchul Kim, Hoda Eldardiry

GraphDF is a hybrid forecasting framework that consists of a relational global model and a relational local model.

Computational Efficiency Time Series +1

ML-based Visualization Recommendation: Learning to Recommend Visualizations from Data

no code implementations 25 Sep 2020 Xin Qian, Ryan A. Rossi, Fan Du, Sungchul Kim, Eunyee Koh, Sana Malik, Tak Yeon Lee, Joel Chan

Finally, we observed a strong preference by the human experts in our user study towards the visualizations recommended by our ML-based system as opposed to the rule-based system (5.92 on a 7-point Likert scale compared to only 3.45).

From Static to Dynamic Node Embeddings

no code implementations 21 Sep 2020 Di Jin, Sungchul Kim, Ryan A. Rossi, Danai Koutra

While previous work on dynamic modeling and embedding has focused on representing a stream of timestamped edges using a time-series of graphs based on a specific time-scale (e.g., 1 month), we propose the notion of an $\epsilon$-graph time-series that uses a fixed number of edges for each graph, and show its superiority over the time-scale representation used in previous work.

Time Series Time Series Analysis
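The $\epsilon$-graph idea in the snippet above (fixed number of edges per graph rather than a fixed time window) can be sketched minimally. The handling of a trailing partial snapshot is an assumption made here for illustration:

```python
def epsilon_graph_series(edge_stream, eps):
    """Partition a time-ordered stream of edges into snapshots of
    eps edges each (an illustrative reading of the epsilon-graph
    time-series; the trailing partial snapshot is kept)."""
    return [edge_stream[i:i + eps] for i in range(0, len(edge_stream), eps)]

# Five edges with eps=2 yield snapshots of sizes 2, 2, and 1.
snaps = epsilon_graph_series([(0, 1), (1, 2), (2, 3), (3, 4), (4, 5)], eps=2)
```

Unlike a fixed time-scale split, every snapshot here carries the same amount of structural information regardless of how bursty the edge arrivals are.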

On Proximity and Structural Role-based Embeddings in Networks: Misconceptions, Techniques, and Applications

no code implementations 22 Aug 2019 Ryan A. Rossi, Di Jin, Sungchul Kim, Nesreen K. Ahmed, Danai Koutra, John Boaz Lee

Unfortunately, recent work has sometimes confused the notions of structural roles and communities (based on proximity), leading to misleading or incorrect claims about the capabilities of network embedding methods.

Misconceptions Network Embedding

Higher-Order Ranking and Link Prediction: From Closing Triangles to Closing Higher-Order Motifs

no code implementations 12 Jun 2019 Ryan A. Rossi, Anup Rao, Sungchul Kim, Eunyee Koh, Nesreen K. Ahmed, Gang Wu

In this work, we investigate higher-order network motifs and develop techniques based on the notion of closing higher-order motifs that move beyond closing simple triangles.

Link Prediction

Figure Captioning with Reasoning and Sequence-Level Training

no code implementations 7 Jun 2019 Charles Chen, Ruiyi Zhang, Eunyee Koh, Sungchul Kim, Scott Cohen, Tong Yu, Ryan Rossi, Razvan Bunescu

In this work, we investigate the problem of figure captioning where the goal is to automatically generate a natural language description of the figure.

Image Captioning

Dynamic Node Embeddings from Edge Streams

no code implementations 12 Apr 2019 John Boaz Lee, Giang Nguyen, Ryan A. Rossi, Nesreen K. Ahmed, Eunyee Koh, Sungchul Kim

In this work, we propose using the notion of temporal walks for learning dynamic embeddings from temporal networks.

Representation Learning
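The defining constraint of a temporal walk mentioned in the snippet above is that edge timestamps along the walk must be non-decreasing, so the walk respects time. A minimal sketch, assuming uniform sampling over time-valid neighbors (the actual sampling scheme in the paper may differ):

```python
import random

def temporal_walk(adj, start, length, rng=random):
    """Sample a temporal walk: each step must use an edge whose
    timestamp is no earlier than the previous step's timestamp.
    adj maps node -> list of (neighbor, timestamp) pairs.
    Uniform sampling over valid edges is an illustrative choice."""
    walk, node, t = [start], start, float("-inf")
    for _ in range(length):
        options = [e for e in adj.get(node, []) if e[1] >= t]
        if not options:  # no time-respecting edge to continue on
            break
        node, t = rng.choice(options)
        walk.append(node)
    return walk

# a->b happens at time 1, b->c at time 2, so the walk can traverse both.
adj = {"a": [("b", 1)], "b": [("c", 2)]}
walk = temporal_walk(adj, "a", length=5)
```

If the b->c edge instead occurred before a->b, the walk would stop at b, since continuing would violate the time ordering.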

Heterogeneous Network Motifs

no code implementations 28 Jan 2019 Ryan A. Rossi, Nesreen K. Ahmed, Aldo Carranza, David Arbour, Anup Rao, Sungchul Kim, Eunyee Koh

To address this problem, we propose a fast, parallel, and space-efficient framework for counting typed graphlets in large networks.

Latent Network Summarization: Bridging Network Embedding and Summarization

1 code implementation 11 Nov 2018 Di Jin, Ryan Rossi, Danai Koutra, Eunyee Koh, Sungchul Kim, Anup Rao

Motivated by the computational and storage challenges that dense embeddings pose, we introduce the problem of latent network summarization that aims to learn a compact, latent representation of the graph structure with dimensionality that is independent of the input graph size (i.e., #nodes and #edges), while retaining the ability to derive node representations on the fly.

Social and Information Networks

Higher-order Graph Convolutional Networks

no code implementations 12 Sep 2018 John Boaz Lee, Ryan A. Rossi, Xiangnan Kong, Sungchul Kim, Eunyee Koh, Anup Rao

Experiments show that our proposed method is able to achieve state-of-the-art results on the semi-supervised node classification task.

General Classification Graph Attention +1

Attention Models in Graphs: A Survey

1 code implementation 20 Jul 2018 John Boaz Lee, Ryan A. Rossi, Sungchul Kim, Nesreen K. Ahmed, Eunyee Koh

However, in the real world, graphs can be both large - with many complex patterns - and noisy, which can pose a problem for effective graph mining.

Graph Attention Graph Classification +2

HONE: Higher-Order Network Embeddings

no code implementations 28 Jan 2018 Ryan A. Rossi, Nesreen K. Ahmed, Eunyee Koh, Sungchul Kim, Anup Rao, Yasin Abbasi Yadkori

This paper describes a general framework for learning Higher-Order Network Embeddings (HONE) from graph data based on network motifs.
