Search Results for author: Dongsub Shim

Found 11 papers, 5 papers with code

Clear Preferences Leave Traces: Reference Model-Guided Sampling for Preference Learning

no code implementations · 25 Jan 2025 · Nirav Diwan, Tolga Ergen, Dongsub Shim, Honglak Lee

Direct Preference Optimization (DPO) has emerged as a de-facto approach for aligning language models with human preferences.

Math
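For background on the entry above (this is the standard DPO objective from the original DPO paper, not this paper's contribution): DPO trains the policy by contrasting its log-likelihood ratios against a fixed reference model on preferred and dispreferred responses,

$$
\mathcal{L}_{\mathrm{DPO}}(\pi_\theta;\pi_{\mathrm{ref}}) = -\,\mathbb{E}_{(x,\,y_w,\,y_l)\sim\mathcal{D}}\left[\log\sigma\!\left(\beta\log\frac{\pi_\theta(y_w\mid x)}{\pi_{\mathrm{ref}}(y_w\mid x)} - \beta\log\frac{\pi_\theta(y_l\mid x)}{\pi_{\mathrm{ref}}(y_l\mid x)}\right)\right]
$$

where $y_w$ and $y_l$ are the preferred and dispreferred responses, $\pi_{\mathrm{ref}}$ is the frozen reference model, and $\beta$ scales the implicit regularization toward it. The reference-model-guided sampling proposed in the paper builds on this setup; only the background objective is shown here.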

Map2Text: New Content Generation from Low-Dimensional Visualizations

no code implementations · 24 Dec 2024 · Xingjian Zhang, Ziyang Xiong, Shixuan Liu, Yutong Xie, Tolga Ergen, Dongsub Shim, Hua Xu, Honglak Lee, Qiaozhu Mei

Low-dimensional visualizations, or "projection maps" of datasets, are widely used across scientific research and creative industries as effective tools for interpreting large-scale and complex information.

Navigate

MASSW: A New Dataset and Benchmark Tasks for AI-Assisted Scientific Workflows

1 code implementation · 10 Jun 2024 · Xingjian Zhang, Yutong Xie, Jin Huang, Jinge Ma, Zhaoying Pan, Qijia Liu, Ziyang Xiong, Tolga Ergen, Dongsub Shim, Honglak Lee, Qiaozhu Mei

Scientific innovation relies on detailed workflows, which include critical steps such as analyzing literature, generating ideas, validating these ideas, interpreting results, and inspiring follow-up research.

Navigate

TOD-Flow: Modeling the Structure of Task-Oriented Dialogues

1 code implementation · 7 Dec 2023 · Sungryull Sohn, Yiwei Lyu, Anthony Liu, Lajanugen Logeswaran, Dong-Ki Kim, Dongsub Shim, Honglak Lee

Our TOD-Flow graph learns what a model can, should, and should not predict, effectively reducing the search space and providing a rationale for the model's prediction.

Dialog Act Classification · Response Generation
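To make the "can / should / should not" idea above concrete, here is a hypothetical sketch (invented act names, state fields, and constraint tables; not the paper's implementation or API) of pruning candidate dialog acts with such constraints before a model predicts the next act:

```python
# Toy sketch: prune candidate dialog acts with can/should-not constraints.
# Act names, state fields, and rules are invented for illustration only.
dialog_state = {"slots_filled": {"destination"}, "booking_confirmed": False}

CAN = {  # act -> precondition on the dialog state ("can the system do this now?")
    "request_date": lambda s: "destination" in s["slots_filled"],
    "confirm_booking": lambda s: {"destination", "date"} <= s["slots_filled"],
    "goodbye": lambda s: s["booking_confirmed"],
}
SHOULD_NOT = {"goodbye"}  # acts currently discouraged regardless of preconditions

def allowed_acts(state):
    """Keep only acts whose preconditions hold and that are not discouraged."""
    return [act for act, pre in CAN.items() if pre(state) and act not in SHOULD_NOT]

print(allowed_acts(dialog_state))  # -> ['request_date']
```

In TOD-Flow these constraints are learned as a graph from dialogues rather than hand-written; the sketch only shows how such constraints shrink the prediction search space and make the surviving choices easy to explain.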

Code Models are Zero-shot Precondition Reasoners

no code implementations · 16 Nov 2023 · Lajanugen Logeswaran, Sungryull Sohn, Yiwei Lyu, Anthony Zhe Liu, Dong-Ki Kim, Dongsub Shim, Moontae Lee, Honglak Lee

One of the fundamental skills required for an agent acting in an environment to complete tasks is the ability to understand what actions are plausible at any given point.

Decision Making · Sequential Decision Making

Preserving Linear Separability in Continual Learning by Backward Feature Projection

1 code implementation · CVPR 2023 · Qiao Gu, Dongsub Shim, Florian Shkurti

To achieve a better stability-plasticity trade-off, we propose Backward Feature Projection (BFP), a method for continual learning that allows the new features to change up to a learnable linear transformation of the old features.

Continual Learning · Knowledge Distillation
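As an interpretation of the sentence above (illustrative only, not the authors' code; dimensions and names are assumptions): new features are regularized to match the old ones only up to a learnable linear map, rather than being forced to stay identical.

```python
# Sketch of a backward-feature-projection-style regularizer (illustrative only).
import torch
import torch.nn as nn
import torch.nn.functional as F

feature_dim = 128
projection = nn.Linear(feature_dim, feature_dim, bias=False)  # learnable linear map A

def bfp_loss(new_feats: torch.Tensor, old_feats: torch.Tensor) -> torch.Tensor:
    """Penalize new features only when no linear map A can recover the old features."""
    return F.mse_loss(projection(new_feats), old_feats)

# Usage: old_feats come from a frozen copy of the model saved before the new task.
new_feats = torch.randn(32, feature_dim)           # current model's features
old_feats = torch.randn(32, feature_dim).detach()  # frozen previous model's features
loss = bfp_loss(new_feats, old_feats)              # added to the usual task loss
```

Because the map is linear, a classifier that separated the old features still separates them after composing with A, so linear separability of past classes can be preserved while the features adapt to the new task, which is the stability-plasticity trade-off the abstract refers to.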

Towards Diverse Evaluation of Class Incremental Learning: A Representation Learning Perspective

no code implementations · 16 Jun 2022 · Sungmin Cha, Jihwan Kwak, Dongsub Shim, Hyunwoo Kim, Moontae Lee, Honglak Lee, Taesup Moon

Class incremental learning (CIL) algorithms aim to continually learn new object classes from incrementally arriving data while not forgetting past learned classes.

class-incremental learning · Class Incremental Learning · +3
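To make the setting above concrete, here is a toy illustration of the class-incremental protocol (hypothetical numbers, no real dataset): classes arrive in disjoint groups, and after each step the model is evaluated on every class seen so far.

```python
# Toy illustration of the class-incremental learning (CIL) protocol; numbers are made up.
num_classes, classes_per_task = 10, 2
tasks = [list(range(i, i + classes_per_task))
         for i in range(0, num_classes, classes_per_task)]

seen = []
for step, new_classes in enumerate(tasks):
    seen += new_classes
    # In a real CIL run: train only on data for new_classes, then test on all of `seen`.
    # The accuracy drop on earlier classes is the forgetting that CIL methods try to limit.
    print(f"step {step}: train on classes {new_classes}, evaluate on classes {seen}")
```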

ExCon: Explanation-driven Supervised Contrastive Learning for Image Classification

1 code implementation · 28 Nov 2021 · Zhibo Zhang, Jongseong Jang, Chiheb Trabelsi, Ruiwen Li, Scott Sanner, Yeonjeong Jeong, Dongsub Shim

Contrastive learning has led to substantial improvements in the quality of learned embedding representations for tasks such as image classification.

Adversarial Robustness · Classification · +2

Online Class-Incremental Continual Learning with Adversarial Shapley Value

3 code implementations · 31 Aug 2020 · Dongsub Shim, Zheda Mai, Jihwan Jeong, Scott Sanner, Hyunwoo Kim, Jongseong Jang

As image-based deep learning becomes pervasive on every device, from cell phones to smart watches, there is a growing need to develop methods that continually learn from data while minimizing memory footprint and power consumption.

Continual Learning · Open-Ended Question Answering
