Search Results for author: Sungbin Shin

Found 3 papers, 3 papers with code

Rethinking Pruning Large Language Models: Benefits and Pitfalls of Reconstruction Error Minimization

1 code implementation · 21 Jun 2024 · Sungbin Shin, Wonpyo Park, Jaeho Lee, Namhoon Lee

This work suggests fundamentally rethinking the current practice of pruning large language models (LLMs).

Critical Influence of Overparameterization on Sharpness-aware Minimization

1 code implementation · 29 Nov 2023 · Sungbin Shin, Dongyeop Lee, Maksym Andriushchenko, Namhoon Lee

Training overparameterized neural networks often yields solutions with varying generalization capabilities, even when achieving similar training losses.

Attribute

A Closer Look at the Intervention Procedure of Concept Bottleneck Models

1 code implementation · 28 Feb 2023 · Sungbin Shin, Yohan Jo, Sungsoo Ahn, Namhoon Lee

Concept bottleneck models (CBMs) are a class of interpretable neural network models that predict the target response of a given input based on its high-level concepts.

Fairness
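
The CBM paper above describes models that predict the target response from predicted high-level concepts, which also enables test-time intervention on those concepts. Below is a minimal sketch of that idea, assuming a PyTorch setup; the layer sizes, concept and class counts, and the NaN-masking convention for marking intervened concepts are illustrative assumptions, not the paper's implementation.

```python
# Minimal concept bottleneck model sketch: input -> concepts -> label,
# with optional replacement of predicted concepts by expert-supplied values.
import torch
import torch.nn as nn

class ConceptBottleneckModel(nn.Module):
    def __init__(self, input_dim=64, num_concepts=10, num_classes=3):
        super().__init__()
        # x -> c: predicts interpretable high-level concepts from the input
        self.concept_predictor = nn.Sequential(
            nn.Linear(input_dim, 128), nn.ReLU(), nn.Linear(128, num_concepts)
        )
        # c -> y: predicts the target response from the concepts alone
        self.label_predictor = nn.Linear(num_concepts, num_classes)

    def forward(self, x, intervened_concepts=None):
        concepts = torch.sigmoid(self.concept_predictor(x))
        if intervened_concepts is not None:
            # Intervention: entries that are not NaN are treated as
            # ground-truth concept values and override the predictions.
            mask = ~torch.isnan(intervened_concepts)
            concepts = torch.where(mask, intervened_concepts, concepts)
        return self.label_predictor(concepts), concepts

# Usage: intervene on the first concept of a single input (hypothetical values).
model = ConceptBottleneckModel()
x = torch.randn(1, 64)
corrections = torch.full((1, 10), float("nan"))
corrections[0, 0] = 1.0  # expert marks concept 0 as present
logits, concepts = model(x, intervened_concepts=corrections)
```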
