no code implementations • ICLR 2019 • Daeyoung Choi, Wonjong Rhee, Kyungeun Lee, Changho Shin
It has been common to argue or imply that a regularizer can be used to alter a statistical property of a hidden layer's representation and thus improve generalization or performance of deep networks.
no code implementations • 12 Apr 2024 • Changho Shin, Jitian Zhao, Sonia Cromp, Harit Vishwakarma, Frederic Sala
Popular zero-shot models suffer due to artifacts inherited from pretraining.
no code implementations • 5 Jan 2024 • Tzu-Heng Huang, Changho Shin, Sui Jiet Tay, Dyah Adila, Frederic Sala
We propose an approach for curating multimodal data that we used for our entry in the 2023 DataComp competition filtering track.
1 code implementation • 8 Sep 2023 • Dyah Adila, Changho Shin, Linrong Cai, Frederic Sala
Additionally, we demonstrate that RoboShot is compatible with a variety of pretrained models and language models, and we propose a way to further boost performance with a zero-shot adaptation variant.
1 code implementation • NeurIPS 2023 • Changho Shin, Sonia Cromp, Dyah Adila, Frederic Sala
Weak supervision enables efficient development of training sets by reducing the need for ground truth labels.
no code implementations • ICLR 2022 • Changho Shin, Winfred Li, Harit Vishwakarma, Nicholas Roberts, Frederic Sala
We apply this technique to important problems previously not tackled by WS frameworks, including learning to rank, regression, and learning in hyperbolic space.
no code implementations • 16 Nov 2018 • Changho Shin, Sunghwan Joo, Jaeryun Yim, Hyoseop Lee, Taesup Moon, Wonjong Rhee
In this work, we exploit the idea that appliances have on/off states and develop a deep network that uses this structure for further performance improvements.
no code implementations • ICLR 2018 • Daeyoung Choi, Changho Shin, Hyunghun Cho, Wonjong Rhee
The performance of a Deep Neural Network (DNN) heavily depends on the characteristics of its hidden layer representations.