Search Results for author: Hyoungseob Park

Found 15 papers, 10 papers with code

WorDepth: Variational Language Prior for Monocular Depth Estimation

1 code implementation • 4 Apr 2024 • Ziyao Zeng, Daniel Wang, Fengyu Yang, Hyoungseob Park, Yangchao Wu, Stefano Soatto, Byung-Woo Hong, Dong Lao, Alex Wong

To test this, we focus on monocular depth estimation, the problem of predicting a dense depth map from a single image, but with an additional text caption describing the scene.

3D Reconstruction Monocular Depth Estimation

Test-Time Adaptation for Depth Completion

no code implementations • 5 Feb 2024 • Hyoungseob Park, Anjali Gupta, Alex Wong

During test time, sparse depth features are projected using this map as a proxy for source-domain features, and are used as guidance to train a set of auxiliary parameters (i.e., an adaptation layer) that aligns image and sparse depth features from the target test domain to those of the source domain.

Depth Completion Test-time Adaptation
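The alignment step described above can be sketched as fitting a small adaptation layer at test time. The snippet below is a minimal NumPy illustration under assumptions of ours: a linear adaptation layer and a stored source-feature proxy, not the paper's actual architecture or training loop:

```python
import numpy as np

def fit_adaptation_layer(target_feats, source_proxy, lr=0.1, steps=200):
    """Fit a linear adaptation layer W so that projected target-domain
    features match a proxy for source-domain features.
    Minimizes || target_feats @ W - source_proxy ||^2 by gradient descent."""
    d = target_feats.shape[1]
    W = np.eye(d)  # identity init: adaptation starts as a no-op
    n = len(target_feats)
    for _ in range(steps):
        residual = target_feats @ W - source_proxy
        W -= lr * (2.0 / n) * target_feats.T @ residual
    return W
```

With the layer fitted on the test batch, the frozen downstream model consumes the aligned features instead of the raw target-domain ones.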

AugUndo: Scaling Up Augmentations for Unsupervised Depth Completion

no code implementations • 15 Oct 2023 • Yangchao Wu, Tian Yu Liu, Hyoungseob Park, Stefano Soatto, Dong Lao, Alex Wong

The sparse depth modality has seen even less, as intensity transformations alter the scale of the 3D scene, and geometric transformations may decimate the sparse points during resampling.

Data Augmentation Depth Completion +1

Divide-and-Conquer the NAS puzzle in Resource Constrained Federated Learning Systems

no code implementations • 11 May 2023 • Yeshwanth Venkatesha, Youngeun Kim, Hyoungseob Park, Priyadarshini Panda

In this paper, we propose DC-NAS -- a divide-and-conquer approach that performs supernet-based Neural Architecture Search (NAS) in a federated system by systematically sampling the search space.

Federated Learning Neural Architecture Search +1
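The divide-and-conquer idea of systematically sampling the search space across a federated system might look like the toy partitioner below. The function name and the round-robin scheme are illustrative assumptions of ours, not DC-NAS's actual sampling strategy:

```python
def partition_search_space(candidate_ops, num_clients):
    """Split a list of candidate operations into disjoint chunks so that
    each federated client explores a different slice of the supernet
    search space (round-robin assignment, for illustration only)."""
    chunks = [[] for _ in range(num_clients)]
    for i, op in enumerate(candidate_ops):
        chunks[i % num_clients].append(op)
    return chunks
```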

Uncovering the Representation of Spiking Neural Networks Trained with Surrogate Gradient

1 code implementation • 25 Apr 2023 • Yuhang Li, Youngeun Kim, Hyoungseob Park, Priyadarshini Panda

However, some essential questions pertaining to SNNs remain little studied: Do SNNs trained with surrogate gradients learn different representations from traditional Artificial Neural Networks (ANNs)?
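A standard tool for comparing learned representations across two networks is centered kernel alignment (CKA). The snippet below is a generic linear-CKA sketch for context, not necessarily the paper's exact analysis code:

```python
import numpy as np

def linear_cka(X, Y):
    """Linear centered kernel alignment between two representation
    matrices of shape (n_samples, n_features). Returns a similarity
    in [0, 1], with 1 for identical representations (up to rotation
    and isotropic scaling)."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    num = np.linalg.norm(X.T @ Y, ord="fro") ** 2
    den = np.linalg.norm(X.T @ X, ord="fro") * np.linalg.norm(Y.T @ Y, ord="fro")
    return num / den
```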

Exploring Temporal Information Dynamics in Spiking Neural Networks

1 code implementation • 26 Nov 2022 • Youngeun Kim, Yuhang Li, Hyoungseob Park, Yeshwanth Venkatesha, Anna Hambitzer, Priyadarshini Panda

After training, we observe that information becomes highly concentrated in the first few timesteps, a phenomenon we refer to as temporal information concentration.
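A crude way to see such concentration is to measure how much each timestep contributes to the readout. The magnitude-share proxy below is a toy illustration of ours; the paper's actual information estimator differs:

```python
import numpy as np

def per_timestep_importance(spikes, readout_weights):
    """Share of total readout magnitude contributed by each timestep.
    spikes: (T, n_in) binary spike trains; readout_weights: (n_in, n_out).
    Returns a length-T vector summing to 1."""
    contribs = np.abs(spikes @ readout_weights)  # (T, n_out)
    per_t = contribs.sum(axis=1)
    return per_t / per_t.sum()
```

Under temporal information concentration, such a profile would be heavily skewed toward the earliest timesteps.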

Exploring Lottery Ticket Hypothesis in Spiking Neural Networks

1 code implementation • 4 Jul 2022 • Youngeun Kim, Yuhang Li, Hyoungseob Park, Yeshwanth Venkatesha, Ruokai Yin, Priyadarshini Panda

To scale pruning techniques up to deep SNNs, we investigate the Lottery Ticket Hypothesis (LTH), which states that dense networks contain smaller subnetworks (i.e., winning tickets) that achieve performance comparable to the dense networks.
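The pruning step at the core of a lottery-ticket search can be sketched as global magnitude pruning. This is a generic sketch of the standard LTH recipe, not the paper's SNN-specific procedure:

```python
import numpy as np

def magnitude_prune_mask(weights, sparsity):
    """Return a binary mask zeroing out the smallest-|w| fraction of
    weights. Retraining rewound weights under such a mask is how a
    candidate 'winning ticket' is evaluated."""
    flat = np.abs(weights).ravel()
    k = int(len(flat) * sparsity)
    if k == 0:
        return np.ones_like(weights)
    threshold = np.partition(flat, k)[k]  # k-th smallest magnitude
    return (np.abs(weights) >= threshold).astype(weights.dtype)
```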

Addressing Client Drift in Federated Continual Learning with Adaptive Optimization

no code implementations • 24 Mar 2022 • Yeshwanth Venkatesha, Youngeun Kim, Hyoungseob Park, Yuhang Li, Priyadarshini Panda

However, little attention has been paid to the additional challenges that emerge when federated aggregation is performed in a continual learning system.

Continual Learning Federated Learning +1

Neuromorphic Data Augmentation for Training Spiking Neural Networks

1 code implementation • 11 Mar 2022 • Yuhang Li, Youngeun Kim, Hyoungseob Park, Tamar Geller, Priyadarshini Panda

In an effort to minimize this generalization gap, we propose Neuromorphic Data Augmentation (NDA), a family of geometric augmentations specifically designed for event-based datasets, with the goal of significantly stabilizing SNN training and reducing the generalization gap between training and test performance.

 Ranked #1 on Event data classification on CIFAR10-DVS (using extra training data)

Contrastive Learning Data Augmentation +1
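In this spirit, a geometric event-data augmentation can be sketched as randomly choosing among flip, roll, and cutout on a stack of event frames. The specific ops and parameters below are illustrative, not NDA's exact policy:

```python
import numpy as np

def augment_event_frames(frames, rng):
    """Apply one randomly chosen geometric augmentation to event frames
    of shape (T, C, H, W). All ops preserve the shape and the sparse,
    binary character of event data."""
    choice = rng.integers(3)
    if choice == 0:                                   # horizontal flip
        return frames[..., ::-1].copy()
    if choice == 1:                                   # translation via roll
        dy, dx = rng.integers(-4, 5, size=2)
        return np.roll(frames, (dy, dx), axis=(-2, -1))
    out = frames.copy()                               # cutout: zero a patch
    h, w = frames.shape[-2:]
    y, x = rng.integers(h - 3), rng.integers(w - 3)
    out[..., y:y + 3, x:x + 3] = 0
    return out
```

Because the ops are purely geometric, they avoid the intensity-style transforms that are ill-defined on binary event streams.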

Neural Architecture Search for Spiking Neural Networks

1 code implementation • 23 Jan 2022 • Youngeun Kim, Yuhang Li, Hyoungseob Park, Yeshwanth Venkatesha, Priyadarshini Panda

Interestingly, SNASNet, found by our search algorithm, achieves higher performance with backward connections, demonstrating the importance of designing SNN architectures that suitably use temporal information.

Neural Architecture Search

Robust Federated Learning with Noisy Labels

1 code implementation • 3 Dec 2020 • Seunghan Yang, Hyoungseob Park, Junyoung Byun, Changick Kim

To solve these problems, we introduce a novel federated learning scheme in which the server cooperates with local models to maintain consistent decision boundaries by interchanging class-wise centroids.

Federated Learning Learning with noisy labels
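The centroid interchange can be sketched in two pieces: the server averages class-wise feature centroids across clients, and each client is penalized for drifting from the global centroids. The names and the squared-error penalty here are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def aggregate_centroids(client_centroids):
    """Server step: average class-wise feature centroids, each of
    shape (num_classes, feat_dim), across clients."""
    return np.mean(np.stack(client_centroids), axis=0)

def centroid_alignment_penalty(local_centroids, global_centroids):
    """Client-side penalty pulling local class centroids toward the
    global ones, encouraging consistent decision boundaries."""
    return float(np.mean((local_centroids - global_centroids) ** 2))
```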

Meta Batch-Instance Normalization for Generalizable Person Re-Identification

1 code implementation • CVPR 2021 • Seokeon Choi, Taekyung Kim, Minki Jeong, Hyoungseob Park, Changick Kim

To this end, we combine learnable batch-instance normalization layers with meta-learning and investigate the challenging cases caused by both batch and instance normalization layers.

Data Augmentation Domain Generalization +2
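The core building block, batch-instance normalization with a learnable gate, can be sketched as an inference-only NumPy function. The gate parameter (here called rho) blends the two statistics; affine scale and shift parameters are omitted for brevity:

```python
import numpy as np

def batch_instance_norm(x, rho, eps=1e-5):
    """Blend batch normalization (statistics over N, H, W) with
    instance normalization (statistics over H, W) via a gate
    rho in [0, 1]. x has shape (N, C, H, W)."""
    bn = (x - x.mean(axis=(0, 2, 3), keepdims=True)) / np.sqrt(
        x.var(axis=(0, 2, 3), keepdims=True) + eps)
    inorm = (x - x.mean(axis=(2, 3), keepdims=True)) / np.sqrt(
        x.var(axis=(2, 3), keepdims=True) + eps)
    return rho * bn + (1.0 - rho) * inorm
```

Making rho learnable lets the network decide, per channel or per layer, how much style information (captured by instance statistics) to discard for better domain generalization.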
