Search Results for author: Hyoungseob Park

Found 23 papers, 13 papers with code

Radar-Guided Polynomial Fitting for Metric Depth Estimation

no code implementations • 21 Mar 2025 Patrick Rim, Hyoungseob Park, Vadim Ezhov, Jeffrey Moon, Alex Wong

We propose PolyRad, a novel radar-guided depth estimation method that introduces polynomial fitting to transform scaleless depth predictions from pretrained monocular depth estimation (MDE) models into metric depth maps.

Monocular Depth Estimation
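The core transformation described above can be sketched in a few lines (a toy illustration with made-up data, not the authors' implementation): fit a polynomial that maps relative depth to metric depth at the sparse pixels where radar provides metric measurements, then evaluate it over the whole map.

```python
import numpy as np

def fit_depth_polynomial(rel_depth, radar_depth, radar_mask, degree=2):
    """Least-squares polynomial fit on pixels where radar gives metric depth."""
    x = rel_depth[radar_mask]          # relative depth at radar-hit pixels
    y = radar_depth[radar_mask]        # metric depth measured by radar
    return np.polyfit(x, y, degree)

def apply_polynomial(rel_depth, coeffs):
    """Map the full relative depth map to metric scale."""
    return np.polyval(coeffs, rel_depth)

# Toy data: the true mapping is metric = 2 * relative + 1.
rng = np.random.default_rng(0)
rel = rng.random((8, 8))               # scaleless depth from a pretrained MDE model
mask = np.zeros((8, 8), dtype=bool)
mask[::2, ::2] = True                  # sparse "radar" hit pattern
radar = np.where(mask, 2.0 * rel + 1.0, 0.0)

coeffs = fit_depth_polynomial(rel, radar, mask, degree=1)
metric = apply_polynomial(rel, coeffs)
```

On exact linear toy data the fit recovers the underlying scale and shift; the actual PolyRad pipeline operates on real MDE predictions and radar returns.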

ProtoDepth: Unsupervised Continual Depth Completion with Prototypes

no code implementations • CVPR 2025 Patrick Rim, Hyoungseob Park, S. Gangopadhyay, Ziyao Zeng, Younjoon Chung, Alex Wong

To extend ProtoDepth to the challenging setting where the test-time domain identity is withheld, we propose to learn domain descriptors that enable the model to select the appropriate prototype set for inference.

3D Reconstruction Continual Learning +1
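The descriptor-based selection step can be illustrated with a minimal sketch (the descriptors in ProtoDepth are learned; the fixed vectors below are stand-ins): at test time, pick the prototype set whose domain descriptor is closest to the test sample's embedding.

```python
import numpy as np

def select_prototype_set(embedding, descriptors):
    """Return the index of the domain descriptor nearest to the test embedding."""
    dists = np.linalg.norm(descriptors - embedding, axis=1)
    return int(np.argmin(dists))

descriptors = np.array([[0.0, 0.0],     # descriptor for seen domain 0
                        [10.0, 10.0]])  # descriptor for seen domain 1
test_emb = np.array([9.0, 8.5])         # embedding of a test sample
idx = select_prototype_set(test_emb, descriptors)
```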

TREND: Unsupervised 3D Representation Learning via Temporal Forecasting for LiDAR Perception

no code implementations • 4 Dec 2024 Runjian Chen, Hyoungseob Park, Bo Zhang, Wenqi Shao, Ping Luo, Alex Wong

Labeling LiDAR point clouds is notoriously time- and energy-consuming, which has spurred recent unsupervised 3D representation learning methods that aim to alleviate the labeling burden in LiDAR perception via pretrained weights.

3D Object Detection Contrastive Learning +2

UnCLe: Unsupervised Continual Learning of Depth Completion

no code implementations • 23 Oct 2024 Suchisrit Gangopadhyay, Xien Chen, Michael Chu, Patrick Rim, Hyoungseob Park, Alex Wong

We find that unsupervised continual learning of depth completion is an open problem, and we invite researchers to leverage UnCLe as a development platform.

Continual Learning Depth Completion +1

RSA: Resolving Scale Ambiguities in Monocular Depth Estimators through Language Descriptions

1 code implementation • 3 Oct 2024 Ziyao Zeng, Yangchao Wu, Hyoungseob Park, Daniel Wang, Fengyu Yang, Stefano Soatto, Dong Lao, Byung-Woo Hong, Alex Wong

Our method, RSA, takes as input a text caption describing objects present in an image and outputs the parameters of a linear transformation which can be applied globally to a relative depth map to yield metric-scaled depth predictions.

Monocular Depth Estimation
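The global linear transformation at the heart of RSA is simple to illustrate; in the sketch below the scale and shift are supplied directly, whereas RSA predicts them from a text caption describing the image (that predictor is omitted here).

```python
import numpy as np

def apply_global_linear(rel_depth, scale, shift):
    """Apply one scale/shift pair to every pixel of a relative depth map."""
    return scale * rel_depth + shift

rel = np.array([[0.1, 0.5],
                [0.9, 0.3]])           # relative (scaleless) depth
metric = apply_global_linear(rel, scale=10.0, shift=0.5)
```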

NeuroBind: Towards Unified Multimodal Representations for Neural Signals

no code implementations • 19 Jul 2024 Fengyu Yang, Chao Feng, Daniel Wang, Tianye Wang, Ziyao Zeng, Zhiyang Xu, Hyoungseob Park, Pengliang Ji, Hanbin Zhao, Yuanning Li, Alex Wong

Understanding neural activity and information representation is crucial for advancing knowledge of brain function and cognition.

EEG

All-day Depth Completion

no code implementations • 27 May 2024 Vadim Ezhov, Hyoungseob Park, Zhaoyang Zhang, Rishi Upadhyay, Howard Zhang, Chethan Chinder Chandrappa, Achuta Kadambi, Yunhao Ba, Julie Dorsey, Alex Wong

In poorly illuminated regions where photometric intensities do not afford the inference of local shape, the coarse approximation of scene depth serves as a prior; the uncertainty map is then used with the image to guide refinement through an uncertainty-driven residual learning (URL) scheme.

All Depth Completion +2
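One plausible reading of the uncertainty-driven residual update is sketched below; the exact URL formulation in the paper may differ, so treat the gating rule and all values as assumptions.

```python
import numpy as np

def url_refine(coarse_depth, residual, uncertainty):
    """Blend a coarse depth prior with a residual correction, gated per pixel."""
    return coarse_depth + uncertainty * residual

coarse = np.full((2, 2), 5.0)          # coarse approximation of scene depth
res = np.array([[1.0, -1.0],
                [0.5, 0.0]])           # learned residual (stand-in values)
unc = np.array([[1.0, 0.0],
                [0.5, 1.0]])           # higher value -> prior trusted less
refined = url_refine(coarse, res, unc)
```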

WorDepth: Variational Language Prior for Monocular Depth Estimation

1 code implementation • CVPR 2024 Ziyao Zeng, Daniel Wang, Fengyu Yang, Hyoungseob Park, Yangchao Wu, Stefano Soatto, Byung-Woo Hong, Dong Lao, Alex Wong

To test this, we focus on monocular depth estimation, the problem of predicting a dense depth map from a single image, but with an additional text caption describing the scene.

3D Reconstruction Monocular Depth Estimation

Test-Time Adaptation for Depth Completion

1 code implementation • CVPR 2024 Hyoungseob Park, Anjali Gupta, Alex Wong

During test time, sparse depth features are projected using this map as a proxy for source-domain features and are used as guidance to train a set of auxiliary parameters (i.e., an adaptation layer) to align image and sparse depth features from the target test domain with those of the source domain.

Depth Completion Test-time Adaptation
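The alignment objective described above can be sketched with a single linear adaptation layer trained by gradient descent; the feature extractors are omitted, and the target and proxy source features below are random stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)
target_feats = rng.normal(size=(32, 8))                        # test-domain features
proxy_source = target_feats @ (0.5 * rng.normal(size=(8, 8)))  # proxy source features

W = np.eye(8)                          # adaptation layer, initialized to identity
initial_loss = np.mean((target_feats @ W - proxy_source) ** 2)

lr = 0.01
for _ in range(1000):
    aligned = target_feats @ W
    # Gradient of the mean-squared alignment loss with respect to W.
    grad = 2.0 * target_feats.T @ (aligned - proxy_source) / len(target_feats)
    W -= lr * grad

final_loss = np.mean((target_feats @ W - proxy_source) ** 2)
```

Gradient descent on this quadratic objective steadily pulls the adapted target features toward the proxy source features.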

AugUndo: Scaling Up Augmentations for Monocular Depth Completion and Estimation

no code implementations • 15 Oct 2023 Yangchao Wu, Tian Yu Liu, Hyoungseob Park, Stefano Soatto, Dong Lao, Alex Wong

The sparse depth modality in depth completion has seen even less use, as intensity transformations alter the scale of the 3D scene, and geometric transformations may decimate the sparse points during resampling.

Data Augmentation Depth Completion +1
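The "undo" idea suggested by the title can be sketched as follows (a toy illustration with a self-inverse flip and a placeholder network, not the paper's pipeline): augment the input, run the network, then invert the geometric transform on the output so the loss is computed in the original image frame.

```python
import numpy as np

def augment(img):
    return img[:, ::-1]                # horizontal flip of an (H, W) image

def undo(depth):
    return depth[:, ::-1]              # a flip is its own inverse

def depth_net(img):
    return img.astype(float)           # placeholder "network"

img = np.arange(12).reshape(3, 4)
# Predict on the augmented image, then undo the transform on the output,
# so an unsupervised loss could be computed in the original frame.
pred = undo(depth_net(augment(img)))
```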

Divide-and-Conquer the NAS puzzle in Resource Constrained Federated Learning Systems

no code implementations • 11 May 2023 Yeshwanth Venkatesha, Youngeun Kim, Hyoungseob Park, Priyadarshini Panda

In this paper, we propose DC-NAS -- a divide-and-conquer approach that performs supernet-based Neural Architecture Search (NAS) in a federated system by systematically sampling the search space.

Federated Learning Neural Architecture Search +1

Uncovering the Representation of Spiking Neural Networks Trained with Surrogate Gradient

1 code implementation • 25 Apr 2023 Yuhang Li, Youngeun Kim, Hyoungseob Park, Priyadarshini Panda

However, some essential yet little-studied questions remain about SNNs: Do SNNs trained with surrogate gradient learn different representations from traditional Artificial Neural Networks (ANNs)?

Exploring Temporal Information Dynamics in Spiking Neural Networks

1 code implementation • 26 Nov 2022 Youngeun Kim, Yuhang Li, Hyoungseob Park, Yeshwanth Venkatesha, Anna Hambitzer, Priyadarshini Panda

After training, we observe that information becomes highly concentrated in the first few timesteps, a phenomenon we refer to as temporal information concentration.

Exploring Lottery Ticket Hypothesis in Spiking Neural Networks

1 code implementation • 4 Jul 2022 Youngeun Kim, Yuhang Li, Hyoungseob Park, Yeshwanth Venkatesha, Ruokai Yin, Priyadarshini Panda

To scale up a pruning technique towards deep SNNs, we investigate the Lottery Ticket Hypothesis (LTH), which states that dense networks contain smaller subnetworks (i.e., winning tickets) that achieve performance comparable to the dense networks.
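A generic magnitude-pruning step of the kind used in LTH-style experiments can be sketched as follows (this is the standard recipe, not the paper's SNN-specific procedure): keep the largest-magnitude weights and zero out the rest.

```python
import numpy as np

def magnitude_prune(weights, sparsity):
    """Binary mask keeping the (1 - sparsity) fraction of largest-|w| entries."""
    flat = np.abs(weights).ravel()
    k = int(round(sparsity * flat.size))            # number of weights to remove
    if k == 0:
        return np.ones_like(weights)
    threshold = np.partition(flat, k - 1)[k - 1]    # k-th smallest magnitude
    return (np.abs(weights) > threshold).astype(weights.dtype)

w = np.array([[0.9, -0.1],
              [0.05, -0.8]])
mask = magnitude_prune(w, sparsity=0.5)
ticket = w * mask                                   # candidate "winning ticket" weights
```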

On the Viability of Monocular Depth Pre-training for Semantic Segmentation

1 code implementation • 26 Mar 2022 Dong Lao, Fengyu Yang, Daniel Wang, Hyoungseob Park, Samuel Lu, Alex Wong, Stefano Soatto

We choose monocular depth prediction as the geometric task, and semantic segmentation as the downstream semantic task, and design a collection of empirical tests by exploring different forms of supervision, training pipelines, and data sources for both depth pre-training and semantic fine-tuning.

Depth Prediction Image Classification +3

Addressing Client Drift in Federated Continual Learning with Adaptive Optimization

no code implementations • 24 Mar 2022 Yeshwanth Venkatesha, Youngeun Kim, Hyoungseob Park, Yuhang Li, Priyadarshini Panda

However, there is little attention towards additional challenges emerging when federated aggregation is performed in a continual learning system.

Continual Learning Federated Learning +1

Neuromorphic Data Augmentation for Training Spiking Neural Networks

1 code implementation • 11 Mar 2022 Yuhang Li, Youngeun Kim, Hyoungseob Park, Tamar Geller, Priyadarshini Panda

In an effort to minimize this generalization gap, we propose Neuromorphic Data Augmentation (NDA), a family of geometric augmentations specifically designed for event-based datasets with the goal of significantly stabilizing the SNN training and reducing the generalization gap between training and test performance.

Ranked #1 on Event data classification on CIFAR10-DVS (using extra training data)

Contrastive Learning Data Augmentation +1
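The flavor of augmentation described above can be sketched with generic geometric operations on event frames (the actual NDA augmentation family and parameters are the paper's; these are illustrative): flips and rolls preserve event statistics, whereas photometric jitter would be meaningless for event data.

```python
import numpy as np

def random_flip(events, rng):
    """Horizontally flip an event tensor (T, H, W) with probability 0.5."""
    if rng.random() < 0.5:
        return events[:, :, ::-1].copy()
    return events

def random_roll(events, rng, max_shift=4):
    """Translate events by a small random (dy, dx) offset with wrap-around."""
    dy, dx = rng.integers(-max_shift, max_shift + 1, size=2)
    return np.roll(events, shift=(dy, dx), axis=(1, 2))

rng = np.random.default_rng(0)
ev = rng.integers(0, 2, size=(10, 32, 32))   # toy binary event frames
aug = random_roll(random_flip(ev, rng), rng)
```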

Neural Architecture Search for Spiking Neural Networks

1 code implementation • 23 Jan 2022 Youngeun Kim, Yuhang Li, Hyoungseob Park, Yeshwanth Venkatesha, Priyadarshini Panda

Interestingly, SNASNet found by our search algorithm achieves higher performance with backward connections, demonstrating the importance of designing SNN architecture for suitably using temporal information.

Neural Architecture Search

Robust Federated Learning with Noisy Labels

1 code implementation • 3 Dec 2020 Seunghan Yang, Hyoungseob Park, Junyoung Byun, Changick Kim

To solve these problems, we introduce a novel federated learning scheme in which the server cooperates with local models to maintain consistent decision boundaries by interchanging class-wise centroids.

Federated Learning Learning with noisy labels
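The class-wise centroid summary that clients and the server exchange can be sketched as a per-class feature average (feature dimensions and the handling of absent classes are illustrative assumptions).

```python
import numpy as np

def class_centroids(features, labels, num_classes):
    """Average feature vector per class; zeros for classes absent locally."""
    centroids = np.zeros((num_classes, features.shape[1]))
    for c in range(num_classes):
        members = features[labels == c]
        if len(members) > 0:
            centroids[c] = members.mean(axis=0)
    return centroids

feats = np.array([[1.0, 0.0],
                  [3.0, 0.0],
                  [0.0, 2.0]])
labels = np.array([0, 0, 1])
cents = class_centroids(feats, labels, num_classes=2)
```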

Meta Batch-Instance Normalization for Generalizable Person Re-Identification

1 code implementation • CVPR 2021 Seokeon Choi, Taekyung Kim, Minki Jeong, Hyoungseob Park, Changick Kim

To this end, we combine learnable batch-instance normalization layers with meta-learning and investigate the challenging cases caused by both batch and instance normalization layers.

Data Augmentation Domain Generalization +3
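A batch-instance normalization layer can be sketched as a learnable gate between batch-norm and instance-norm statistics (a simplified version without affine parameters; the meta-learning procedure from the paper is omitted).

```python
import numpy as np

def batch_instance_norm(x, rho, eps=1e-5):
    """x: (N, C, H, W); rho in [0, 1] blends batch-norm and instance-norm."""
    bn = (x - x.mean(axis=(0, 2, 3), keepdims=True)) / np.sqrt(
        x.var(axis=(0, 2, 3), keepdims=True) + eps)     # batch statistics
    inn = (x - x.mean(axis=(2, 3), keepdims=True)) / np.sqrt(
        x.var(axis=(2, 3), keepdims=True) + eps)        # instance statistics
    return rho * bn + (1.0 - rho) * inn

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 3, 8, 8))
y = batch_instance_norm(x, rho=0.5)
```

Setting rho to 1 recovers pure batch normalization; rho of 0 recovers instance normalization.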
