Search Results for author: Namhoon Lee

Found 14 papers, 9 papers with code

Forecasting Interactive Dynamics of Pedestrians with Fictitious Play

no code implementations • CVPR 2017 • Wei-Chiu Ma, De-An Huang, Namhoon Lee, Kris M. Kitani

We develop predictive models of pedestrian dynamics by encoding the coupled nature of multi-pedestrian interaction with game theory, and we use deep learning-based visual analysis to estimate person-specific behavior parameters.

Decision Making
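
As a rough illustration of the game-theoretic ingredient, here is a minimal sketch of fictitious play in a two-player matrix game, where each player repeatedly best-responds to the empirical frequency of the opponent's past actions. The payoff matrices are hypothetical, and the paper's coupling with visual analysis is not modeled here.

```python
# Fictitious play in a toy two-player matrix game (illustrative only).
import numpy as np

A = np.array([[1.0, 0.0], [0.0, 1.0]])  # row player's payoffs (hypothetical)
B = np.array([[0.0, 1.0], [1.0, 0.0]])  # column player's payoffs (hypothetical)

counts_row = np.ones(2)  # empirical counts of row player's actions
counts_col = np.ones(2)  # empirical counts of column player's actions

for _ in range(1000):
    # Each player best-responds to the opponent's empirical mixed strategy.
    belief_col = counts_col / counts_col.sum()
    belief_row = counts_row / counts_row.sum()
    a_row = np.argmax(A @ belief_col)   # row player's best response
    a_col = np.argmax(belief_row @ B)   # column player's best response
    counts_row[a_row] += 1
    counts_col[a_col] += 1

print("empirical strategies:",
      counts_row / counts_row.sum(),
      counts_col / counts_col.sum())
```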

Visual Compiler: Synthesizing a Scene-Specific Pedestrian Detector and Pose Estimator

no code implementations • 15 Dec 2016 • Namhoon Lee, Xinshuo Weng, Vishnu Naresh Boddeti, Yu Zhang, Fares Beainy, Kris Kitani, Takeo Kanade

We introduce the concept of a Visual Compiler that generates a scene-specific pedestrian detector and pose estimator without any pedestrian observations.

Human Detection • Pose Estimation

DESIRE: Distant Future Prediction in Dynamic Scenes with Interacting Agents

3 code implementations • CVPR 2017 • Namhoon Lee, Wongun Choi, Paul Vernaza, Christopher B. Choy, Philip H. S. Torr, Manmohan Chandraker

DESIRE effectively predicts future locations of objects in multiple scenes by 1) accounting for the multi-modal nature of future prediction (i.e., given the same context, the future may vary), 2) foreseeing potential future outcomes and making a strategic prediction based on them, and 3) reasoning not only from past motion history but also from the scene context and the interactions among agents.

Future Prediction • Multi Future Trajectory Prediction • +1
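
For intuition, below is a toy sketch of the sample-then-rank pattern this description suggests: decode several latent samples into candidate future trajectories and score each one. It is not the paper's full CVAE/IOC architecture, and all module names and sizes are hypothetical.

```python
# Toy multi-modal trajectory prediction: sample latents, decode, rank.
import torch
import torch.nn as nn

class MultiModalPredictor(nn.Module):
    def __init__(self, horizon=12, z_dim=16, hidden=64):
        super().__init__()
        self.horizon, self.z_dim = horizon, z_dim
        self.past_enc = nn.GRU(2, hidden, batch_first=True)  # encode past (x, y)
        self.decode = nn.Sequential(
            nn.Linear(hidden + z_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, horizon * 2))                   # future offsets
        self.score = nn.Sequential(                           # rank candidates
            nn.Linear(hidden + horizon * 2, hidden), nn.ReLU(),
            nn.Linear(hidden, 1))

    def forward(self, past, num_samples=20):
        _, h = self.past_enc(past)                 # (1, B, hidden)
        h = h.squeeze(0)                           # (B, hidden)
        futures, scores = [], []
        for _ in range(num_samples):
            z = torch.randn(h.size(0), self.z_dim)            # latent sample
            traj = self.decode(torch.cat([h, z], dim=-1))
            futures.append(traj.view(-1, self.horizon, 2))
            scores.append(self.score(torch.cat([h, traj], dim=-1)))
        return torch.stack(futures, 1), torch.cat(scores, 1)  # (B,K,T,2),(B,K)

past = torch.randn(4, 8, 2)                        # 4 agents, 8 past steps
trajs, scores = MultiModalPredictor()(past)
best = trajs[torch.arange(4), scores.argmax(1)]    # highest-scoring sample
```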

Learn To Pay Attention

4 code implementations • ICLR 2018 • Saumya Jetley, Nicholas A. Lord, Namhoon Lee, Philip H. S. Torr

We propose an end-to-end-trainable attention module for convolutional neural network (CNN) architectures built for image classification.

Adversarial Attack • General Classification • +3
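
A minimal sketch of such a trainable attention module, assuming a dot-product compatibility between local conv features and a projected global feature (layer sizes are hypothetical):

```python
# Attention over spatial locations: compatibility scores, softmax, weighted pool.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentionPool(nn.Module):
    def __init__(self, channels):
        super().__init__()
        # Project the global feature into the local feature space before
        # taking dot products (one simple choice of compatibility function).
        self.project = nn.Linear(channels, channels)

    def forward(self, local_feats, global_feat):
        # local_feats: (B, C, H, W) from an intermediate conv layer
        # global_feat: (B, C) from the end of the network
        B, C, H, W = local_feats.shape
        l = local_feats.flatten(2)                    # (B, C, H*W)
        g = self.project(global_feat).unsqueeze(2)    # (B, C, 1)
        compat = (l * g).sum(dim=1)                   # (B, H*W) scores
        attn = F.softmax(compat, dim=1)               # weights over locations
        pooled = (l * attn.unsqueeze(1)).sum(dim=2)   # (B, C) attended feature
        return pooled, attn.view(B, H, W)

feats = torch.randn(2, 64, 8, 8)
g = torch.randn(2, 64)
pooled, attn_map = AttentionPool(64)(feats, g)
```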

SNIP: Single-shot Network Pruning based on Connection Sensitivity

8 code implementations • ICLR 2019 • Namhoon Lee, Thalaiyasingam Ajanthan, Philip H. S. Torr

We introduce a saliency criterion based on connection sensitivity that identifies structurally important connections in the network for a given task.

Image Classification • Network Pruning • +1
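
A minimal sketch of this single-shot pruning recipe, assuming a placeholder model and data: score each connection by |gradient × weight| from one mini-batch at initialization and keep only the top-k connections.

```python
# Single-shot pruning via connection sensitivity (SNIP-style sketch).
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(784, 300), nn.ReLU(), nn.Linear(300, 10))
x, y = torch.randn(128, 784), torch.randint(0, 10, (128,))  # placeholder batch

params = list(model.parameters())
loss = nn.functional.cross_entropy(model(x), y)
grads = torch.autograd.grad(loss, params)

# Connection sensitivity: |g * w|, normalized over all connections.
scores = torch.cat([(g * p).abs().flatten() for g, p in zip(grads, params)])
scores = scores / scores.sum()

keep = int(0.1 * scores.numel())                 # keep 10% of connections
threshold = scores.topk(keep).values.min()

offset = 0
for p in params:
    n = p.numel()
    mask = (scores[offset:offset + n] >= threshold).float().view_as(p)
    p.data *= mask                               # zero out pruned connections
    offset += n
```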

A Signal Propagation Perspective for Pruning Neural Networks at Initialization

1 code implementation • ICLR 2020 • Namhoon Lee, Thalaiyasingam Ajanthan, Stephen Gould, Philip H. S. Torr

Alternatively, a recent approach shows that pruning can be done at initialization prior to training, based on a saliency criterion called connection sensitivity.

Image Classification • Network Pruning

Understanding the Effects of Data Parallelism and Sparsity on Neural Network Training

no code implementations • ICLR 2021 • Namhoon Lee, Thalaiyasingam Ajanthan, Philip H. S. Torr, Martin Jaggi

As a result, we find across various workloads (data set, network model, and optimization algorithm) that there exists a general scaling trend between batch size and the number of training steps to convergence under data parallelism, and further that training becomes more difficult as sparsity increases.

Network Pruning

Meta-Learning Sparse Implicit Neural Representations

1 code implementation • NeurIPS 2021 • Jaeho Lee, Jihoon Tack, Namhoon Lee, Jinwoo Shin

Implicit neural representations are a promising new avenue for representing general signals: a continuous function, parameterized as a neural network, maps the domain of a signal to its codomain, for example, from the spatial coordinates of an image to its pixel values.

Meta-Learning
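
A minimal sketch of the underlying idea of an implicit neural representation: fit an MLP from 2-D coordinates to pixel values by regression. The meta-learning and sparsity components of the paper are not shown, and the "image" below is random placeholder data.

```python
# Fit an implicit neural representation of a single (placeholder) image.
import torch
import torch.nn as nn

H, W = 32, 32
image = torch.rand(H * W, 3)                       # placeholder "signal"

ys, xs = torch.meshgrid(torch.linspace(-1, 1, H),
                        torch.linspace(-1, 1, W), indexing="ij")
coords = torch.stack([xs, ys], dim=-1).reshape(-1, 2)

inr = nn.Sequential(nn.Linear(2, 256), nn.ReLU(),
                    nn.Linear(256, 256), nn.ReLU(),
                    nn.Linear(256, 3), nn.Sigmoid())
opt = torch.optim.Adam(inr.parameters(), lr=1e-3)

for step in range(500):                            # plain coordinate regression
    opt.zero_grad()
    loss = nn.functional.mse_loss(inr(coords), image)
    loss.backward()
    opt.step()

reconstruction = inr(coords).detach().reshape(H, W, 3)
```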

SpReME: Sparse Regression for Multi-Environment Dynamic Systems

1 code implementation • 12 Feb 2023 • Moonjeong Park, Youngbin Choi, Namhoon Lee, Dongwoo Kim

Learning dynamical systems is a promising avenue for scientific discovery.

Regression
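
For context, a minimal sketch of discovering dynamics with sparse regression, the general idea this entry builds on (the paper's multi-environment masking is not reproduced here): fit dx/dt against a library of candidate terms and iteratively threshold small coefficients.

```python
# Sparse regression over a library of candidate terms (SINDy-style sketch).
import numpy as np

t = np.linspace(0, 10, 1000)
x = np.exp(-0.5 * t)                      # toy system: dx/dt = -0.5 x
dx = np.gradient(x, t)

# Library of candidate terms: [1, x, x^2]
Theta = np.stack([np.ones_like(x), x, x ** 2], axis=1)

coef = np.linalg.lstsq(Theta, dx, rcond=None)[0]
for _ in range(10):                       # sequential thresholding
    small = np.abs(coef) < 0.05
    coef[small] = 0.0
    active = ~small
    if active.any():                      # refit on the surviving terms
        coef[active] = np.linalg.lstsq(Theta[:, active], dx, rcond=None)[0]

print("recovered coefficients:", coef)    # expect roughly [0, -0.5, 0]
```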

MaskedKD: Efficient Distillation of Vision Transformers with Masked Images

no code implementations • 21 Feb 2023 • Seungwoo Son, Namhoon Lee, Jaeho Lee

We present MaskedKD, a simple yet effective strategy that can significantly reduce the cost of distilling ViTs without sacrificing the prediction accuracy of the student model.

Knowledge Distillation
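
A toy sketch of the masked-teacher idea: drop a fraction of image patches before the teacher's forward pass and distill its predictions into the student. Patches are dropped at random here for simplicity (the paper selects them more carefully), and both models are hypothetical stand-ins for ViTs.

```python
# Distillation with a masked teacher input (toy sketch).
import torch
import torch.nn as nn
import torch.nn.functional as F

def patchify(img, patch=4):                # (B, C, H, W) -> (B, N, C*p*p)
    B, C, H, W = img.shape
    x = img.unfold(2, patch, patch).unfold(3, patch, patch)
    return x.permute(0, 2, 3, 1, 4, 5).reshape(B, -1, C * patch * patch)

class TinyViT(nn.Module):                  # hypothetical ViT stand-in
    def __init__(self, dim=64, patch_dim=48, classes=10):
        super().__init__()
        self.embed = nn.Linear(patch_dim, dim)
        layer = nn.TransformerEncoderLayer(dim, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(dim, classes)

    def forward(self, tokens):
        h = self.encoder(self.embed(tokens))
        return self.head(h.mean(dim=1))    # mean-pool instead of a CLS token

teacher, student = TinyViT(), TinyViT()
img = torch.randn(8, 3, 32, 32)
tokens = patchify(img)                     # (8, 64, 48)

# Keep a random 50% of patches for the (frozen) teacher's forward pass.
keep = torch.rand(tokens.size(1)).argsort()[: tokens.size(1) // 2]
with torch.no_grad():
    t_logits = teacher(tokens[:, keep])

s_logits = student(tokens)                 # student still sees all patches
kd_loss = F.kl_div(F.log_softmax(s_logits, -1),
                   F.softmax(t_logits, -1), reduction="batchmean")
```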

A Closer Look at the Intervention Procedure of Concept Bottleneck Models

1 code implementation • 28 Feb 2023 • Sungbin Shin, Yohan Jo, Sungsoo Ahn, Namhoon Lee

Concept bottleneck models (CBMs) are a class of interpretable neural network models that predict the target response of a given input based on its high-level concepts.

Fairness
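
A minimal sketch of a concept bottleneck model and of an intervention, which simply overwrites a predicted concept with its ground-truth value; all sizes are hypothetical.

```python
# Concept bottleneck: input -> concepts -> label; intervene by overwriting.
import torch
import torch.nn as nn

class CBM(nn.Module):
    def __init__(self, in_dim=128, n_concepts=16, n_classes=10):
        super().__init__()
        self.concept_net = nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU(),
                                         nn.Linear(64, n_concepts))
        self.label_net = nn.Linear(n_concepts, n_classes)  # uses concepts only

    def forward(self, x, interventions=None):
        c = torch.sigmoid(self.concept_net(x))    # predicted concepts in [0, 1]
        if interventions is not None:             # {concept index: true value}
            c = c.clone()
            for idx, value in interventions.items():
                c[:, idx] = value                 # expert-corrected concept
        return self.label_net(c), c

model = CBM()
x = torch.randn(4, 128)
logits, concepts = model(x)                        # ordinary prediction
logits_fixed, _ = model(x, interventions={3: 1.0}) # intervene on concept 3
```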

FedFwd: Federated Learning without Backpropagation

no code implementations • 3 Sep 2023 • Seonghwan Park, Dahun Shin, Jinseok Chung, Namhoon Lee

In federated learning (FL), clients with limited resources can disrupt training efficiency.

Federated Learning
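
For context on the FL setting, a minimal sketch of one federated averaging round; note that the local updates below use ordinary backprop, whereas the paper's point is to replace it with forward-only updates, which are not reproduced here. Model and data are placeholders.

```python
# One FedAvg round: local training on each client, then server averaging.
import copy
import torch
import torch.nn as nn

global_model = nn.Linear(20, 2)
client_data = [(torch.randn(32, 20), torch.randint(0, 2, (32,)))
               for _ in range(5)]                       # 5 placeholder clients

client_states = []
for x, y in client_data:
    local = copy.deepcopy(global_model)                 # start from global
    opt = torch.optim.SGD(local.parameters(), lr=0.1)
    for _ in range(5):                                  # local epochs
        opt.zero_grad()
        nn.functional.cross_entropy(local(x), y).backward()
        opt.step()
    client_states.append(local.state_dict())

# Server: average client parameters into the new global model.
avg = {k: torch.stack([s[k] for s in client_states]).mean(0)
       for k in client_states[0]}
global_model.load_state_dict(avg)
```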

Analyzing Sharpness-aware Minimization under Overparameterization

1 code implementation • 29 Nov 2023 • Sungbin Shin, Dongyeop Lee, Maksym Andriushchenko, Namhoon Lee

Training an overparameterized neural network can yield minimizers of different generalization capabilities despite the same level of training loss.
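
A minimal sketch of one sharpness-aware minimization (SAM) step, with a placeholder model and data: take an ascent step of norm rho along the gradient, compute the gradient at the perturbed weights, then undo the perturbation and update.

```python
# One SAM step: ascend within the rho-ball, re-evaluate gradient, update.
import torch
import torch.nn as nn

model = nn.Linear(10, 2)
opt = torch.optim.SGD(model.parameters(), lr=0.1)
x, y = torch.randn(32, 10), torch.randint(0, 2, (32,))
rho = 0.05

# First pass: gradient at the current weights.
loss = nn.functional.cross_entropy(model(x), y)
loss.backward()
grad_norm = torch.norm(torch.stack(
    [p.grad.norm() for p in model.parameters()]))

# Ascent step toward the (approximate) worst case within the rho-ball.
eps = []
with torch.no_grad():
    for p in model.parameters():
        e = rho * p.grad / (grad_norm + 1e-12)
        p.add_(e)
        eps.append(e)

# Second pass: gradient at the perturbed weights.
opt.zero_grad()
nn.functional.cross_entropy(model(x), y).backward()

# Undo the perturbation, then update with the sharpness-aware gradient.
with torch.no_grad():
    for p, e in zip(model.parameters(), eps):
        p.sub_(e)
opt.step()
```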
