Search Results for author: Koichiro Niinuma

Found 7 papers, 1 paper with code

CoGS: Controllable Gaussian Splatting

No code implementation · 9 Dec 2023 · Heng Yu, Joel Julin, Zoltán Á. Milacski, Koichiro Niinuma, László A. Jeni

We present CoGS, a method for Controllable Gaussian Splatting that enables direct manipulation of scene elements, offering real-time control of dynamic scenes without the prerequisite of pre-computing control signals.

Zero-Shot Video Question Answering with Procedural Programs

No code implementation · 1 Dec 2023 · Rohan Choudhury, Koichiro Niinuma, Kris M. Kitani, László A. Jeni

We propose to answer zero-shot questions about videos by generating short procedural programs that derive a final answer from solving a sequence of visual subtasks.
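The subtask decomposition described above can be illustrated with a toy procedural program. Everything here is a hypothetical stand-in, not the paper's actual API: `find_frames_with` plays the role of a visual module the generated program would call, and the frame labels are mock data.

```python
# Toy sketch of a "procedural program" that answers a video question by
# solving a sequence of visual subtasks. The helper below is a
# hypothetical stand-in for a real visual detector.

def find_frames_with(video, label):
    """Hypothetical detector: indices of frames whose labels include `label`."""
    return [i for i, frame_labels in enumerate(video) if label in frame_labels]

def answer_question(video):
    """Q: 'Does the person pick up the cup before sitting down?'"""
    pickup = find_frames_with(video, "pick up cup")   # subtask 1: locate event A
    sit = find_frames_with(video, "sit down")         # subtask 2: locate event B
    return bool(pickup and sit and min(pickup) < min(sit))  # subtask 3: compare order

# A tiny mock video: one list of event labels per frame.
video = [["walk"], ["pick up cup"], ["sit down"]]
print(answer_question(video))  # True
```

The point of the approach is that each subtask is a simple, checkable call, so the final answer is derived by ordinary program execution rather than a single opaque prediction.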

Tasks: Code Generation · Language Modelling · +6

Occlusion Sensitivity Analysis with Augmentation Subspace Perturbation in Deep Feature Space

No code implementation · 25 Nov 2023 · Pedro Valois, Koichiro Niinuma, Kazuhiro Fukui

In this study, we introduce the Occlusion Sensitivity Analysis with Deep Feature Augmentation Subspace (OSA-DAS), a novel perturbation-based interpretability approach for computer vision.

DyLiN: Making Light Field Networks Dynamic

No code implementation · CVPR 2023 · Heng Yu, Joel Julin, Zoltán Á. Milacski, Koichiro Niinuma, László A. Jeni

Light Field Networks, re-formulations of radiance fields over oriented rays, are orders of magnitude faster than their coordinate-network counterparts, and provide higher fidelity in representing 3D structures from 2D observations.

Tasks: Attribute · Knowledge Distillation

CoNFies: Controllable Neural Face Avatars

No code implementation · 16 Nov 2022 · Heng Yu, Koichiro Niinuma, László A. Jeni

Neural Radiance Fields (NeRF) are compelling techniques for modeling dynamic 3D scenes from 2D image collections.

Tasks: Action Recognition

Adaptive Occlusion Sensitivity Analysis for Visually Explaining Video Recognition Networks

1 code implementation · 26 Jul 2022 · Tomoki Uchiyama, Naoya Sogi, Satoshi Iizuka, Koichiro Niinuma, Kazuhiro Fukui

The key idea here is to occlude a specific volume of data with a 3D mask in the input spatio-temporal data space and then measure the degree of change in the output score.
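The 3D masking step can be sketched in a few lines. This is a minimal illustration of the idea only: the function name is my own, and the mean-intensity `score_fn` is a placeholder for a real video recognition network, not the paper's implementation.

```python
import numpy as np

def occlusion_score_change(video, score_fn, t0, t1, y0, y1, x0, x1, fill=0.0):
    """Occlude a 3D spatio-temporal volume of `video` (time, height, width)
    and return the resulting drop in the model's output score."""
    occluded = video.copy()
    occluded[t0:t1, y0:y1, x0:x1] = fill          # 3D mask over (t, y, x)
    return score_fn(video) - score_fn(occluded)   # large drop => important region

# Toy example: the "model" scores the mean intensity of the clip.
video = np.ones((8, 16, 16), dtype=np.float32)    # 8 frames of 16x16 pixels
score_fn = lambda v: float(v.mean())
delta = occlusion_score_change(video, score_fn, 0, 4, 0, 8, 0, 8)
print(delta)  # 0.125: zeroing 256 of 2048 voxels lowers the mean by 1/8
```

Sliding such a mask over time and space yields a sensitivity map: regions whose occlusion causes the largest score drop are the ones the network relies on most.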

Tasks: Decision Making · Image Classification · +2

Synthetic Expressions are Better Than Real for Learning to Detect Facial Actions

No code implementation · 21 Oct 2020 · Koichiro Niinuma, Itir Onal Ertugrul, Jeffrey F. Cohn, László A. Jeni

Critical obstacles in training classifiers to detect facial actions are the limited sizes of annotated video databases and the relatively low frequencies of occurrence of many actions.

Tasks: Facial Expression Generation
