Search Results for author: Kazuki Yoshiyama

Found 6 papers, 3 papers with code

NDJIR: Neural Direct and Joint Inverse Rendering for Geometry, Lights, and Materials of Real Object

1 code implementation • 2 Feb 2023 • Kazuki Yoshiyama, Takuya Narihira

The goal of inverse rendering is to decompose geometry, lights, and materials given posed multi-view images.

Inverse Rendering
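For context, inverse rendering methods of this kind invert a physically based image formation model; the following is the standard rendering equation that such pipelines generally target, not a formula quoted from the NDJIR paper:

$$ L_o(\mathbf{x}, \omega_o) = \int_{\Omega} f_r(\mathbf{x}, \omega_i, \omega_o)\, L_i(\mathbf{x}, \omega_i)\, (\mathbf{n} \cdot \omega_i)\, d\omega_i $$

Here $f_r$ is the material (BRDF), $L_i$ the incident light, and $\mathbf{n}$ the surface normal determined by geometry; every observed pixel mixes all three factors, which is what makes the joint decomposition from posed multi-view images ill-posed.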

Neural Network Libraries: A Deep Learning Framework Designed from Engineers' Perspectives

1 code implementation • 12 Feb 2021 • Takuya Narihira, Javier Alonsogarcia, Fabien Cardinaux, Akio Hayakawa, Masato Ishii, Kazunori Iwaki, Thomas Kemp, Yoshiyuki Kobayashi, Lukas Mauch, Akira Nakamura, Yukio Obuchi, Andrew Shin, Kenji Suzuki, Stephen Tiedmann, Stefan Uhlich, Takuya Yashima, Kazuki Yoshiyama

While there exists a plethora of deep learning tools and frameworks, the fast-growing complexity of the field brings new demands and challenges, such as more flexible network design, speedy computation in distributed settings, and compatibility between different tools.
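As a hedged illustration (not taken from the paper), building and training a small classifier with Neural Network Libraries (nnabla) looks roughly like the sketch below; the layer sizes, names, and hyperparameters are arbitrary assumptions:

```python
# Minimal sketch of one training step with nnabla (Neural Network Libraries).
# Shapes, layer names, and hyperparameters are illustrative assumptions.
import numpy as np
import nnabla as nn
import nnabla.functions as F
import nnabla.parametric_functions as PF
import nnabla.solvers as S

batch = 8
x = nn.Variable((batch, 784))   # flattened input images
t = nn.Variable((batch, 1))     # integer class labels

# Define a static computation graph: two affine layers with ReLU in between.
h = F.relu(PF.affine(x, 128, name="fc1"))
y = PF.affine(h, 10, name="fc2")
loss = F.mean(F.softmax_cross_entropy(y, t))

solver = S.Adam(alpha=1e-3)
solver.set_parameters(nn.get_parameters())

# Feed dummy data and run one forward/backward/update cycle.
x.d = np.random.randn(batch, 784).astype(np.float32)
t.d = np.random.randint(0, 10, size=(batch, 1))
loss.forward()
solver.zero_grad()
loss.backward()
solver.update()
```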

Efficient Sampling for Predictor-Based Neural Architecture Search

no code implementations • 24 Nov 2020 • Lukas Mauch, Stephen Tiedemann, Javier Alonso Garcia, Bac Nguyen Cong, Kazuki Yoshiyama, Fabien Cardinaux, Thomas Kemp

Usually, we compute the proxy for all DNNs in the network search space and pick those that maximize the proxy as candidates for optimization.

Neural Architecture Search
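As the snippet above describes, predictor-based NAS scores every candidate architecture with a cheap proxy and keeps the top-ranked ones as candidates for full optimization. A generic, hedged sketch of that selection step (not the paper's proposed efficient-sampling scheme) might look like:

```python
import numpy as np

def score_candidates(predictor, architectures):
    """Rank architectures by a learned performance proxy (e.g. predicted accuracy)."""
    scores = np.array([predictor(a) for a in architectures])
    return np.argsort(scores)[::-1]  # indices, best proxy score first

# Toy usage: architectures encoded as fixed-length feature vectors,
# with a dummy predictor standing in for a trained performance model.
toy_predictor = lambda a: a.sum()
search_space = [np.random.rand(16) for _ in range(1000)]
top_candidates = score_candidates(toy_predictor, search_space)[:10]
```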

Iteratively Training Look-Up Tables for Network Quantization

no code implementations • 13 Nov 2018 • Fabien Cardinaux, Stefan Uhlich, Kazuki Yoshiyama, Javier Alonso García, Stephen Tiedemann, Thomas Kemp, Akira Nakamura

In this paper we introduce a training method called look-up table quantization (LUT-Q), which learns a dictionary and assigns each weight to one of the dictionary's values.

Object Detection +1
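A hedged, post-training approximation of the dictionary idea is sketched below; the actual LUT-Q method learns the dictionary and assignments iteratively during network training, which is not reproduced here:

```python
import numpy as np

def lut_quantize(weights, k=16, iters=10):
    """Toy LUT-style quantization: fit a k-entry dictionary to the weights with
    1-D k-means and map each weight to its nearest dictionary value.
    Illustrative only; not the training-time LUT-Q procedure from the paper."""
    w = weights.ravel()
    centers = np.linspace(w.min(), w.max(), k)  # initial dictionary
    for _ in range(iters):
        idx = np.abs(w[:, None] - centers[None, :]).argmin(axis=1)
        for j in range(k):
            if np.any(idx == j):
                centers[j] = w[idx == j].mean()  # refine dictionary entry
    idx = np.abs(w[:, None] - centers[None, :]).argmin(axis=1)
    return centers[idx].reshape(weights.shape), centers, idx
```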
