no code implementations • 27 Jan 2024 • Takanobu Furuhashi, Hidekata Hontani, Tatsuya Yokota
We propose a convex and fast signal reconstruction method for block sparsity under an arbitrary linear transform with an unknown block structure.
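Block-sparse reconstruction of this kind is typically built on a group-wise shrinkage operator. As a minimal illustration (not the paper's algorithm, which additionally handles an unknown block structure and a linear transform), the following sketch applies the proximal operator of a group-l2 penalty with the blocks assumed known:

```python
import numpy as np

def block_soft_threshold(x, blocks, lam):
    """Proximal operator of the group-l2 penalty:
    each block is scaled toward zero, and zeroed if its norm is below lam."""
    out = np.zeros_like(x, dtype=float)
    for idx in blocks:
        g = x[idx]
        norm = np.linalg.norm(g)
        if norm > lam:
            out[idx] = (1.0 - lam / norm) * g
    return out

x = np.array([3.0, 4.0, 0.1, -0.1])
blocks = [np.array([0, 1]), np.array([2, 3])]
y = block_soft_threshold(x, blocks, lam=1.0)
# first block (norm 5) is shrunk; second block (norm ~0.14) is zeroed
```

This promotes sparsity at the level of whole blocks rather than individual entries, which is the defining feature of block-sparse models.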
no code implementations • 19 Dec 2023 • Manabu Mukai, Hidekata Hontani, Tatsuya Yokota
In this paper, we propose a new unified optimization algorithm for general tensor decomposition, formulated as an inverse problem for low-rank tensors under general linear observation models.
no code implementations • 18 Mar 2022 • Tatsuya Yokota
Based on the inverse delay-embedding model, we propose constraining the matrix to be rank-1 with smooth factor vectors.
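Delay-embedding (Hankelization) turns a 1-D signal into a matrix of overlapping sliding windows, so that low-rank structure in the embedded matrix captures self-similarity of the signal. A minimal sketch of the forward transform (notation and function name are mine, not the paper's):

```python
import numpy as np

def delay_embedding(signal, tau):
    """Hankelize a 1-D signal of length N into a tau x (N - tau + 1)
    matrix whose columns are consecutive sliding windows."""
    n = len(signal)
    return np.stack([signal[i:i + n - tau + 1] for i in range(tau)])

s = np.arange(6.0)            # [0, 1, 2, 3, 4, 5]
H = delay_embedding(s, tau=3)  # shape (3, 4); anti-diagonals are constant
```

Constraining `H` to be rank-1 with smooth factor vectors, as the paper proposes, then corresponds to a strong structural prior on the original signal.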
no code implementations • 10 Mar 2022 • Tatsuya Yokota, Hidekata Hontani
This study proposes a framework for manifold learning of image patches using the concept of equivalence classes: manifold modeling in quotient space (MMQS).
no code implementations • CVPR 2022 • Ryuki Yamamoto, Hidekata Hontani, Akira Imakura, Tatsuya Yokota
Tensor completion using the multiway delay-embedding transform (MDT) (or Hankelization) suffers from large memory requirements and high computational cost, despite its strong potential for image modeling.
1 code implementation • 25 Feb 2020 • Qiquan Shi, Jiaming Yin, Jiajun Cai, Andrzej Cichocki, Tatsuya Yokota, Lei Chen, Mingxuan Yuan, Jia Zeng
This work proposes a novel approach for multiple time series forecasting.
no code implementations • ICCV 2019 • Tatsuya Yokota, Kazuya Kawai, Muneyuki Sakata, Yuichi Kimura, Hidekata Hontani
Experimental results show that the proposed method outperforms conventional methods and can extract spatial factors that represent the homogeneous tissues.
no code implementations • 25 Sep 2019 • Tatsuya Yokota, Hidekata Hontani, Qibin Zhao, Andrzej Cichocki
The proposed approach divides the convolution into "delay-embedding" and "transformation (i.e., encoder-decoder)" steps, yielding a simple but essential image/tensor modeling method that is closely related to dynamical systems and self-similarity.
1 code implementation • 8 Aug 2019 • Tatsuya Yokota, Hidekata Hontani, Qibin Zhao, Andrzej Cichocki
The proposed approach divides the convolution into "delay-embedding" and "transformation (i.e., encoder-decoder)" steps, yielding a simple but essential image/tensor modeling method that is closely related to dynamical systems and self-similarity.
no code implementations • CVPR 2018 • Tatsuya Yokota, Burak Erem, Seyhmus Guler, Simon K. Warfield, Hidekata Hontani
The higher-order tensor is then recovered by Tucker-based low-rank tensor factorization.
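Tucker-based low-rank factorization decomposes a tensor into a small core tensor multiplied by a factor matrix along each mode. A common way to compute it is the truncated higher-order SVD (HOSVD); the sketch below is a generic HOSVD, not the paper's specific recovery algorithm:

```python
import numpy as np

def unfold(T, mode):
    """Mode-n unfolding: move `mode` to the front and flatten the rest."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def mode_product(T, M, mode):
    """Mode-n product of tensor T with matrix M along `mode`."""
    return np.moveaxis(np.tensordot(M, np.moveaxis(T, mode, 0), axes=1), 0, mode)

def hosvd(T, ranks):
    """Truncated HOSVD: returns a Tucker core and factor matrices."""
    factors = []
    for mode, r in enumerate(ranks):
        U, _, _ = np.linalg.svd(unfold(T, mode), full_matrices=False)
        factors.append(U[:, :r])
    core = T
    for mode, U in enumerate(factors):
        core = mode_product(core, U.T, mode)  # project onto the factor bases
    return core, factors

rng = np.random.default_rng(0)
T = rng.standard_normal((4, 5, 6))
core, factors = hosvd(T, ranks=(4, 5, 6))  # full ranks -> exact reconstruction
R = core
for mode, U in enumerate(factors):
    R = mode_product(R, U, mode)
```

With ranks smaller than the tensor dimensions, the same code yields a low-rank Tucker approximation, which is the building block of Tucker-based completion.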
no code implementations • 10 Jan 2018 • Tatsuya Yokota, Hidekata Hontani
In terms of trade-off tuning, the noisy tensor completion problem with a "noise inequality constraint" is a better choice than "regularization," because a good noise threshold can easily be bounded by the noise standard deviation.
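The contrast between the two formulations can be written out as follows (my notation, not copied from the paper): the regularized form needs a trade-off parameter $\lambda$ with no obvious physical scale, while the constrained form uses a threshold $\delta$ tied directly to the noise level.

```latex
% Regularized form: the trade-off parameter \lambda is hard to tune
\min_{\mathcal{X}} \;
  \lambda \, \mathrm{reg}(\mathcal{X})
  + \tfrac{1}{2}\bigl\| P_\Omega(\mathcal{X} - \mathcal{T}) \bigr\|_F^2

% Noise-inequality form: \delta can be bounded using the noise
% standard deviation \sigma, e.g. \delta \approx \sigma \sqrt{|\Omega|}
% for i.i.d. Gaussian noise on the observed entries \Omega
\min_{\mathcal{X}} \; \mathrm{reg}(\mathcal{X})
  \quad \text{s.t.} \quad
  \bigl\| P_\Omega(\mathcal{X} - \mathcal{T}) \bigr\|_F \le \delta
```

Here $P_\Omega$ keeps only the observed entries of the data tensor $\mathcal{T}$, and $\mathrm{reg}(\cdot)$ stands for the low-rank-promoting term.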
no code implementations • CVPR 2017 • Tatsuya Yokota, Hidekata Hontani
Tensor completion has attracted attention because of its promising performance and generality.
no code implementations • 25 May 2015 • Tatsuya Yokota, Qibin Zhao, Andrzej Cichocki
The proposed method offers significant advantages, owing to the integration of smooth PARAFAC decomposition for incomplete tensors and efficient model selection to minimize the tensor rank.