1 code implementation • 19 Oct 2023 • Yuya Yoshikawa, Tomoharu Iwata
To improve the faithfulness of explanations, we propose insertion/deletion metric-aware explanation-based optimization (ID-ExpO), which optimizes differentiable predictors to improve both the insertion and deletion scores of their explanations while maintaining predictive accuracy.
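The insertion and deletion scores measure how the prediction changes as the most-attributed features are revealed or removed. Below is a minimal sketch of an insertion/deletion-aware training objective for a differentiable predictor; the gradient-based attribution, masking scheme, and trade-off weight `lam` are assumptions for illustration, not the paper's exact formulation.

```python
import torch
import torch.nn as nn

def deletion_score(model, x, attr, steps=10):
    """Average output as the most-attributed features are progressively
    zeroed out; a lower score indicates a more faithful explanation."""
    order = attr.argsort(descending=True)
    step = max(1, x.numel() // steps)
    masked, outs = x, []
    for i in range(steps):
        masked = masked.clone()
        masked[order[i * step:(i + 1) * step]] = 0.0
        outs.append(model(masked.unsqueeze(0)).squeeze())
    return torch.stack(outs).mean()

def insertion_score(model, x, attr, steps=10):
    """Average output as the most-attributed features are progressively
    restored from a zero baseline; a higher score is better."""
    order = attr.argsort(descending=True)
    step = max(1, x.numel() // steps)
    revealed, outs = torch.zeros_like(x), []
    for i in range(steps):
        revealed = revealed.clone()
        idx = order[i * step:(i + 1) * step]
        revealed[idx] = x[idx]
        outs.append(model(revealed.unsqueeze(0)).squeeze())
    return torch.stack(outs).mean()

model = nn.Sequential(nn.Linear(20, 32), nn.ReLU(), nn.Linear(32, 1))
x, y = torch.randn(20, requires_grad=True), torch.randn(())
pred = model(x.unsqueeze(0)).squeeze()
attr = torch.autograd.grad(pred, x, create_graph=True)[0].abs()  # gradient attribution

lam = 0.1  # assumed trade-off between accuracy and faithfulness
loss = (pred - y) ** 2 + lam * (deletion_score(model, x, attr)
                                - insertion_score(model, x, attr))
loss.backward()  # trains the predictor so its explanations score well
```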
no code implementations • 15 Aug 2023 • Yuya Yoshikawa, Yutaro Shigeto, Masashi Shimbo, Akikazu Takeuchi
The Meta Video Dataset (MetaVD) provides annotated relations between action classes in major datasets for human action recognition in videos.
1 code implementation • CVPR 2023 • Yutaro Shigeto, Masashi Shimbo, Yuya Yoshikawa, Akikazu Takeuchi
Barlow Twins and VICReg are self-supervised representation learning models that use regularizers to decorrelate features.
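For reference, the Barlow Twins regularizer drives the cross-correlation matrix between two views' embeddings toward the identity: the diagonal term encourages invariance across views, and the off-diagonal term decorrelates features. A minimal sketch (the weight `lam` follows the paper's reported default, but the snippet is illustrative):

```python
import torch

def barlow_twins_loss(z1, z2, lam=5e-3):
    """Cross-correlation of standardized embeddings from two views;
    push the diagonal to 1 (invariance) and off-diagonals to 0
    (decorrelation)."""
    z1 = (z1 - z1.mean(0)) / z1.std(0)
    z2 = (z2 - z2.mean(0)) / z2.std(0)
    c = (z1.T @ z2) / z1.shape[0]
    on_diag = (torch.diagonal(c) - 1).pow(2).sum()
    off_diag = (c - torch.diag(torch.diagonal(c))).pow(2).sum()
    return on_diag + lam * off_diag

z1, z2 = torch.randn(128, 64), torch.randn(128, 64)  # embeddings of two augmented views
print(barlow_twins_loss(z1, z2))
```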
1 code implementation • IEEE Transactions on Neural Networks and Learning Systems 2021 • Yuya Yoshikawa, Tomoharu Iwata
In the proposed model, both the prediction and the explanation for each sample are produced by an easy-to-interpret locally linear model.
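A minimal sketch of this idea, assuming a small network that generates per-sample linear coefficients (the architecture and names are illustrative, not the paper's):

```python
import torch
import torch.nn as nn

class LocallyLinear(nn.Module):
    def __init__(self, dim):
        super().__init__()
        # weight generator: maps each sample to its own linear coefficients
        self.generator = nn.Sequential(
            nn.Linear(dim, 64), nn.ReLU(), nn.Linear(64, dim))

    def forward(self, x):
        w = self.generator(x)  # per-sample weights double as the explanation
        return (w * x).sum(dim=-1), w

model = LocallyLinear(dim=10)
x = torch.randn(4, 10)
y_hat, explanation = model(x)  # each row of `explanation` gives feature contributions
```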
no code implementations • 7 Dec 2021 • Tomoharu Iwata, Yuya Yoshikawa
To improve interpretability, it is important to reduce the number of examples in the explanation model.
1 code implementation • Computer Vision and Image Understanding 2021 • Yuya Yoshikawa, Yutaro Shigeto, Akikazu Takeuchi
To realize this solution, we constructed a meta video dataset, referred to as MetaVD, from existing datasets for human action recognition.
no code implementations • 3 Jul 2020 • Yuya Yoshikawa, Tomoharu Iwata
In the proposed model, both the prediction and the explanation for each sample are produced by an easy-to-interpret locally linear model.
no code implementations • 13 Mar 2020 • Yuya Yoshikawa, Tomoharu Iwata
Additionally, the prediction is interpretable because it is obtained as the inner product between the simplified representations and the sparse weights, of which only a small number are selected by the gate module in NGSLL.
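A minimal sketch of such a gated, sparse, locally linear prediction; the straight-through top-k gate below is an assumed stand-in for the paper's gate module:

```python
import torch
import torch.nn as nn

def k_hot_gate(scores, k):
    """k-hot mask over features; the softmax terms let gradients
    pass straight through the hard selection (assumed trick)."""
    topk = scores.topk(k, dim=-1).indices
    hard = torch.zeros_like(scores).scatter(-1, topk, 1.0)
    soft = scores.softmax(-1)
    return hard + soft - soft.detach()

dim, k = 10, 3
weight_net = nn.Linear(dim, dim)  # generates dense per-sample weights
score_net = nn.Linear(dim, dim)   # scores features for the gate

x = torch.randn(4, dim)
w = weight_net(x) * k_hot_gate(score_net(x), k)  # only k nonzero weights per sample
y_hat = (w * x).sum(-1)  # prediction = inner product of sparse weights and features
```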
no code implementations • LREC 2020 • Yutaro Shigeto, Yuya Yoshikawa, Jiaqing Lin, Akikazu Takeuchi
Each caption in our dataset describes a video in the form of "who does what and where."
1 code implementation • 12 Apr 2018 • Yuya Yoshikawa, Jiaqing Lin, Akikazu Takeuchi
A new large-scale video dataset for human action recognition, called STAIR Actions, is introduced.
no code implementations • 1 Feb 2018 • Yuya Yoshikawa, Yusaku Imai
In this paper, we propose a nonparametric delayed feedback model for CVR prediction that represents the distribution of the time delay without assuming a parametric distribution, such as an exponential or Weibull distribution.
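As an illustration of the nonparametric idea, the sketch below estimates the delay's survival function with a Gaussian kernel density estimate instead of an exponential or Weibull fit; the toy data and the down-weighting use are assumptions, not the paper's exact model:

```python
import numpy as np
from scipy.stats import norm

# Toy conversion delays (days between click and conversion).
delays = np.random.default_rng(0).gamma(2.0, 3.0, size=1000)

def kde_survival(t, delays, bandwidth=1.0):
    """P(delay > t) under a Gaussian KDE of observed delays,
    with no parametric assumption on the delay distribution."""
    return norm.sf((t[:, None] - delays[None, :]) / bandwidth).mean(axis=1)

elapsed = np.array([1.0, 5.0, 20.0])  # time since click, no conversion yet
p_still_coming = kde_survival(elapsed, delays)
print(p_still_coming)  # high values: do not yet treat the sample as a negative
```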
no code implementations • 11 Aug 2017 • Yuya Yoshikawa
In this problem, each instance, represented by a feature vector, belongs to at least one group.
1 code implementation • ACL 2017 • Yuya Yoshikawa, Yutaro Shigeto, Akikazu Takeuchi
In recent years, automatic generation of image descriptions (captions), that is, image captioning, has attracted a great deal of attention.
no code implementations • NeurIPS 2015 • Yuya Yoshikawa, Tomoharu Iwata, Hiroshi Sawada, Takeshi Yamada
We propose a kernel-based method for finding a matching between instances across different domains, such as multilingual documents and images with annotations.
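A minimal sketch of the matching step, assuming both domains have already been mapped into a shared space (the random projections below stand in for the learned, kernelized mappings):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(0)
X = rng.normal(size=(8, 50))  # domain 1 instances (e.g., English documents)
Y = rng.normal(size=(8, 30))  # domain 2 instances (e.g., French documents)

# Placeholder projections into a shared latent space.
Zx = X @ rng.normal(size=(50, 10))
Zy = Y @ rng.normal(size=(30, 10))

def rbf_kernel(A, B, gamma=0.1):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

sim = rbf_kernel(Zx, Zy)                  # cross-domain kernel similarities
rows, cols = linear_sum_assignment(-sim)  # one-to-one matching, max total similarity
print(list(zip(rows.tolist(), cols.tolist())))
```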
no code implementations • NeurIPS 2014 • Yuya Yoshikawa, Tomoharu Iwata, Hiroshi Sawada
With the latent SMM, a latent vector is associated with each vocabulary term, and each document is represented as a distribution of the latent vectors for words appearing in the document.
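A minimal sketch of this representation, with random latent vectors standing in for the learned ones; comparing documents by the means of their word vectors corresponds to a kernel mean embedding with a linear kernel:

```python
import numpy as np

rng = np.random.default_rng(1)
vocab = {"neural": 0, "kernel": 1, "video": 2, "action": 3}
latent = rng.normal(size=(len(vocab), 5))  # one latent vector per vocabulary term

def doc_embedding(words):
    """Mean of the latent vectors of a document's words: the kernel
    mean embedding of the document under a linear kernel."""
    idx = [vocab[w] for w in words if w in vocab]
    return latent[idx].mean(axis=0)

d1 = doc_embedding(["neural", "kernel", "kernel"])
d2 = doc_embedding(["video", "action"])
print(float(d1 @ d2))  # similarity between the two documents
```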