1 code implementation • 17 Oct 2024 • Takeru Miyato, Sindy Löwe, Andreas Geiger, Max Welling
It has long been known in both neuroscience and AI that "binding" between neurons leads to a form of competitive learning where representations are compressed in order to represent more abstract concepts in deeper layers of the network.
1 code implementation • 16 Oct 2023 • Takeru Miyato, Bernhard Jaeger, Max Welling, Andreas Geiger
As transformers are equivariant to the permutation of input tokens, encoding the positional information of tokens is necessary for many tasks.
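Because self-attention treats its input as an unordered set, positional information must be injected explicitly. As a point of reference only (not necessarily the encoding this paper proposes), here is a minimal sketch of the classic fixed sinusoidal encoding:

```python
import numpy as np

def sinusoidal_positions(num_tokens: int, dim: int) -> np.ndarray:
    """Classic fixed sinusoidal positional encoding (a standard baseline,
    not this paper's mechanism): interleaved sines and cosines at
    geometrically spaced frequencies."""
    positions = np.arange(num_tokens)[:, None]                      # (T, 1)
    freqs = np.exp(-np.log(10000.0) * np.arange(0, dim, 2) / dim)   # (dim/2,)
    angles = positions * freqs[None, :]                             # (T, dim/2)
    enc = np.zeros((num_tokens, dim))
    enc[:, 0::2] = np.sin(angles)
    enc[:, 1::2] = np.cos(angles)
    return enc

tokens = np.random.randn(16, 64)                 # toy token embeddings
tokens = tokens + sinusoidal_positions(16, 64)   # break permutation symmetry
```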
no code implementations • 29 May 2023 • Masanori Koyama, Kenji Fukumizu, Kohei Hayashi, Takeru Miyato
Symmetry learning has proven to be an effective approach for extracting the hidden structure of data, with the concept of the equivariance relation playing a central role.
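As a concrete instance of the equivariance relation f(g · x) = g · f(x), a minimal NumPy check that circular convolution commutes with cyclic shifts (an illustration of the concept, not this paper's method):

```python
import numpy as np

def f(x, kernel):
    """Circular convolution: a map that is equivariant to cyclic shifts."""
    n = len(x)
    return np.array([sum(kernel[j] * x[(i - j) % n] for j in range(len(kernel)))
                     for i in range(n)])

g = lambda x: np.roll(x, 1)   # the group action: cyclic shift by one step

x = np.random.randn(8)
k = np.array([0.25, 0.5, 0.25])
assert np.allclose(f(g(x), k), g(f(x, k)))   # f(g . x) == g . f(x)
```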
no code implementations • 13 Oct 2022 • Masanori Koyama, Takeru Miyato, Kenji Fukumizu
Recent years have witnessed the effectiveness of contrastive learning in obtaining representations of a dataset that are useful for interpretation and downstream tasks.
1 code implementation • 12 Oct 2022 • Takeru Miyato, Masanori Koyama, Kenji Fukumizu
In this study, we present meta-sequential prediction (MSP), an unsupervised framework for learning symmetries from time sequences of length at least three.
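A hedged sketch of the idea as the abstract describes it: fit a latent transition from the first two observations, then score it by predicting the third. The names `encoder`/`decoder` and the linear-transition form are illustrative assumptions, not the paper's exact formulation:

```python
import torch

def msp_loss(encoder, decoder, s1, s2, s3):
    """Sketch of meta-sequential prediction under a linear-transition
    assumption: estimate M from the first two latents, then test it by
    rolling the symmetry forward to predict the third observation."""
    z1, z2 = encoder(s1), encoder(s2)        # (B, d) latent codes
    M = torch.linalg.pinv(z1) @ z2           # least-squares fit: z1 @ M ~ z2
    z3_pred = z2 @ M                         # apply the learned symmetry again
    return ((decoder(z3_pred) - s3) ** 2).mean()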
no code implementations • 15 Nov 2021 • Masanori Koyama, Kentaro Minami, Takeru Miyato, Yarin Gal
In contrastive representation learning, a data representation is trained so that it can discriminate between image instances even when the images are altered by augmentations.
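The instance-discrimination objective this sentence describes is commonly implemented as an InfoNCE loss; a generic sketch (not necessarily this paper's exact objective):

```python
import torch
import torch.nn.functional as F

def info_nce(z1, z2, tau=0.1):
    """Instance discrimination: row i of z1 (one augmented view) must match
    row i of z2 (another view of the same image) against all other rows."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / tau            # (B, B) cosine-similarity matrix
    labels = torch.arange(z1.size(0))     # positives lie on the diagonal
    return F.cross_entropy(logits, labels)
```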
no code implementations • NeurIPS 2019 • Amir Najafi, Shin-ichi Maeda, Masanori Koyama, Takeru Miyato
What is the role of unlabeled data in an inference problem, when the presumed underlying distribution is adversarially perturbed?
no code implementations • ICLR 2019 • Ken Nakanishi, Shin-ichi Maeda, Takeru Miyato, Masanori Koyama
We propose Adaptive Sample-space & Adaptive Probability (ASAP) coding, an efficient neural-network-based method for lossy data compression.
1 code implementation • 26 Nov 2018 • Ryohei Suzuki, Masanori Koyama, Takeru Miyato, Taizan Yonetsuji, Huachun Zhu
We present a novel CNN-based image editing strategy that allows the user to change the semantic information of an image over an arbitrary region by manipulating the feature-space representation of the image in a trained GAN model.
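A hedged sketch of the general strategy the abstract describes: blend intermediate generator feature maps under a spatial mask, then re-render. The `g_head`/`g_tail` split and the blending rule are illustrative assumptions, not the paper's exact procedure:

```python
import torch

def edit_region(g_head, g_tail, z_src, z_edit, mask):
    """Spatial feature blending in a trained GAN generator (sketch):
    `g_head` maps latents to an intermediate feature map, `g_tail`
    renders features to an image; `mask` is a (1, 1, H, W) spatial
    mask selecting the region whose semantics should change."""
    h_src = g_head(z_src)                        # features of the original image
    h_edit = g_head(z_edit)                      # features carrying new semantics
    h_mix = mask * h_edit + (1 - mask) * h_src   # swap features only in the region
    return g_tail(h_mix)                         # re-render the edited image
```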
no code implementations • 16 May 2018 • Ken Nakanishi, Shin-ichi Maeda, Takeru Miyato, Daisuke Okanohara
This study presents a new lossy image compression method that utilizes the multi-scale features of natural images.
38 code implementations • ICLR 2018 • Takeru Miyato, Toshiki Kataoka, Masanori Koyama, Yuichi Yoshida
One of the challenges in the study of generative adversarial networks is the instability of their training.
Ranked #24 on Image Generation on STL-10
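The spectral normalization proposed in this entry rescales each weight matrix by an estimate of its largest singular value, obtained cheaply by power iteration; a minimal sketch:

```python
import torch
import torch.nn.functional as F

def spectral_normalize(W, u, n_iters=1, eps=1e-12):
    """One step of power iteration estimating the largest singular value
    sigma of W, then dividing W by it. `u` is a persistent (out_dim,)
    vector carried across training steps so one iteration suffices."""
    for _ in range(n_iters):
        v = F.normalize(W.t() @ u, dim=0, eps=eps)   # right singular direction
        u = F.normalize(W @ v, dim=0, eps=eps)       # left singular direction
    sigma = u @ W @ v                                # estimated spectral norm
    return W / sigma, u
```

PyTorch ships a maintained implementation of this technique as `torch.nn.utils.spectral_norm`.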
12 code implementations • ICLR 2018 • Takeru Miyato, Masanori Koyama
We propose a novel, projection-based way to incorporate the conditional information into the discriminator of GANs that respects the role of the conditional information in the underlying probabilistic model.
Ranked #15 on Conditional Image Generation on CIFAR-10
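The projection idea can be written as D(x, y) = ⟨v_y, φ(x)⟩ + ψ(φ(x)), where φ is an image feature extractor; a minimal head with φ omitted:

```python
import torch
import torch.nn as nn

class ProjectionDiscriminator(nn.Module):
    """Projection discriminator head (sketch): an inner product between
    a learned class embedding v_y and the image features phi(x), plus an
    unconditional score psi(phi(x))."""
    def __init__(self, feat_dim, num_classes):
        super().__init__()
        self.embed = nn.Embedding(num_classes, feat_dim)  # v_y
        self.psi = nn.Linear(feat_dim, 1)                 # unconditional term

    def forward(self, phi_x, y):
        proj = (self.embed(y) * phi_x).sum(dim=1, keepdim=True)
        return proj + self.psi(phi_x)
```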
no code implementations • 20 Nov 2017 • Jiren Jin, Richard G. Calland, Takeru Miyato, Brian K. Vogel, Hideki Nakayama
Unsupervised domain adaptation (UDA) aims to utilize labeled data from a source domain to learn a model that generalizes to a target domain of unlabeled data.
no code implementations • 31 May 2017 • Yuichi Yoshida, Takeru Miyato
We investigate the generalizability of deep learning in terms of its sensitivity to input perturbations.
14 code implementations • 13 Apr 2017 • Takeru Miyato, Shin-ichi Maeda, Masanori Koyama, Shin Ishii
In our experiments, we applied virtual adversarial training (VAT) to supervised and semi-supervised learning tasks on multiple benchmark datasets.
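A condensed sketch of the VAT regularizer: one power-iteration step approximates the virtual adversarial direction, and the penalty is the change in the predictive distribution it induces (simplified relative to the paper's full algorithm):

```python
import torch
import torch.nn.functional as F

def vat_loss(model, x, xi=1e-6, eps=2.0):
    """Virtual adversarial training (sketch): find the perturbation within
    an eps-ball that most changes the model's predictions, then penalize
    that change with a KL divergence. No labels are needed."""
    with torch.no_grad():
        p = F.softmax(model(x), dim=1)               # current predictions
    d = torch.randn_like(x)
    d = xi * F.normalize(d.flatten(1), dim=1).view_as(x)
    d.requires_grad_(True)
    kl = F.kl_div(F.log_softmax(model(x + d), dim=1), p, reduction="batchmean")
    grad = torch.autograd.grad(kl, d)[0]             # one power-iteration step
    r_adv = eps * F.normalize(grad.flatten(1), dim=1).view_as(x)
    out = F.log_softmax(model(x + r_adv.detach()), dim=1)
    return F.kl_div(out, p, reduction="batchmean")
```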
2 code implementations • ICML 2017 • Weihua Hu, Takeru Miyato, Seiya Tokui, Eiichi Matsumoto, Masashi Sugiyama
Learning discrete representations of data is a central machine learning task because of the compactness of the representations and ease of interpretation.
Ranked #3 on Unsupervised Image Classification on SVHN (using extra training data)
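A sketch of the information-maximization term such a method can use: reward confident per-sample assignments (low conditional entropy) spread evenly across the discrete codes (high marginal entropy). Any self-augmentation regularizer is omitted here:

```python
import torch
import torch.nn.functional as F

def mutual_info_objective(logits):
    """Information maximization for discrete representations (sketch):
    I(X; Y) = H(Y) - H(Y|X), estimated from a batch of assignment logits."""
    p = F.softmax(logits, dim=1)                       # (B, K) soft assignments
    p_marg = p.mean(dim=0)                             # average code usage
    h_y = -(p_marg * torch.log(p_marg + 1e-12)).sum()  # marginal entropy H(Y)
    h_y_given_x = -(p * torch.log(p + 1e-12)).sum(dim=1).mean()  # H(Y|X)
    return h_y - h_y_given_x                           # maximize this quantity
```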
4 code implementations • 25 May 2016 • Takeru Miyato, Andrew M. Dai, Ian Goodfellow
We extend adversarial and virtual adversarial training to the text domain by applying perturbations to the word embeddings in a recurrent neural network rather than to the original input itself.
Ranked #22 on Sentiment Analysis on IMDb
Tasks: General Classification, Semi-Supervised Text Classification (+2)
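The key move of this entry is to perturb the continuous word embeddings rather than the discrete tokens; a hedged sketch of the adversarial-loss variant, where `model_from_embeds` is a hypothetical function mapping embeddings to logits:

```python
import torch
import torch.nn.functional as F

def adv_text_loss(model_from_embeds, embeds, labels, eps=1.0):
    """Adversarial training in embedding space (sketch): input tokens are
    discrete, so the L2-normalized gradient perturbation is applied to the
    word embeddings instead of the raw input."""
    embeds = embeds.detach().requires_grad_(True)
    loss = F.cross_entropy(model_from_embeds(embeds), labels)
    grad = torch.autograd.grad(loss, embeds)[0]
    r_adv = eps * F.normalize(grad.flatten(1), dim=1).view_as(embeds)
    return F.cross_entropy(model_from_embeds(embeds + r_adv.detach()), labels)
```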
5 code implementations • 2 Jul 2015 • Takeru Miyato, Shin-ichi Maeda, Masanori Koyama, Ken Nakae, Shin Ishii
We propose local distributional smoothness (LDS), a new notion of smoothness for statistical models that can be used as a regularization term to promote the smoothness of the model distribution.
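Reconstructed from the abstract (and the later VAT formulation above), the LDS regularizer can be sketched as penalizing how much the model distribution changes under the worst-case local perturbation of size at most ε:

```latex
\mathrm{LDS}(x, \theta)
  = - D_{\mathrm{KL}}\!\left( p(y \mid x, \theta) \,\middle\|\, p(y \mid x + r_{\mathrm{vadv}}, \theta) \right),
\qquad
r_{\mathrm{vadv}}
  = \operatorname*{arg\,max}_{\|r\|_2 \le \epsilon}
    D_{\mathrm{KL}}\!\left( p(y \mid x, \theta) \,\middle\|\, p(y \mid x + r, \theta) \right)
```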