Search Results for author: Takeru Miyato

Found 18 papers, 10 papers with code

Artificial Kuramoto Oscillatory Neurons

1 code implementation • 17 Oct 2024 • Takeru Miyato, Sindy Löwe, Andreas Geiger, Max Welling

It has long been known in both neuroscience and AI that "binding" between neurons leads to a form of competitive learning where representations are compressed in order to represent more abstract concepts in deeper layers of the network.

Adversarial Robustness • Object Discovery +1
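
The "oscillatory neurons" of the title build on the classical Kuramoto model of coupled phase oscillators. As grounding for the binding idea only, here is a minimal NumPy sketch of standard Kuramoto dynamics, not the paper's AKOrN layer; the coupling matrix, step size, and synchrony measure are textbook choices, not the paper's settings:

```python
import numpy as np

def kuramoto_step(theta, omega, K, dt=0.01):
    """One Euler step of the classical Kuramoto model:
    d(theta_i)/dt = omega_i + sum_j K[i, j] * sin(theta_j - theta_i).
    """
    phase_diff = theta[None, :] - theta[:, None]     # (n, n): theta_j - theta_i
    coupling = (K * np.sin(phase_diff)).sum(axis=1)
    return theta + dt * (omega + coupling)

rng = np.random.default_rng(0)
n = 8
theta = rng.uniform(0, 2 * np.pi, n)    # initial phases
omega = rng.normal(0.0, 0.1, n)         # natural frequencies
K = np.full((n, n), 0.5)                # uniform coupling strength
for _ in range(1000):
    theta = kuramoto_step(theta, omega, K)
# The order parameter r approaches 1 as the oscillators synchronize ("bind").
print(abs(np.exp(1j * theta).mean()))
```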

GTA: A Geometry-Aware Attention Mechanism for Multi-View Transformers

1 code implementation • 16 Oct 2023 • Takeru Miyato, Bernhard Jaeger, Max Welling, Andreas Geiger

As transformers are equivariant to permutations of input tokens, encoding the positional information of tokens is necessary for many tasks.

Novel View Synthesis
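
As a toy illustration of geometry-aware attention in general, not the paper's exact GTA mechanism, the sketch below rotates queries and keys by a per-token angle so that attention logits depend only on relative geometry, in the spirit of rotary-style encodings; all shapes and function names are assumptions:

```python
import torch

def rotate_pairs(x, angles):
    """Rotate consecutive feature pairs of x by a per-token angle.

    x: (n, d) token features with d even; angles: (n,) one angle per token.
    """
    n, d = x.shape
    pairs = x.view(n, d // 2, 2)
    cos, sin = angles.cos()[:, None], angles.sin()[:, None]
    x0, x1 = pairs[..., 0], pairs[..., 1]
    rotated = torch.stack((cos * x0 - sin * x1, sin * x0 + cos * x1), dim=-1)
    return rotated.view(n, d)

def geometry_aware_attention(q, k, v, angles):
    # Rotating q and k by each token's own angle makes the logit q_i . k_j
    # depend only on the relative geometry (angles[j] - angles[i]).
    q, k = rotate_pairs(q, angles), rotate_pairs(k, angles)
    attn = torch.softmax(q @ k.T / q.shape[-1] ** 0.5, dim=-1)
    return attn @ v

n, d = 6, 8
q, k, v = (torch.randn(n, d) for _ in range(3))
out = geometry_aware_attention(q, k, v, angles=torch.rand(n))
```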

Neural Fourier Transform: A General Approach to Equivariant Representation Learning

no code implementations • 29 May 2023 • Masanori Koyama, Kenji Fukumizu, Kohei Hayashi, Takeru Miyato

Symmetry learning has proven to be an effective approach for extracting the hidden structure of data, with the concept of the equivariance relation playing a central role.

Representation Learning
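
A classical special case helps ground the equivariance relation named above: the discrete Fourier transform maps cyclic shifts of a signal to diagonal phase multiplications in frequency space. A short NumPy check of that identity, as illustrative background rather than the paper's learned transform:

```python
import numpy as np

rng = np.random.default_rng(0)
n, s = 8, 3                       # signal length, shift amount
x = rng.normal(size=n)
gx = np.roll(x, s)                # group action g: cyclic shift by s

# Equivariance relation F(g . x) = rho(g) * F(x), where rho(g) is a
# diagonal matrix of phases acting on the Fourier coefficients.
Fx, Fgx = np.fft.fft(x), np.fft.fft(gx)
rho = np.exp(-2j * np.pi * s * np.arange(n) / n)
print(np.allclose(Fgx, rho * Fx))  # True
```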

Invariance-adapted decomposition and Lasso-type contrastive learning

no code implementations • 13 Oct 2022 • Masanori Koyama, Takeru Miyato, Kenji Fukumizu

Recent years have witnessed the effectiveness of contrastive learning in obtaining representations of data that are useful for interpretation and for downstream tasks.

Contrastive Learning • Vocal Bursts Type Prediction

Unsupervised Learning of Equivariant Structure from Sequences

1 code implementation • 12 Oct 2022 • Takeru Miyato, Masanori Koyama, Kenji Fukumizu

In this study, we present meta-sequential prediction (MSP), an unsupervised framework for learning symmetry from time sequences of length at least three.

Decoder
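
To see why length three is the minimum, consider a linear toy version of sequential prediction: the first two observations determine a transition operator, and the held-out third observation validates it. A minimal NumPy sketch, where the linear operator and batch setup are illustrative assumptions rather than the paper's neural model:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 4
M = np.linalg.qr(rng.normal(size=(d, d)))[0]   # hidden orthogonal transition
x1 = rng.normal(size=(d, 16))                  # 16 initial states as columns
x2, x3 = M @ x1, M @ M @ x1                    # sequence of length three

# Fit the transition from the first pair of observations ...
M_hat = x2 @ np.linalg.pinv(x1)
# ... and validate it on the held-out third frame: two frames determine
# the operator, the third tests it, hence length at least three.
print(np.allclose(M_hat @ x2, x3))             # True
```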

Contrastive Representation Learning with Trainable Augmentation Channel

no code implementations • 15 Nov 2021 • Masanori Koyama, Kentaro Minami, Takeru Miyato, Yarin Gal

In contrastive representation learning, a data representation is trained so that it can classify image instances even when the images are altered by augmentations.

Representation Learning
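
For context on the instance-classification view described above, here is a generic InfoNCE-style contrastive loss in PyTorch; this is a standard sketch of contrastive representation learning and does not model the paper's trainable augmentation channel:

```python
import torch
import torch.nn.functional as F

def info_nce(z1, z2, temperature=0.1):
    """Each embedding z1[i] must pick out its own instance z2[i] from the
    batch, i.e. classify image instances across augmentations.

    z1, z2: (batch, dim) embeddings of two augmented views of the same images.
    """
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.T / temperature       # (batch, batch) cosine similarities
    targets = torch.arange(z1.shape[0])    # positives lie on the diagonal
    return F.cross_entropy(logits, targets)

z1, z2 = torch.randn(32, 128), torch.randn(32, 128)
print(info_nce(z1, z2))
```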

Robustness to Adversarial Perturbations in Learning from Incomplete Data

no code implementations • NeurIPS 2019 • Amir Najafi, Shin-ichi Maeda, Masanori Koyama, Takeru Miyato

What is the role of unlabeled data in an inference problem, when the presumed underlying distribution is adversarially perturbed?

Spatially Controllable Image Synthesis with Internal Representation Collaging

1 code implementation • 26 Nov 2018 • Ryohei Suzuki, Masanori Koyama, Takeru Miyato, Taizan Yonetsuji, Huachun Zhu

We present a novel CNN-based image editing strategy that allows the user to change the semantic information of an image over an arbitrary region by manipulating the feature-space representation of the image in a trained GAN model.

Image Generation
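
The core editing move can be sketched as region-wise blending of intermediate feature maps from a trained GAN. In this minimal PyTorch illustration the mask, layer choice, and tensor sizes are assumptions; the actual method decodes the edited features with the remaining generator layers:

```python
import torch

def collage_features(feat_a, feat_b, mask):
    """Replace the masked region of image A's feature map with image B's.

    feat_a, feat_b: (c, h, w) intermediate feature maps from a trained GAN;
    mask: (h, w) in {0, 1}, where 1 marks the region to edit.
    """
    return feat_a * (1 - mask) + feat_b * mask

c, h, w = 8, 16, 16
feat_a, feat_b = torch.randn(c, h, w), torch.randn(c, h, w)
mask = torch.zeros(h, w)
mask[4:12, 4:12] = 1.0    # an arbitrary user-chosen region
blended = collage_features(feat_a, feat_b, mask)
# Decoding `blended` with the remaining generator layers would yield the
# spatially edited image.
```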

Neural Multi-scale Image Compression

no code implementations • 16 May 2018 • Ken Nakanishi, Shin-ichi Maeda, Takeru Miyato, Daisuke Okanohara

This study presents a new lossy image compression method that utilizes the multi-scale features of natural images.

Image Compression

cGANs with Projection Discriminator

12 code implementations • ICLR 2018 • Takeru Miyato, Masanori Koyama

We propose a novel, projection-based way to incorporate conditional information into the discriminator of GANs that respects the role of the conditional information in the underlying probabilistic model.

Conditional Image Generation • Super-Resolution
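
The projection formulation itself is compact: D(x, y) = v_y · φ(x) + ψ(φ(x)), so the conditional information enters only through an inner product with a learned class embedding. A minimal PyTorch sketch, where the stand-in feature extractor and layer sizes are illustrative assumptions:

```python
import torch
import torch.nn as nn

class ProjectionDiscriminator(nn.Module):
    """D(x, y) = v_y . phi(x) + psi(phi(x)): class-conditional information
    enters only via a projection onto a learned class embedding v_y."""

    def __init__(self, num_classes, feat_dim=128):
        super().__init__()
        self.phi = nn.Sequential(                 # stand-in feature extractor
            nn.Flatten(), nn.Linear(3 * 32 * 32, feat_dim), nn.ReLU())
        self.embed = nn.Embedding(num_classes, feat_dim)  # v_y
        self.psi = nn.Linear(feat_dim, 1)                 # unconditional head

    def forward(self, x, y):
        h = self.phi(x)                                   # (batch, feat_dim)
        proj = (self.embed(y) * h).sum(dim=1, keepdim=True)
        return self.psi(h) + proj                         # (batch, 1) logit

d = ProjectionDiscriminator(num_classes=10)
logit = d(torch.randn(4, 3, 32, 32), torch.tensor([0, 3, 5, 9]))
```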

Parameter Reference Loss for Unsupervised Domain Adaptation

no code implementations • 20 Nov 2017 • Jiren Jin, Richard G. Calland, Takeru Miyato, Brian K. Vogel, Hideki Nakayama

Unsupervised domain adaptation (UDA) aims to utilize labeled data from a source domain to learn a model that generalizes to a target domain of unlabeled data.

Model Selection • Unsupervised Domain Adaptation

Spectral Norm Regularization for Improving the Generalizability of Deep Learning

no code implementations • 31 May 2017 • Yuichi Yoshida, Takeru Miyato

We investigate the generalizability of deep learning models through their sensitivity to input perturbations.

Deep Learning
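
The regularizer penalizes each weight matrix's spectral norm (largest singular value), which bounds the layer's sensitivity to input perturbations. A standard power-iteration estimate is sketched below; the iteration count and the squared-penalty note are conventional choices, not necessarily the authors' exact setup:

```python
import torch
import torch.nn.functional as F

def spectral_norm(W, n_iters=100):
    """Estimate the largest singular value of W by power iteration."""
    v = torch.randn(W.shape[1])
    for _ in range(n_iters):
        u = F.normalize(W @ v, dim=0)     # left singular vector estimate
        v = F.normalize(W.T @ u, dim=0)   # right singular vector estimate
    return u @ W @ v

W = torch.randn(64, 128)
# The power-iteration estimate should closely match the exact value.
print(spectral_norm(W), torch.linalg.matrix_norm(W, ord=2))
# As a regularizer, a term like lambda * spectral_norm(W) ** 2 is added
# to the training loss for every weight matrix.
```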

Learning Discrete Representations via Information Maximizing Self-Augmented Training

2 code implementations • ICML 2017 • Weihua Hu, Takeru Miyato, Seiya Tokui, Eiichi Matsumoto, Masashi Sugiyama

Learning discrete representations of data is a central machine learning task because of the compactness of the representations and ease of interpretation.

Ranked #3 on Unsupervised Image Classification on SVHN (using extra training data)

Clustering • Data Augmentation +1
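
The information-maximization half of the objective maximizes the mutual information I(X; Y) = H(Y) − H(Y|X) between inputs and discrete cluster assignments. A minimal PyTorch sketch of that term, with the self-augmented training part omitted and the batch marginal used as the usual estimator:

```python
import torch

def mutual_information(probs, eps=1e-8):
    """I(X; Y) = H(Y) - H(Y|X) for soft cluster assignments.

    probs: (batch, k) softmax outputs p(y | x_i). A high marginal entropy
    H(Y) spreads data across clusters, while a low conditional entropy
    H(Y|X) makes each discrete assignment confident.
    """
    p_y = probs.mean(dim=0)                                   # marginal p(y)
    h_y = -(p_y * (p_y + eps).log()).sum()
    h_y_given_x = -(probs * (probs + eps).log()).sum(dim=1).mean()
    return h_y - h_y_given_x

probs = torch.softmax(torch.randn(32, 10), dim=1)
loss = -mutual_information(probs)   # maximize MI by minimizing its negative
```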

Adversarial Training Methods for Semi-Supervised Text Classification

4 code implementations • 25 May 2016 • Takeru Miyato, Andrew M. Dai, Ian Goodfellow

We extend adversarial and virtual adversarial training to the text domain by applying perturbations to the word embeddings in a recurrent neural network rather than to the original input itself.

General Classification • Semi-Supervised Text Classification +2
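
A minimal PyTorch sketch of the key move: compute the adversarial perturbation on the embedded word sequence rather than on the discrete tokens. The toy model, epsilon, and L2 normalization here are illustrative assumptions in the spirit of the method:

```python
import torch
import torch.nn.functional as F

def adversarial_loss_on_embeddings(model, emb, labels, eps=0.02):
    """Adversarial loss computed in word-embedding space.

    emb: (batch, seq_len, dim) embedded token sequence; the perturbation
    is applied here, not to the discrete tokens themselves.
    """
    emb = emb.detach().requires_grad_(True)
    loss = F.cross_entropy(model(emb), labels)
    grad, = torch.autograd.grad(loss, emb)
    # L2-normalized perturbation of norm eps per example.
    r_adv = eps * grad / (grad.flatten(1).norm(dim=1).view(-1, 1, 1) + 1e-12)
    return F.cross_entropy(model(emb + r_adv), labels)

# Toy stand-in for a text classifier over embedded sequences.
model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(16 * 32, 5))
emb = torch.randn(4, 16, 32)
labels = torch.randint(0, 5, (4,))
print(adversarial_loss_on_embeddings(model, emb, labels))
```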

Distributional Smoothing with Virtual Adversarial Training

5 code implementations • 2 Jul 2015 • Takeru Miyato, Shin-ichi Maeda, Masanori Koyama, Ken Nakae, Shin Ishii

We propose local distributional smoothness (LDS), a new notion of smoothness for statistical models that can be used as a regularization term to promote the smoothness of the model distribution.
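
LDS compares the model's output distribution at x with the distribution at x + r_vadv, the locally most sensitive ("virtual adversarial") direction, which can be approximated without labels by one power-iteration step. A simplified PyTorch sketch; eps, xi, and the single iteration are conventional choices from the VAT literature, not necessarily the paper's exact settings:

```python
import torch
import torch.nn.functional as F

def _l2_normalize(d):
    return d / (d.flatten(1).norm(dim=1).view(-1, 1) + 1e-12)

def lds(model, x, eps=1.0, xi=1e-6):
    """Local distributional smoothness: KL(p(.|x) || p(.|x + r_vadv)),
    where r_vadv is found without labels, hence 'virtual' adversarial.

    x: (batch, features) inputs, labeled or unlabeled.
    """
    with torch.no_grad():
        p = F.softmax(model(x), dim=1)
    # One power-iteration step approximates the most sensitive direction.
    d = _l2_normalize(torch.randn_like(x)).requires_grad_(True)
    kl = F.kl_div(F.log_softmax(model(x + xi * d), dim=1), p,
                  reduction="batchmean")
    grad, = torch.autograd.grad(kl, d)
    r_vadv = eps * _l2_normalize(grad)
    # Penalizing this term smooths the model distribution around x.
    return F.kl_div(F.log_softmax(model(x + r_vadv), dim=1), p,
                    reduction="batchmean")

model = torch.nn.Linear(20, 5)
print(lds(model, torch.randn(8, 20)))
```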
