Search Results for author: Seiya Tokui

Found 7 papers, 3 papers with code

Disentanglement Analysis with Partial Information Decomposition

no code implementations ICLR 2022 Seiya Tokui, Issei Sato

We propose a framework to analyze how multivariate representations disentangle ground-truth generative factors.

Disentanglement

Adversarial Attacks and Defences Competition

1 code implementation • 31 Mar 2018 Alexey Kurakin, Ian Goodfellow, Samy Bengio, Yinpeng Dong, Fangzhou Liao, Ming Liang, Tianyu Pang, Jun Zhu, Xiaolin Hu, Cihang Xie, Jian-Yu Wang, Zhishuai Zhang, Zhou Ren, Alan Yuille, Sangxia Huang, Yao Zhao, Yuzhe Zhao, Zhonglin Han, Junjiajia Long, Yerkebulan Berdibekov, Takuya Akiba, Seiya Tokui, Motoki Abe

To accelerate research on adversarial examples and robustness of machine learning classifiers, Google Brain organized a NIPS 2017 competition that encouraged researchers to develop new methods to generate adversarial examples as well as to develop new ways to defend against them.

BIG-bench Machine Learning
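The competition report above centers on generating adversarial examples. As a hedged illustration (not the competition code), the Fast Gradient Sign Method of Goodfellow et al. is the canonical one-step baseline attack; the toy linear model below is purely for demonstration:

```python
import numpy as np

def fgsm(x, grad_wrt_x, epsilon=0.1):
    """Fast Gradient Sign Method: a one-step L-infinity attack.

    Perturbs each input coordinate by epsilon in the direction of the
    sign of the loss gradient with respect to the input.
    """
    return x + epsilon * np.sign(grad_wrt_x)

# Toy linear classifier: score = w @ x, so the gradient of the score
# with respect to x is simply w (hypothetical example values).
w = np.array([0.5, -1.0, 2.0])
x = np.array([1.0, 1.0, -1.0])
x_adv = fgsm(x, grad_wrt_x=w, epsilon=0.1)
print(x_adv)  # [ 1.1  0.9 -0.9]
```

Real attacks in the competition operated on image pixels and deep network gradients, but the perturbation rule is the same sign-of-gradient step bounded per coordinate.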

Evaluating the Variance of Likelihood-Ratio Gradient Estimators

no code implementations ICML 2017 Seiya Tokui, Issei Sato

The framework yields a natural derivation of the optimal estimator, which can be interpreted as a special case of the likelihood-ratio method, so the optimality of practical variance-reduction techniques can be evaluated against it.
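For context, the likelihood-ratio (score-function, REINFORCE) estimator whose variance this paper studies can be sketched as follows; this is a minimal illustration on a Bernoulli variable, not the paper's framework:

```python
import numpy as np

rng = np.random.default_rng(0)

def likelihood_ratio_grad(f, theta, n_samples=100_000):
    """Likelihood-ratio estimate of d/dtheta E_{x~Bern(theta)}[f(x)].

    Uses grad = E[f(x) * d/dtheta log p(x; theta)], where the Bernoulli
    score function is x/theta - (1 - x)/(1 - theta).
    """
    x = rng.binomial(1, theta, size=n_samples).astype(float)
    score = x / theta - (1.0 - x) / (1.0 - theta)
    return np.mean(f(x) * score)

theta = 0.3
# For f(x) = x we have E[f] = theta, so the exact gradient is 1.
est = likelihood_ratio_grad(lambda x: x, theta)
print(est)  # close to 1.0
```

The estimator is unbiased but can have high variance, which is why analyzing and reducing that variance, the subject of this paper, matters in practice.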

Learning Discrete Representations via Information Maximizing Self-Augmented Training

2 code implementations ICML 2017 Weihua Hu, Takeru Miyato, Seiya Tokui, Eiichi Matsumoto, Masashi Sugiyama

Learning discrete representations of data is a central machine learning task because of the compactness of the representations and ease of interpretation.

Ranked #3 on Unsupervised Image Classification on SVHN (using extra training data)

Clustering, Data Augmentation +1
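The abstract above describes learning discrete representations by maximizing information. As a hedged sketch (an illustration of the mutual-information term used in information-maximizing clustering, not the paper's full IMSAT objective, which also includes a self-augmented training penalty):

```python
import numpy as np

def mutual_info_objective(p):
    """I(X; Y) = H(Y) - H(Y|X) for cluster posteriors p[i, k] = p(y=k | x_i).

    Maximizing this encourages confident per-example assignments
    (low conditional entropy) and balanced cluster usage
    (high marginal entropy).
    """
    eps = 1e-12  # numerical guard against log(0)
    marginal = p.mean(axis=0)
    h_y = -np.sum(marginal * np.log(marginal + eps))
    h_y_given_x = -np.mean(np.sum(p * np.log(p + eps), axis=1))
    return h_y - h_y_given_x

# Confident, balanced assignments score higher than uniform ones
# (hypothetical posteriors for two examples over two clusters).
p_good = np.array([[0.99, 0.01], [0.01, 0.99]])
p_bad = np.array([[0.5, 0.5], [0.5, 0.5]])
print(mutual_info_objective(p_good) > mutual_info_objective(p_bad))  # True
```

In the paper this objective is computed from a neural network's softmax outputs and regularized so predictions are invariant under data augmentation.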

Reparameterization trick for discrete variables

no code implementations • 4 Nov 2016 Seiya Tokui, Issei Sato

Low-variance gradient estimation is crucial for learning directed graphical models parameterized by neural networks, and the reparameterization trick is widely used for models with continuous variables.
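The continuous-variable reparameterization trick that this paper extends to discrete variables can be sketched as follows; this is a minimal Gaussian illustration using a finite-difference derivative in place of autodiff, not the paper's discrete estimator:

```python
import numpy as np

rng = np.random.default_rng(0)

def reparam_grad_mu(f, mu, sigma, n_samples=100_000, h=1e-4):
    """Reparameterized estimate of d/dmu E_{x~N(mu, sigma^2)}[f(x)].

    Writing x = mu + sigma * eps with eps ~ N(0, 1) moves the parameter
    out of the sampling distribution, so the gradient flows through f:
    d/dmu E[f(x)] = E[f'(mu + sigma * eps)].
    """
    noise = rng.standard_normal(n_samples)
    x = mu + sigma * noise
    # Central difference stands in for automatic differentiation here.
    df = (f(x + h) - f(x - h)) / (2 * h)
    return np.mean(df)

# For f(x) = x**2, E[f] = mu^2 + sigma^2, so d/dmu = 2 * mu.
grad = reparam_grad_mu(lambda x: x ** 2, mu=1.5, sigma=1.0)
print(grad)  # close to 3.0
```

Because this trick requires a differentiable sampling path, it does not apply directly to discrete variables, which is the gap the paper addresses.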
