Search Results for author: Ryota Tomioka

Found 30 papers, 10 papers with code

DistIR: An Intermediate Representation and Simulator for Efficient Neural Network Distribution

no code implementations9 Nov 2021 Keshav Santhanam, Siddharth Krishna, Ryota Tomioka, Tim Harris, Matei Zaharia

The rapidly growing size of deep neural network (DNN) models and datasets has given rise to a variety of distribution strategies such as data, tensor-model, pipeline parallelism, and hybrid combinations thereof.
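The data-parallel strategy mentioned above can be sketched in a few lines: each worker computes a gradient on its shard of the batch, and the shard gradients are averaged (an all-reduce) before the update. The model below (linear least squares) and the worker count are illustrative assumptions, not DistIR code.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(8, 3))      # full batch: 8 examples, 3 features
y = rng.normal(size=8)
w = rng.normal(size=3)

def grad(Xs, ys, w):
    """Mean-squared-error gradient on one shard."""
    return 2.0 * Xs.T @ (Xs @ w - ys) / len(ys)

# Split the batch across 4 "workers" and average their gradients.
shards = np.array_split(np.arange(8), 4)
local_grads = [grad(X[idx], y[idx], w) for idx in shards]
g_avg = np.mean(local_grads, axis=0)

# With equal shard sizes, the average of the shard gradients equals
# the full-batch gradient, so data parallelism changes where the work
# happens, not the update itself.
g_full = grad(X, y, w)
print(np.allclose(g_avg, g_full))  # True
```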

Regularized Policies are Reward Robust

no code implementations18 Jan 2021 Hisham Husain, Kamil Ciosek, Ryota Tomioka

Entropic regularization of policies in Reinforcement Learning (RL) is a commonly used heuristic to ensure that the learned policy explores the state-space sufficiently before overfitting to a locally optimal policy.
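A minimal sketch of the heuristic being discussed (the standard entropy-regularized policy, not the paper's construction): the regularized policy is a softmax over action values, pi(a|s) ∝ exp(Q(s, a) / tau), where a larger temperature tau gives a higher-entropy, more exploratory policy. The Q values below are made up for illustration.

```python
import numpy as np

def softmax_policy(q, tau):
    z = q / tau
    z -= z.max()                      # numerical stability
    p = np.exp(z)
    return p / p.sum()

def entropy(p):
    return -np.sum(p * np.log(p))

q = np.array([1.0, 0.5, -0.2])        # hypothetical action values
p_cold = softmax_policy(q, tau=0.1)   # near-greedy policy
p_warm = softmax_policy(q, tau=5.0)   # near-uniform policy

# Raising the temperature raises the policy's entropy.
print(entropy(p_cold) < entropy(p_warm))  # True
```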


On Certifying Non-uniform Bound against Adversarial Attacks

no code implementations15 Mar 2019 Chen Liu, Ryota Tomioka, Volkan Cevher

This work studies the robustness certification problem of neural network models, which aims to find certified adversary-free regions as large as possible around data points.

Continuous Hierarchical Representations with Poincaré Variational Auto-Encoders

3 code implementations NeurIPS 2019 Emile Mathieu, Charline Le Lan, Chris J. Maddison, Ryota Tomioka, Yee Whye Teh

We therefore endow VAEs with a Poincaré ball model of hyperbolic geometry as a latent space and rigorously derive the necessary methods to work with two main Gaussian generalisations on that space.
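For intuition, here is the hyperbolic distance on the Poincaré ball used as the latent space above (this is the standard formula, not code from the paper): d(x, y) = arcosh(1 + 2‖x − y‖² / ((1 − ‖x‖²)(1 − ‖y‖²))) for points with ‖x‖ < 1, ‖y‖ < 1.

```python
import numpy as np

def poincare_distance(x, y):
    sq = np.sum((x - y) ** 2)
    denom = (1.0 - np.sum(x ** 2)) * (1.0 - np.sum(y ** 2))
    return np.arccosh(1.0 + 2.0 * sq / denom)

x = np.array([0.1, 0.2])
y = np.array([0.5, -0.3])

# Distances blow up near the boundary of the ball, which is what makes
# this geometry well suited to embedding tree-like hierarchies.
print(poincare_distance(x, y))                   # symmetric, positive
print(np.isclose(poincare_distance(x, x), 0.0))  # True
```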

Depth and nonlinearity induce implicit exploration for RL

no code implementations29 May 2018 Justas Dauparas, Ryota Tomioka, Katja Hofmann

The question of how to explore, i.e., take actions with uncertain outcomes to learn about possible future rewards, is a key question in reinforcement learning (RL).

Q-Learning Reinforcement Learning

The Mutual Autoencoder: Controlling Information in Latent Code Representations

no code implementations ICLR 2018 Mary Phuong, Max Welling, Nate Kushman, Ryota Tomioka, Sebastian Nowozin

Thus, we decouple the choice of decoder capacity and the latent code dimensionality from the amount of information stored in the code.

Representation Learning

AMPNet: Asynchronous Model-Parallel Training for Dynamic Neural Networks

1 code implementation ICLR 2018 Alexander L. Gaunt, Matthew A. Johnson, Maik Riechert, Daniel Tarlow, Ryota Tomioka, Dimitrios Vytiniotis, Sam Webster

Through an implementation on multi-core CPUs, we show that AMP training converges to the same accuracy as conventional synchronous training algorithms in a similar number of epochs, but utilizes the available hardware more efficiently even for small minibatch sizes, resulting in significantly shorter overall training times.

Multi-Level Variational Autoencoder: Learning Disentangled Representations from Grouped Observations

2 code implementations24 May 2017 Diane Bouchacourt, Ryota Tomioka, Sebastian Nowozin

We would like to learn a representation of the data that decomposes an observation into factors of variation that we can control independently.


Geometry of Optimization and Implicit Regularization in Deep Learning

1 code implementation8 May 2017 Behnam Neyshabur, Ryota Tomioka, Ruslan Salakhutdinov, Nathan Srebro

We argue that the optimization plays a crucial role in generalization of deep learning models through implicit regularization.

Batch Policy Gradient Methods for Improving Neural Conversation Models

no code implementations10 Feb 2017 Kirthevasan Kandasamy, Yoram Bachrach, Ryota Tomioka, Daniel Tarlow, David Carter

We study reinforcement learning of chatbots with recurrent neural network architectures when the rewards are noisy and expensive to obtain.

Chatbot Natural Language Processing +2

QSGD: Communication-Efficient SGD via Gradient Quantization and Encoding

1 code implementation NeurIPS 2017 Dan Alistarh, Demjan Grubic, Jerry Li, Ryota Tomioka, Milan Vojnovic

In this paper, we propose Quantized SGD (QSGD), a family of compression schemes which allow the compression of gradient updates at each node, while guaranteeing convergence under standard assumptions.

Image Classification Quantization +2
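The core QSGD idea can be sketched as follows (simplified from the paper): each gradient coordinate is rounded at random to one of s levels of its magnitude relative to ‖g‖₂, with probabilities chosen so that the quantized gradient is unbiased, E[Q(g)] = g. This is an illustrative single-node sketch; the values of g and s are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def qsgd_quantize(g, s):
    """Stochastically quantize g to s levels per coordinate, unbiasedly."""
    norm = np.linalg.norm(g)
    if norm == 0:
        return np.zeros_like(g)
    level = s * np.abs(g) / norm        # real-valued level in [0, s]
    low = np.floor(level)
    # stochastic rounding: round up with probability (level - low)
    xi = low + (rng.random(g.shape) < (level - low))
    return np.sign(g) * norm * xi / s

g = np.array([0.3, -0.8, 0.1, 0.5])
# Averaging many independent quantizations recovers g (unbiasedness),
# while each individual message needs only the norm plus small integers.
avg = np.mean([qsgd_quantize(g, s=4) for _ in range(20000)], axis=0)
print(np.allclose(avg, g, atol=0.02))  # True
```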

f-GAN: Training Generative Neural Samplers using Variational Divergence Minimization

1 code implementation NeurIPS 2016 Sebastian Nowozin, Botond Cseke, Ryota Tomioka

Generative neural samplers are probabilistic models that implement sampling using feedforward neural networks: they take a random input vector and produce a sample from a probability distribution defined by the network weights.
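A minimal illustration of a "generative neural sampler" as described: a fixed feedforward network maps a random input vector to a sample. The weights below are random placeholders, not a trained f-GAN generator.

```python
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(size=(16, 2)) * 0.5   # hidden-layer weights
W2 = rng.normal(size=(2, 16)) * 0.5   # output-layer weights

def sample(n):
    z = rng.normal(size=(2, n))       # random input vectors
    h = np.tanh(W1 @ z)               # nonlinearity
    return W2 @ h                     # samples in R^2

# The distribution of xs is defined implicitly by the network weights;
# f-GAN training would adjust W1, W2 to match a target distribution.
xs = sample(1000)
print(xs.shape)  # (2, 1000)
```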

Condition for Perfect Dimensionality Recovery by Variational Bayesian PCA

1 code implementation15 Dec 2015 Shinichi Nakajima, Ryota Tomioka, Masashi Sugiyama, S. Derin Babacan

In this paper, we clarify the behavior of VB learning in probabilistic PCA (or fully-observed matrix factorization).

Data-Dependent Path Normalization in Neural Networks

no code implementations20 Nov 2015 Behnam Neyshabur, Ryota Tomioka, Ruslan Salakhutdinov, Nathan Srebro

We propose a unified framework for neural net normalization, regularization and optimization, which includes Path-SGD and Batch-Normalization and interpolates between them across two different dimensions.
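For concreteness, here is the path norm that Path-SGD regularizes, in its standard form for a single-hidden-layer ReLU network (the paper's framework generalizes this): the squared path norm sums, over every input→hidden→output path, the product of the squared weights along that path. The network sizes below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 3))   # input (3) -> hidden (4)
W2 = rng.normal(size=(2, 4))   # hidden (4) -> output (2)

def path_norm(W1, W2):
    # Sum over paths of the product of squared weights, vectorized:
    # squaring elementwise and multiplying the matrices enumerates
    # every (input, hidden, output) path exactly once.
    return np.sqrt(np.sum((W2 ** 2) @ (W1 ** 2)))

# Brute-force check against the path-by-path definition.
total = 0.0
for i in range(3):
    for j in range(4):
        for k in range(2):
            total += (W1[j, i] ** 2) * (W2[k, j] ** 2)

print(np.isclose(path_norm(W1, W2), np.sqrt(total)))  # True
```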

Jointly Learning Multiple Measures of Similarities from Triplet Comparisons

no code implementations5 Mar 2015 Liwen Zhang, Subhransu Maji, Ryota Tomioka

Similarity between objects is multi-faceted and it can be easier for human annotators to measure it when the focus is on a specific aspect.

Metric Learning

Norm-Based Capacity Control in Neural Networks

no code implementations27 Feb 2015 Behnam Neyshabur, Ryota Tomioka, Nathan Srebro

We investigate the capacity, convexity and characterization of a general family of norm-constrained feed-forward networks.

In Search of the Real Inductive Bias: On the Role of Implicit Regularization in Deep Learning

no code implementations20 Dec 2014 Behnam Neyshabur, Ryota Tomioka, Nathan Srebro

We present experiments demonstrating that some other form of capacity control, different from network size, plays a central role in learning multilayer feed-forward networks.

Inductive Bias

Multitask learning meets tensor factorization: task imputation via convex optimization

no code implementations NeurIPS 2014 Kishan Wimalawarne, Masashi Sugiyama, Ryota Tomioka

We study a multitask learning problem in which each task is parametrized by a weight vector and indexed by a pair of indices, which can be e.g., (consumer, time).


Spectral norm of random tensors

no code implementations7 Jul 2014 Ryota Tomioka, Taiji Suzuki

We show that the spectral norm of a random $n_1\times n_2\times \cdots \times n_K$ tensor (or higher-order array) scales as $O\left(\sqrt{(\sum_{k=1}^{K}n_k)\log(K)}\right)$ under some sub-Gaussian assumption on the entries.
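A quick numerical illustration of the bound in the matrix case (K = 2), where the spectral norm is just the largest singular value: for an n₁ × n₂ standard Gaussian matrix the norm concentrates around √n₁ + √n₂, which sits within the O(√((n₁ + n₂) log 2)) scaling above up to a constant. The dimensions are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
n1, n2 = 200, 300
A = rng.normal(size=(n1, n2))

spec = np.linalg.norm(A, 2)                 # largest singular value
bound_scale = np.sqrt((n1 + n2) * np.log(2))

# The ratio is an O(1) constant, consistent with the stated scaling.
print(spec / bound_scale)
```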

Convex Tensor Decomposition via Structured Schatten Norm Regularization

no code implementations NeurIPS 2013 Ryota Tomioka, Taiji Suzuki

We discuss structured Schatten norms for tensor decomposition that includes two recently proposed norms ("overlapped" and "latent") for convex-optimization-based tensor decomposition, and connect tensor decomposition with wider literature on structured sparsity.

Tensor Decomposition
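The "overlapped" structured Schatten norm mentioned above can be sketched as the sum, over each mode k, of the nuclear norm of the tensor's mode-k unfolding (one common convention; the paper also analyzes the "latent" variant). The tensor below is a random placeholder.

```python
import numpy as np

def unfold(T, mode):
    """Mode-k unfolding: mode k becomes the rows, the rest the columns."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def overlapped_schatten(T):
    return sum(np.linalg.norm(unfold(T, k), 'nuc') for k in range(T.ndim))

rng = np.random.default_rng(0)
T = rng.normal(size=(3, 4, 5))

# It is a genuine norm: positive, and absolutely homogeneous, so
# scaling the tensor by 2 doubles the value.
print(np.isclose(overlapped_schatten(2 * T), 2 * overlapped_schatten(T)))  # True
```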

Perfect Dimensionality Recovery by Variational Bayesian PCA

no code implementations NeurIPS 2012 Shinichi Nakajima, Ryota Tomioka, Masashi Sugiyama, S. D. Babacan

The variational Bayesian (VB) approach is one of the best tractable approximations to Bayesian estimation, and it has been demonstrated to perform well in many applications.

The Algebraic Combinatorial Approach for Low-Rank Matrix Completion

no code implementations17 Nov 2012 Franz J. Király, Louis Theran, Ryota Tomioka

We present a novel algebraic combinatorial view on low-rank matrix completion based on studying relations between a few entries with tools from algebraic geometry and matroid theory.

Low-Rank Matrix Completion

Global Analytic Solution for Variational Bayesian Matrix Factorization

no code implementations NeurIPS 2010 Shinichi Nakajima, Masashi Sugiyama, Ryota Tomioka

Bayesian methods of matrix factorization (MF) have been actively explored recently as promising alternatives to classical singular value decomposition.
