Search Results for author: Yusuke Iwasawa

Found 21 papers, 2 papers with code

Learning shared manifold representation of images and attributes for generalized zero-shot learning

no code implementations ICLR 2019 Masahiro Suzuki, Yusuke Iwasawa, Yutaka Matsuo

To solve this, we propose learning a mapping that embeds both images and attributes into a shared representation space that generalizes even to unseen classes by interpolating from the information of seen classes, an approach we refer to as shared manifold learning.
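As a minimal sketch of this idea (the linear maps and data below are random placeholders, not the authors' model): embed images and per-class attribute vectors into a common space, then classify, including unseen classes, by nearest attribute embedding.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: 5 image features (dim 10), 4 classes described by attribute vectors (dim 6).
X = rng.normal(size=(5, 10))          # image features
A = rng.normal(size=(4, 6))           # per-class attribute vectors (seen + unseen)

# Placeholder "learned" projections into a shared 8-dimensional space.
W_img = rng.normal(size=(10, 8))      # image -> shared space
W_att = rng.normal(size=(6, 8))       # attributes -> shared space

def l2norm(M):
    return M / np.linalg.norm(M, axis=1, keepdims=True)

Z_img = l2norm(X @ W_img)             # embedded images
Z_att = l2norm(A @ W_att)             # embedded class prototypes

# Zero-shot prediction: nearest class prototype by cosine similarity.
pred = (Z_img @ Z_att.T).argmax(axis=1)
print(pred)
```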

Generalized Zero-Shot Learning

Test-Time Classifier Adjustment Module for Model-Agnostic Domain Generalization

no code implementations NeurIPS 2021 Yusuke Iwasawa, Yutaka Matsuo

This paper presents a new algorithm for domain generalization (DG), the test-time template adjuster (T3A), which aims to robustify a model against unknown distribution shift.
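A rough numpy sketch of the T3A procedure as the abstract describes it (the seeding from classifier weights and the entropy filter are paraphrased from my reading, not the released code): keep a support set per class, pseudo-label each unlabeled test feature, grow the matching support set, and predict from support centroids.

```python
import numpy as np

def softmax(v):
    e = np.exp(v - v.max())
    return e / e.sum()

def t3a_step(z, supports, num_classes):
    """One test-time update: pseudo-label feature z, grow the support set,
    and predict from class centroids. A sketch, not the official code."""
    z = z / np.linalg.norm(z)
    centroids = np.stack([np.mean(supports[c], axis=0) for c in range(num_classes)])
    logits = centroids @ z
    c_hat = int(softmax(logits).argmax())
    supports[c_hat].append(z)   # T3A additionally keeps only low-entropy supports
    return c_hat

# Seed supports from the (normalized) rows of a trained linear classifier.
rng = np.random.default_rng(0)
W = rng.normal(size=(3, 16))    # 3 classes, 16-dim features
supports = {c: [W[c] / np.linalg.norm(W[c])] for c in range(3)}
for _ in range(10):
    t3a_step(rng.normal(size=16), supports, num_classes=3)
```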

Domain Generalization · Stochastic Optimization

Amortized Prompt: Lightweight Fine-Tuning for CLIP in Domain Generalization

1 code implementation 25 Nov 2021 Xin Zhang, Yusuke Iwasawa, Yutaka Matsuo, Shixiang Shane Gu

For the latter, we propose AP (Amortized Prompt), as a novel approach for domain inference in the form of prompt generation.
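A toy sketch of the prompt-generation idea under stand-in encoders (nothing here is the CLIP API; every map below is a hypothetical placeholder): a generator conditions on the image feature to produce context vectors, which are prepended to each class's token embeddings before a text-encoding step, and classification is by cosine similarity.

```python
import numpy as np

rng = np.random.default_rng(0)
d_tok, d_emb, n_ctx, n_cls = 8, 16, 4, 3

class_tokens = rng.normal(size=(n_cls, 2, d_tok))   # placeholder class-name token embeddings
W_gen = rng.normal(size=(d_emb, n_ctx * d_tok))     # prompt generator (placeholder linear map)
W_txt = rng.normal(size=(d_tok, d_emb))             # stand-in "text encoder": mean-pool, project

def classify(img_feat):
    ctx = (img_feat @ W_gen).reshape(n_ctx, d_tok)  # image-conditioned prompt context
    scores = []
    for c in range(n_cls):
        prompt = np.concatenate([ctx, class_tokens[c]], axis=0)  # [context; class tokens]
        t = prompt.mean(axis=0) @ W_txt                          # toy text embedding
        scores.append(img_feat @ t / (np.linalg.norm(img_feat) * np.linalg.norm(t)))
    return int(np.argmax(scores))

print(classify(rng.normal(size=d_emb)))
```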

Domain Generalization · Fine-tuning · +3

Estimating Disentangled Belief about Hidden State and Hidden Task for Meta-RL

no code implementations 14 May 2021 Kei Akuzawa, Yusuke Iwasawa, Yutaka Matsuo

Therefore, the meta-RL agent faces the challenge of specifying both the hidden task and the hidden states from a small amount of experience.

Meta Reinforcement Learning

Group Equivariant Conditional Neural Processes

no code implementations ICLR 2021 Makoto Kawano, Wataru Kumagai, Akiyoshi Sannai, Yusuke Iwasawa, Yutaka Matsuo

We present the group equivariant conditional neural process (EquivCNP), a meta-learning method that is permutation-invariant over the data set, as in conventional conditional neural processes (CNPs), and is additionally equivariant to transformations in data space.
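In symbols (my notation, not necessarily the paper's), with a context set {(x_i, y_i)} and predictive map Φ, the two properties named above read:

```latex
% Permutation invariance over the context set (\pi any permutation):
\Phi\bigl(\{(x_{\pi(i)}, y_{\pi(i)})\}_{i}\bigr) = \Phi\bigl(\{(x_i, y_i)\}_{i}\bigr)
% Equivariance to a group action g on the input space:
\Phi\bigl(\{(g \cdot x_i,\, y_i)\}_{i}\bigr)(g \cdot x) = \Phi\bigl(\{(x_i, y_i)\}_{i}\bigr)(x)
```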

Meta-Learning · Translation

Wheelchair Behavior Recognition for Visualizing Sidewalk Accessibility by Deep Neural Networks

no code implementations 11 Jan 2021 Takumi Watanabe, Hiroki Takahashi, Goh Sato, Yusuke Iwasawa, Yutaka Matsuo, Ikuko Eguchi Yairi

This paper introduces our methodology for estimating sidewalk accessibility from wheelchair behavior, measured via a triaxial accelerometer in a smartphone installed under a wheelchair seat.
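A generic preprocessing sketch for this kind of pipeline (the window length and hop are my assumptions, not the paper's settings): slice the triaxial signal into fixed-size windows for a deep classifier.

```python
import numpy as np

def sliding_windows(acc, win=128, hop=64):
    """Split a (T, 3) triaxial accelerometer stream into (N, win, 3) windows."""
    starts = range(0, acc.shape[0] - win + 1, hop)
    return np.stack([acc[s:s + win] for s in starts])

acc = np.random.default_rng(0).normal(size=(1000, 3))   # placeholder recording
X = sliding_windows(acc)
print(X.shape)   # each window would be fed to a deep neural network
```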

Information Theoretic Regularization for Learning Global Features by Sequential VAE

no code implementations 1 Jan 2021 Kei Akuzawa, Yusuke Iwasawa, Yutaka Matsuo

However, by analyzing sequential VAEs from an information-theoretic perspective, we argue that simply maximizing the MI encourages the latent variables to carry redundant information and prevents the disentanglement of global and local features.

Learning Deep Latent Variable Models via Amortized Langevin Dynamics

no code implementations 1 Jan 2021 Shohei Taniguchi, Yusuke Iwasawa, Yutaka Matsuo

Developing a latent variable model and an inference model with neural networks yields Langevin autoencoders (LAEs), a novel Langevin-based framework for deep generative models.
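The Langevin update itself is standard; here is a runnable toy on a linear-Gaussian model, where the gradient of log p(x, z) is available in closed form (the amortization that defines LAEs, i.e., replacing this per-datapoint loop with a learned sampler, is omitted here):

```python
import numpy as np

rng = np.random.default_rng(0)
d_z, d_x, sigma = 2, 5, 0.5
W = rng.normal(size=(d_x, d_z))
x = rng.normal(size=d_x)

def grad_log_joint(z):
    # For the prior z ~ N(0, I) and likelihood x|z ~ N(Wz, sigma^2 I):
    return -z + W.T @ (x - W @ z) / sigma**2

z, eps = np.zeros(d_z), 1e-3
for _ in range(2000):   # unadjusted Langevin dynamics on the posterior over z
    z = z + 0.5 * eps * grad_log_joint(z) + np.sqrt(eps) * rng.normal(size=d_z)
print(z)
```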

Latent Variable Models · Unsupervised Anomaly Detection

Graph-based Knowledge Tracing: Modeling Student Proficiency Using Graph Neural Network

1 code implementation ACM 2019 Hiromi Nakagawa, Yusuke Iwasawa, Yutaka Matsuo

Inspired by the recent successes of graph neural networks (GNNs), we herein propose a GNN-based knowledge tracing method, i.e., graph-based knowledge tracing.
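A minimal message-passing sketch of the idea (the paper's actual aggregation, update functions, and graph construction differ): answering a question on skill k updates that skill's state and propagates the update to neighboring skills on the skill graph.

```python
import numpy as np

rng = np.random.default_rng(0)
n_skills, d = 4, 8
H = rng.normal(size=(n_skills, d))          # per-skill knowledge states
A = np.array([[0, 1, 0, 0],                 # skill-graph adjacency (placeholder)
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
W_self = rng.normal(size=(d, d))
W_nbr = rng.normal(size=(d, d))

def gkt_step(H, k, x):
    """Answering skill k with interaction embedding x updates k and its neighbors."""
    H = H.copy()
    H[k] = np.tanh(H[k] @ W_self + x)                 # self update
    for j in np.flatnonzero(A[k]):                    # neighbor (message) updates
        H[j] = np.tanh(H[j] @ W_self + H[k] @ W_nbr)
    return H

H = gkt_step(H, k=1, x=rng.normal(size=d))
```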

Knowledge Tracing · Time Series

Stabilizing Adversarial Invariance Induction by Discriminator Matching

no code implementations 25 Sep 2019 Yusuke Iwasawa, Kei Akuzawa, Yutaka Matsuo

Adversarial invariance induction (AII) is a powerful method for this purpose: it maximizes a proxy of the conditional entropy between representations and attributes via adversarial training between an attribute discriminator and a feature extractor.
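The adversarial objective can be sketched as two coupled losses, in a generic rendering of the AII/AFL setup rather than the authors' exact formulation: the discriminator minimizes attribute cross-entropy on the representation, while the feature extractor adds the negated term to its task loss.

```python
import numpy as np

def softmax(logits):
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def cross_entropy(p, labels):
    return -np.log(p[np.arange(len(labels)), labels]).mean()

rng = np.random.default_rng(0)
Z = rng.normal(size=(32, 16))        # representations from the feature extractor
a = rng.integers(0, 3, size=32)      # nuisance-attribute labels
W_disc = rng.normal(size=(16, 3))    # attribute discriminator (placeholder linear head)

p = softmax(Z @ W_disc)
loss_disc = cross_entropy(p, a)      # discriminator: predict the attribute from Z
lam = 1.0
loss_enc_adv = -lam * loss_disc      # encoder: deceive it (added to the task loss)
```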

Domain Generalization · Fairness · +1

Adversarial Invariant Feature Learning with Accuracy Constraint for Domain Generalization

no code implementations 29 Apr 2019 Kei Akuzawa, Yusuke Iwasawa, Yutaka Matsuo

However, previous domain-invariance-based methods overlooked the underlying dependency of classes on domains, which is responsible for the trade-off between classification accuracy and domain invariance.
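That dependency can be made precise with a standard data-processing argument (my formulation, consistent with the trade-off described above):

```latex
% If the label Y is recoverable from the representation f(X), the
% data-processing inequality gives
I\bigl(f(X);\,D\bigr) \;\ge\; I(Y;\,D),
% so when classes and domains are dependent, i.e. I(Y;D) > 0, perfect domain
% invariance I(f(X);D) = 0 cannot coexist with perfect classification accuracy.
```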

Domain Generalization

Invariant Feature Learning by Attribute Perception Matching

no code implementations ICLR Workshop LLD 2019 Yusuke Iwasawa, Kei Akuzawa, Yutaka Matsuo

Adversarial feature learning (AFL) is a powerful framework for learning representations invariant to a nuisance attribute; it uses an adversarial game between a feature extractor and a categorical attribute classifier.

Domain Generalization via Invariant Representation under Domain-Class Dependency

no code implementations 27 Sep 2018 Kei Akuzawa, Yusuke Iwasawa, Yutaka Matsuo

Learning domain-invariant representations is a dominant approach to domain generalization, where we need to build a classifier that is robust to domain shifts induced by changes in users, acoustic conditions, lighting conditions, etc.

Domain Generalization

Expressive Speech Synthesis via Modeling Expressions with Variational Autoencoder

no code implementations 6 Apr 2018 Kei Akuzawa, Yusuke Iwasawa, Yutaka Matsuo

Recent advances in neural autoregressive models have improved the performance of speech synthesis (SS).

Expressive Speech Synthesis

Neuron as an Agent

no code implementations ICLR 2018 Shohei Ohsawa, Kei Akuzawa, Tatsuya Matsushima, Gustavo Bezerra, Yusuke Iwasawa, Hiroshi Kajino, Seiya Takenaka, Yutaka Matsuo

Existing multi-agent reinforcement learning (MARL) communication methods have relied on a trusted third party (TTP) to distribute reward to agents, leaving them inapplicable in peer-to-peer environments.

Multi-agent Reinforcement Learning · OpenAI Gym

Censoring Representations with Multiple-Adversaries over Random Subspaces

no code implementations ICLR 2018 Yusuke Iwasawa, Kotaro Nakayama, Yutaka Matsuo

AFL learns such representations by training the network to deceive an adversary that predicts the sensitive information from the representation; therefore, the success of AFL heavily relies on the choice of the adversary.
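A hedged sketch of the multiple-adversaries-over-random-subspaces idea: each adversary sees only a random subset of feature dimensions, and the encoder plays against their ensembled prediction (the subspace size and mean-ensembling below are my assumptions).

```python
import numpy as np

rng = np.random.default_rng(0)
d, k, n_adv, n_cls = 16, 8, 4, 3
Z = rng.normal(size=(32, d))                              # representations
subspaces = [rng.choice(d, size=k, replace=False) for _ in range(n_adv)]
W = [rng.normal(size=(k, n_cls)) for _ in range(n_adv)]   # one discriminator per subspace

def softmax(logits):
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

# Ensemble prediction of the sensitive attribute: average over subspace adversaries.
p = np.mean([softmax(Z[:, s] @ w) for s, w in zip(subspaces, W)], axis=0)
# The encoder's adversarial loss would be built from this ensembled prediction.
```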
