Search Results for author: Yutaka Matsuo

Found 49 papers, 17 papers with code

Learning shared manifold representation of images and attributes for generalized zero-shot learning

no code implementations ICLR 2019 Masahiro Suzuki, Yusuke Iwasawa, Yutaka Matsuo

To solve this, we propose learning a mapping that embeds both images and attributes into a shared representation space that generalizes even to unseen classes by interpolating from the information of seen classes, which we refer to as shared manifold learning.

Generalized Zero-Shot Learning
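To illustrate the idea of a shared embedding space for images and attributes, here is a minimal numpy sketch. This is not the paper's model: the linear maps, dimensions, and class-attribute vectors are all hypothetical stand-ins for learned components.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions (hypothetical): image features, attributes, shared space.
d_img, d_attr, d_shared = 8, 4, 3

# Random linear maps standing in for learned encoders.
W_img = rng.normal(size=(d_shared, d_img))
W_attr = rng.normal(size=(d_shared, d_attr))

def embed(x, W):
    """Project a feature vector into the shared space and L2-normalize."""
    z = W @ x
    return z / np.linalg.norm(z)

# Attribute vectors for two unseen classes (made-up values).
attrs = {"zebra": rng.normal(size=d_attr), "horse": rng.normal(size=d_attr)}
class_embs = {c: embed(a, W_attr) for c, a in attrs.items()}

def classify(img_feat):
    """Assign the class whose attribute embedding is closest (cosine)."""
    z = embed(img_feat, W_img)
    return max(class_embs, key=lambda c: float(z @ class_embs[c]))

pred = classify(rng.normal(size=d_img))
print(pred)
```

Because classes are represented only by their attribute embeddings, the same nearest-neighbor rule applies to classes never seen during training, which is the core of the zero-shot setting.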

Pixyz: a library for developing deep generative models

no code implementations 28 Jul 2021 Masahiro Suzuki, Takaaki Kaneko, Yutaka Matsuo

With the recent rapid progress in the study of deep generative models (DGMs), there is a need for a framework that can implement them in a simple and generic way.

Estimating Disentangled Belief about Hidden State and Hidden Task for Meta-RL

no code implementations 14 May 2021 Kei Akuzawa, Yusuke Iwasawa, Yutaka Matsuo

Therefore, the meta-RL agent faces the challenge of specifying both the hidden task and states based on a small amount of experience.

Meta Reinforcement Learning

Co-Adaptation of Algorithmic and Implementational Innovations in Inference-based Deep Reinforcement Learning

1 code implementation 31 Mar 2021 Hiroki Furuta, Tadashi Kozuno, Tatsuya Matsushima, Yutaka Matsuo, Shixiang Shane Gu

These results show which implementation details are co-adapted and co-evolved with algorithms, and which are transferable across algorithms: as examples, we identified that tanh Gaussian policy and network sizes are highly adapted to algorithmic types, while layer normalization and ELU are critical for MPO's performances but also transfer to noticeable gains in SAC.

Group Equivariant Conditional Neural Processes

no code implementations ICLR 2021 Makoto Kawano, Wataru Kumagai, Akiyoshi Sannai, Yusuke Iwasawa, Yutaka Matsuo

We present the group equivariant conditional neural process (EquivCNP), a meta-learning method that, like conventional conditional neural processes (CNPs), is permutation-invariant over the data set, and that additionally possesses transformation equivariance in data space.

Meta-Learning
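The permutation invariance mentioned above is easy to see in a minimal numpy sketch of a pooled set encoder, which is the basic mechanism behind CNP-style models (this is an illustrative toy, not the EquivCNP architecture; the `tanh` featurizer stands in for a per-point network):

```python
import numpy as np

def set_encode(points):
    """A minimal permutation-invariant set encoder: apply a per-point
    map, then mean-pool so the order of points cannot matter."""
    feats = np.tanh(points)      # stand-in for a per-point network
    return feats.mean(axis=0)    # pooling discards ordering

rng = np.random.default_rng(0)
context = rng.normal(size=(5, 2))   # 5 context points in 2-D

r1 = set_encode(context)
r2 = set_encode(context[::-1])      # same points, reversed order
print(np.allclose(r1, r2))          # the representation ignores order
```

Equivariant variants additionally constrain the encoder so that transforming the input set (e.g. translating it) transforms the output representation in a predictable way, rather than leaving it unchanged.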

$q$-Deformation of Corner Vertex Operator Algebras by Miura Transformation

no code implementations 11 Jan 2021 Koichi Harada, Yutaka Matsuo, Go Noshita, Akimi Watanabe

It gives the free field representation for $q$-deformed $Y_{L, M, N}$, which is obtained as a reduction of the quantum toroidal algebra.

High Energy Physics - Theory Mathematical Physics Quantum Algebra

Wheelchair Behavior Recognition for Visualizing Sidewalk Accessibility by Deep Neural Networks

no code implementations 11 Jan 2021 Takumi Watanabe, Hiroki Takahashi, Goh Sato, Yusuke Iwasawa, Yutaka Matsuo, Ikuko Eguchi Yairi

This paper introduces our methodology to estimate sidewalk accessibilities from wheelchair behavior via a triaxial accelerometer in a smartphone installed under a wheelchair seat.

Subformer: A Parameter Reduced Transformer

no code implementations 1 Jan 2021 Machel Reid, Edison Marrese-Taylor, Yutaka Matsuo

We also perform on par with Transformer-big with 40% fewer parameters, and outperform the model by 0.7 BLEU with 12M fewer parameters.

Abstractive Text Summarization Language Modelling +1

Learning Deep Latent Variable Models via Amortized Langevin Dynamics

no code implementations 1 Jan 2021 Shohei Taniguchi, Yusuke Iwasawa, Yutaka Matsuo

Developing a latent variable model and an inference model with neural networks yields Langevin autoencoders (LAEs), a novel Langevin-based framework for deep generative models.

Latent Variable Models Unsupervised Anomaly Detection
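As background for the Langevin-based framework above, here is a minimal sketch of unadjusted Langevin dynamics sampling from a standard normal target. This is a textbook illustration of the underlying sampler, not the LAE algorithm itself; the step size and chain length are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def grad_log_p(x):
    """Score of a standard normal target: log p(x) = -x**2 / 2 + const."""
    return -x

# Unadjusted Langevin update: x <- x + (eps/2)*score(x) + sqrt(eps)*noise
eps, n_steps, burn_in = 0.1, 5000, 1000
x = 0.0
samples = []
for t in range(n_steps):
    x = x + 0.5 * eps * grad_log_p(x) + np.sqrt(eps) * rng.normal()
    if t >= burn_in:
        samples.append(x)

samples = np.array(samples)
# After burn-in the chain should roughly match N(0, 1).
print(round(float(samples.mean()), 2), round(float(samples.var()), 2))
```

In the amortized setting, a learned network replaces repeated per-datapoint chains like this one, producing approximate posterior samples in a single forward pass.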

Information Theoretic Regularization for Learning Global Features by Sequential VAE

no code implementations 1 Jan 2021 Kei Akuzawa, Yusuke Iwasawa, Yutaka Matsuo

However, by analyzing sequential VAEs from an information-theoretic perspective, we claim that simply maximizing the MI encourages the latent variables to carry redundant information and prevents the disentanglement of global and local features.

Iterative Image Inpainting with Structural Similarity Mask for Anomaly Detection

no code implementations 1 Jan 2021 Hitoshi Nakanishi, Masahiro Suzuki, Yutaka Matsuo

Moreover, there is an objective mismatch: models are trained to minimize the total reconstruction error, while we expect small deviations on normal pixels and large deviations on anomalous pixels.

Image Inpainting Unsupervised Anomaly Detection

Epipolar-Guided Deep Object Matching for Scene Change Detection

no code implementations 30 Jul 2020 Kento Doi, Ryuhei Hamaguchi, Shun Iwase, Rio Yokota, Yutaka Matsuo, Ken Sakurada

To cope with this difficulty, we introduce a deep graph matching network that establishes object correspondence between an image pair.

Graph Matching Scene Change Detection

Deployment-Efficient Reinforcement Learning via Model-Based Offline Optimization

1 code implementation ICLR 2021 Tatsuya Matsushima, Hiroki Furuta, Yutaka Matsuo, Ofir Nachum, Shixiang Gu

We propose a novel model-based algorithm, Behavior-Regularized Model-ENsemble (BREMEN), that can effectively optimize a policy offline using 10-20 times less data than prior works.

Offline RL

A Multi-modal Approach to Fine-grained Opinion Mining on Video Reviews

no code implementations WS 2020 Edison Marrese-Taylor, Cristian Rodriguez-Opazo, Jorge A. Balazs, Stephen Gould, Yutaka Matsuo

Despite the recent advances in opinion mining for written reviews, few works have tackled the problem on other sources of reviews.

Opinion Mining

Variational Inference for Learning Representations of Natural Language Edits

1 code implementation 20 Apr 2020 Edison Marrese-Taylor, Machel Reid, Yutaka Matsuo

Document editing has become a pervasive component of the production of information, with version control systems enabling edits to be efficiently stored and applied.

Variational Inference

Combining Pretrained High-Resource Embeddings and Subword Representations for Low-Resource Languages

no code implementations 9 Mar 2020 Machel Reid, Edison Marrese-Taylor, Yutaka Matsuo

The contrast between the need for large amounts of data for current Natural Language Processing (NLP) techniques, and the lack thereof, is accentuated in the case of African languages, most of which are considered low-resource.

Word Embeddings

Out-of-Distribution Detection Using Layerwise Uncertainty in Deep Neural Networks

no code implementations ICLR 2020 Hirono Okamoto, Masahiro Suzuki, Yutaka Matsuo

However, on difficult datasets or models with low classification ability, these methods incorrectly regard in-distribution samples close to the decision boundary as OOD samples.

Classification General Classification +1

Graph-based Knowledge Tracing: Modeling Student Proficiency Using Graph Neural Network

1 code implementation ACM 2019 Hiromi Nakagawa, Yusuke Iwasawa, Yutaka Matsuo

Inspired by the recent successes of graph neural networks (GNNs), we herein propose a GNN-based knowledge tracing method, i.e., graph-based knowledge tracing.

Knowledge Tracing Time Series
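For intuition about the graph propagation underlying such GNN-based methods, here is a minimal sketch of one message-passing step on a toy skill graph. The graph, features, and normalization are made-up illustrations, not the paper's architecture:

```python
import numpy as np

# Toy concept graph over 3 skills; edges are hypothetical
# "related skill" links, stored as an adjacency matrix.
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)

A_hat = A + np.eye(3)                     # add self-loops
D_inv = np.diag(1.0 / A_hat.sum(axis=1))  # row-normalize by degree

# Initial per-skill state: the student has mastered skill 0 only.
h = np.array([[1.0], [0.0], [0.0]])

# One propagation step: each skill averages its neighbors' states,
# so evidence about skill 0 spreads to related skills.
h_next = D_inv @ A_hat @ h
print(h_next.ravel())
```

A learned GNN adds trainable weight matrices and nonlinearities around this propagation, but the spread of mastery information along graph edges is the key mechanism.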

An Edit-centric Approach for Wikipedia Article Quality Assessment

no code implementations WS 2019 Edison Marrese-Taylor, Pablo Loyola, Yutaka Matsuo

We propose an edit-centric approach to assess Wikipedia article quality as a complementary alternative to current full document-based techniques.

Variational Domain Adaptation

no code implementations ICLR 2019 Hirono Okamoto, Shohei Ohsawa, Itto Higuchi, Haruka Murakami, Mizuki Sango, Zhenghang Cui, Masahiro Suzuki, Hiroshi Kajino, Yutaka Matsuo

It reformulates the posterior with a natural pairing $\langle \cdot, \cdot \rangle: \mathcal{Z} \times \mathcal{Z}^* \rightarrow \mathbb{R}$, which can be extended to uncountably infinite domains such as continuous domains, as well as to interpolation.

Bayesian Inference Domain Adaptation +2

Adversarial Invariant Feature Learning with Accuracy Constraint for Domain Generalization

no code implementations 29 Apr 2019 Kei Akuzawa, Yusuke Iwasawa, Yutaka Matsuo

However, previous domain-invariance-based methods overlooked the underlying dependency of classes on domains, which is responsible for the trade-off between classification accuracy and domain invariance.

Domain Generalization

Gating Mechanisms for Combining Character and Word-level Word Representations: An Empirical Study

1 code implementation NAACL 2019 Jorge A. Balazs, Yutaka Matsuo

In this paper we study how different ways of combining character and word-level representations affect the quality of both final word and sentence representations.

Semantic Similarity Semantic Textual Similarity +1

Content Aware Source Code Change Description Generation

no code implementations WS 2018 Pablo Loyola, Edison Marrese-Taylor, Jorge Balazs, Yutaka Matsuo, Fumiko Satoh

We propose to study the generation of descriptions from source code changes by integrating the messages included in code commits and the intra-code documentation inside the source in the form of docstrings.

Machine Translation Text Generation

Deep contextualized word representations for detecting sarcasm and irony

1 code implementation WS 2018 Suzana Ilić, Edison Marrese-Taylor, Jorge A. Balazs, Yutaka Matsuo

Predicting context-dependent and non-literal utterances like sarcastic and ironic expressions still remains a challenging task in NLP, as it goes beyond linguistic patterns, encompassing common sense and shared knowledge as crucial components.

Common Sense Reasoning

IIIDYT at IEST 2018: Implicit Emotion Classification With Deep Contextualized Word Representations

1 code implementation WS 2018 Jorge A. Balazs, Edison Marrese-Taylor, Yutaka Matsuo

In this paper we describe our system designed for the WASSA 2018 Implicit Emotion Shared Task (IEST), which obtained 2$^{\text{nd}}$ place out of 26 teams with a test macro F1 score of $0.710$.

Emotion Classification General Classification

Learning to Automatically Generate Fill-In-The-Blank Quizzes

no code implementations WS 2018 Edison Marrese-Taylor, Ai Nakajima, Yutaka Matsuo, Ono Yuichi

In this paper we formalize the problem of automatic fill-in-the-blank question generation using two standard NLP machine learning schemes, proposing concrete deep learning models for each.

Question Generation
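As a toy illustration of the fill-in-the-blank task itself, here is a crude heuristic sketch that blanks out one word of a sentence. It is a stand-in for the learned blank-selection models in the paper; the length-based candidate filter is an arbitrary assumption:

```python
import random

def make_blank_quiz(sentence, seed=0):
    """Blank out one content word (here crudely: any word longer than
    3 characters) and return the quiz text plus the answer key."""
    rng = random.Random(seed)
    words = sentence.split()
    candidates = [i for i, w in enumerate(words) if len(w) > 3]
    i = rng.choice(candidates)
    answer = words[i]
    words[i] = "____"
    return " ".join(words), answer

quiz, answer = make_blank_quiz("Neural networks approximate complex functions")
print(quiz)
print(answer)
```

A learned model would instead score which token to blank (and possibly generate distractors), but the input/output contract is the same: a sentence in, a gapped question and answer key out.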

Expressive Speech Synthesis via Modeling Expressions with Variational Autoencoder

no code implementations 6 Apr 2018 Kei Akuzawa, Yusuke Iwasawa, Yutaka Matsuo

Recent advances in neural autoregressive models have improved the performance of speech synthesis (SS).

Expressive Speech Synthesis

Improving Bi-directional Generation between Different Modalities with Variational Autoencoders

no code implementations 26 Jan 2018 Masahiro Suzuki, Kotaro Nakayama, Yutaka Matsuo

However, we found that when this model attempts to generate a large dimensional modality missing at the input, the joint representation collapses and this modality cannot be generated successfully.

Censoring Representations with Multiple-Adversaries over Random Subspaces

no code implementations ICLR 2018 Yusuke Iwasawa, Kotaro Nakayama, Yutaka Matsuo

AFL learns such representations by training the network to deceive an adversary that predicts the sensitive information from the network; therefore, the success of AFL heavily relies on the choice of the adversary.

Neuron as an Agent

no code implementations ICLR 2018 Shohei Ohsawa, Kei Akuzawa, Tatsuya Matsushima, Gustavo Bezerra, Yusuke Iwasawa, Hiroshi Kajino, Seiya Takenaka, Yutaka Matsuo

Existing multi-agent reinforcement learning (MARL) communication methods have relied on a trusted third party (TTP) to distribute reward to agents, leaving them inapplicable in peer-to-peer environments.

Multi-agent Reinforcement Learning OpenAI Gym

EmoAtt at EmoInt-2017: Inner attention sentence embedding for Emotion Intensity

1 code implementation WS 2017 Edison Marrese-Taylor, Yutaka Matsuo

In this paper we describe a deep learning system that has been designed and built for the WASSA 2017 Emotion Intensity Shared Task.

Sentence Embedding

Mining fine-grained opinions on closed captions of YouTube videos with an attention-RNN

1 code implementation WS 2017 Edison Marrese-Taylor, Jorge A. Balazs, Yutaka Matsuo

These results, as well as further experiments on domain adaptation for aspect extraction, suggest that differences between speech and written text, which have been discussed extensively in the literature, also extend to the domain of product reviews, where they are relevant for fine-grained opinion mining.

Aspect Extraction Domain Adaptation +2

Okutama-Action: An Aerial View Video Dataset for Concurrent Human Action Detection

no code implementations 9 Jun 2017 Mohammadamin Barekatain, Miquel Martí, Hsueh-Fu Shih, Samuel Murray, Kotaro Nakayama, Yutaka Matsuo, Helmut Prendinger

Despite significant progress in the development of human action detection datasets and algorithms, no current dataset is representative of real-world aerial view scenarios.

Action Detection

A Neural Architecture for Generating Natural Language Descriptions from Source Code Changes

1 code implementation ACL 2017 Pablo Loyola, Edison Marrese-Taylor, Yutaka Matsuo

We propose a model to automatically describe changes introduced in the source code of a program using natural language.

Neural Machine Translation with Latent Semantic of Image and Text

no code implementations 25 Nov 2016 Joji Toyama, Masanori Misono, Masahiro Suzuki, Kotaro Nakayama, Yutaka Matsuo

Earlier studies introduced a latent variable to capture the entire meaning of a sentence and achieved improvements on attention-based Neural Machine Translation.

Machine Translation

Joint Multimodal Learning with Deep Generative Models

1 code implementation 7 Nov 2016 Masahiro Suzuki, Kotaro Nakayama, Yutaka Matsuo

As described herein, we propose a joint multimodal variational autoencoder (JMVAE), in which all modalities are independently conditioned on a joint representation.

Understanding Rating Behaviour and Predicting Ratings by Identifying Representative Users

no code implementations PACLIC 2015 Rahul Kamath, Masanao Ochi, Yutaka Matsuo

While previous approaches to obtaining product ratings require either a large number of user ratings or a few review texts, we show that it is possible to predict ratings with few user ratings and no review text.
