Search Results for author: Yutaka Matsuo

Found 95 papers, 34 papers with code

Learning shared manifold representation of images and attributes for generalized zero-shot learning

no code implementations ICLR 2019 Masahiro Suzuki, Yusuke Iwasawa, Yutaka Matsuo

To solve this, we propose learning a mapping that embeds both images and attributes into a shared representation space that generalizes even to unseen classes by interpolating from the information of seen classes, an approach we refer to as shared manifold learning.

Generalized Zero-Shot Learning

On the Impact of Data Augmentation on Downstream Performance in Natural Language Processing

no code implementations insights (ACL) 2022 Itsuki Okimura, Machel Reid, Makoto Kawano, Yutaka Matsuo

The reason for this is that within NLP, the impact of proposed data augmentation methods on performance has not been evaluated in a unified manner, and effective data augmentation methods are unclear.

BIG-bench Machine Learning Data Augmentation

Low-Resource Machine Translation Using Cross-Lingual Language Model Pretraining

no code implementations NAACL (AmericasNLP) 2021 Francis Zheng, Machel Reid, Edison Marrese-Taylor, Yutaka Matsuo

This paper describes UTokyo’s submission to the AmericasNLP 2021 Shared Task on machine translation systems for indigenous languages of the Americas.

Language Modelling Machine Translation +1

Interpreting Grokked Transformers in Complex Modular Arithmetic

1 code implementation 26 Feb 2024 Hiroki Furuta, Gouki Minegishi, Yusuke Iwasawa, Yutaka Matsuo

Grokking has been actively explored to reveal the mystery of delayed generalization.

A Policy Gradient Primal-Dual Algorithm for Constrained MDPs with Uniform PAC Guarantees

1 code implementation 31 Jan 2024 Toshinori Kitamura, Tadashi Kozuno, Masahiro Kato, Yuki Ichihara, Soichiro Nishimori, Akiyoshi Sannai, Sho Sonoda, Wataru Kumagai, Yutaka Matsuo

We study a primal-dual reinforcement learning (RL) algorithm for the online constrained Markov decision processes (CMDP) problem, wherein the agent explores an optimal policy that maximizes return while satisfying constraints.

Reinforcement Learning (RL)

Exposing Limitations of Language Model Agents in Sequential-Task Compositions on the Web

1 code implementation 30 Nov 2023 Hiroki Furuta, Yutaka Matsuo, Aleksandra Faust, Izzeddin Gur

We show that while existing prompted LMAs (gpt-3.5-turbo or gpt-4) achieve a 94.0% average success rate on base tasks, their performance degrades to a 24.9% success rate on compositional tasks.

Decision Making Language Modelling

Unnatural Error Correction: GPT-4 Can Almost Perfectly Handle Unnatural Scrambled Text

1 code implementation 30 Nov 2023 Qi Cao, Takeshi Kojima, Yutaka Matsuo, Yusuke Iwasawa

While Large Language Models (LLMs) have achieved remarkable performance in many tasks, much about their inner workings remains unclear.

Grokking Tickets: Lottery Tickets Accelerate Grokking

1 code implementation 30 Oct 2023 Gouki Minegishi, Yusuke Iwasawa, Yutaka Matsuo

We aim to analyze the mechanism of grokking from the lottery ticket hypothesis, identifying the process to find the lottery tickets (good sparse subnetworks) as the key to describing the transitional phase between memorization and generalization.

Image Classification Memorization

Target-Aware Contextual Political Bias Detection in News

no code implementations 2 Oct 2023 Iffat Maab, Edison Marrese-Taylor, Yutaka Matsuo

Sentence-level political bias detection in news is no exception, and has proven to be a challenging task that requires an understanding of bias in consideration of the context.

Bias Detection Data Augmentation +1

Suspicion-Agent: Playing Imperfect Information Games with Theory of Mind Aware GPT-4

1 code implementation 29 Sep 2023 Jiaxian Guo, Bo Yang, Paul Yoo, Bill Yuchen Lin, Yusuke Iwasawa, Yutaka Matsuo

Unlike perfect information games, where all elements are known to every player, imperfect information games emulate the real-world complexities of decision-making under uncertain or incomplete information.

Card Games Decision Making +1

GenDOM: Generalizable One-shot Deformable Object Manipulation with Parameter-Aware Policy

no code implementations 16 Sep 2023 So Kuroki, Jiaxian Guo, Tatsuya Matsushima, Takuya Okubo, Masato Kobayashi, Yuya Ikeda, Ryosuke Takanami, Paul Yoo, Yutaka Matsuo, Yusuke Iwasawa

Due to the inherent uncertainty in their deformability during motion, previous methods in deformable object manipulation, such as rope and cloth, often required hundreds of real-world demonstrations to train a manipulation policy for each object, which hinders their applications in our ever-changing world.

Deformable Object Manipulation Object

A Real-World WebAgent with Planning, Long Context Understanding, and Program Synthesis

no code implementations 24 Jul 2023 Izzeddin Gur, Hiroki Furuta, Austin Huang, Mustafa Safdari, Yutaka Matsuo, Douglas Eck, Aleksandra Faust

Pre-trained large language models (LLMs) have recently achieved better generalization and sample efficiency in autonomous web automation.

Ranked #1 on Mind2Web

Code Generation Denoising +2

GenORM: Generalizable One-shot Rope Manipulation with Parameter-Aware Policy

no code implementations 14 Jun 2023 So Kuroki, Jiaxian Guo, Tatsuya Matsushima, Takuya Okubo, Masato Kobayashi, Yuya Ikeda, Ryosuke Takanami, Paul Yoo, Yutaka Matsuo, Yusuke Iwasawa

To achieve this, we augment the policy by conditioning it on deformable rope parameters and training it with a diverse range of simulated deformable ropes so that the policy can adjust actions based on different rope parameters.

Paste, Inpaint and Harmonize via Denoising: Subject-Driven Image Editing with Pre-Trained Diffusion Model

no code implementations 13 Jun 2023 Xin Zhang, Jiaxian Guo, Paul Yoo, Yutaka Matsuo, Yusuke Iwasawa

To guarantee the visual coherence of the generated or edited image, we introduce an inpainting and harmonizing module to guide the pre-trained diffusion model to blend the inserted subject seamlessly into the scene.

Denoising Image Generation +1

DreamSparse: Escaping from Plato's Cave with 2D Frozen Diffusion Model Given Sparse Views

no code implementations 6 Jun 2023 Paul Yoo, Jiaxian Guo, Yutaka Matsuo, Shixiang Shane Gu

Leveraging the strong image priors in the pre-trained diffusion models, DreamSparse is capable of synthesizing high-quality novel views for both object and scene-level images and generalising to open-set images.

Image Generation

Multimodal Web Navigation with Instruction-Finetuned Foundation Models

no code implementations 19 May 2023 Hiroki Furuta, Kuang-Huei Lee, Ofir Nachum, Yutaka Matsuo, Aleksandra Faust, Shixiang Shane Gu, Izzeddin Gur

The progress of autonomous web navigation has been hindered by the dependence on billions of exploratory interactions via online reinforcement learning, and domain-specific model designs that make it difficult to leverage generalization from rich out-of-domain data.

Autonomous Web Navigation Instruction Following +1

Multimodal Sequential Generative Models for Semi-Supervised Language Instruction Following

no code implementations 29 Dec 2022 Kei Akuzawa, Yusuke Iwasawa, Yutaka Matsuo

This paper proposes using multimodal generative models for semi-supervised learning in the instruction following tasks.

Instruction Following

Realtime Fewshot Portrait Stylization Based On Geometric Alignment

no code implementations 28 Nov 2022 Xinrui Wang, Zhuoru Li, Xiao Zhou, Yusuke Iwasawa, Yutaka Matsuo

Previous learning-based stylization methods suffer from the geometric and semantic gaps between the portrait domain and the style domain, which obstruct the correct transfer of style information to the portrait images, leading to poor stylization quality.

Collective Intelligence for 2D Push Manipulations with Mobile Robots

1 code implementation 28 Nov 2022 So Kuroki, Tatsuya Matsushima, Jumpei Arima, Hiroki Furuta, Yutaka Matsuo, Shixiang Shane Gu, Yujin Tang

While natural systems often present collective intelligence that allows them to self-organize and adapt to changes, the equivalent is missing in most artificial systems.

Robot Manipulation

A System for Morphology-Task Generalization via Unified Representation and Behavior Distillation

1 code implementation 25 Nov 2022 Hiroki Furuta, Yusuke Iwasawa, Yutaka Matsuo, Shixiang Shane Gu

The rise of generalist large-scale models in natural language and vision has made us expect that a massive data-driven approach could achieve broader generalization in other domains such as continuous control.

Continuous Control Imitation Learning

Langevin Autoencoders for Learning Deep Latent Variable Models

1 code implementation 15 Sep 2022 Shohei Taniguchi, Yusuke Iwasawa, Wataru Kumagai, Yutaka Matsuo

Based on the ALD, we also present a new deep latent variable model named the Langevin autoencoder (LAE).

Image Generation valid +1

Recognition of All Categories of Entities by AI

no code implementations 13 Aug 2022 Hiroshi Yamakawa, Yutaka Matsuo

Human-level AI will have significant impacts on human society.

Philosophy

Deep Billboards towards Lossless Real2Sim in Virtual Reality

no code implementations 8 Aug 2022 Naruya Kondo, So Kuroki, Ryosuke Hyakuta, Yutaka Matsuo, Shixiang Shane Gu, Yoichi Ochiai

An aspirational goal for virtual reality (VR) is to bring in a rich diversity of real world objects losslessly.

Neural Rendering

World Robot Challenge 2020 -- Partner Robot: A Data-Driven Approach for Room Tidying with Mobile Manipulator

no code implementations 20 Jul 2022 Tatsuya Matsushima, Yuki Noguchi, Jumpei Arima, Toshiki Aoki, Yuki Okita, Yuya Ikeda, Koki Ishimoto, Shohei Taniguchi, Yuki Yamashita, Shoichi Seto, Shixiang Shane Gu, Yusuke Iwasawa, Yutaka Matsuo

Tidying up a household environment using a mobile manipulator poses various challenges in robotics, such as adaptation to large real-world environmental variations, and safe and robust deployment in the presence of humans. The Partner Robot Challenge in World Robot Challenge (WRC) 2020, a global competition held in September 2021, benchmarked tidying tasks in the real home environments, and importantly, tested for full system performances. For this challenge, we developed an entire household service robot system, which leverages a data-driven approach to adapt to numerous edge cases that occur during the execution, instead of classical manual pre-programmed solutions.

Motion Planning

A survey of multimodal deep generative models

no code implementations 5 Jul 2022 Masahiro Suzuki, Yutaka Matsuo

In recent years, deep generative models, i.e., generative models in which distributions are parameterized by deep neural networks, have attracted much attention, especially variational autoencoders, which are suitable for accomplishing the above challenges because they can consider heterogeneity and infer good representations of data.

Large Language Models are Zero-Shot Reasoners

2 code implementations 24 May 2022 Takeshi Kojima, Shixiang Shane Gu, Machel Reid, Yutaka Matsuo, Yusuke Iwasawa

Pretrained large language models (LLMs) are widely used in many sub-fields of natural language processing (NLP) and generally known as excellent few-shot learners with task-specific exemplars.

Arithmetic Reasoning Common Sense Reasoning +4
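The method behind this paper is a single reasoning trigger phrase followed by a second answer-extraction prompt. A minimal sketch of the two-stage prompt construction (not the authors' released code; the model call itself is omitted and could be any chat-completion API):

```python
# Zero-shot chain-of-thought prompting, sketched as two prompt builders.
# The extraction phrase below is the one the paper uses for numeric tasks.

REASONING_TRIGGER = "Let's think step by step."
ANSWER_TRIGGER = "Therefore, the answer (arabic numerals) is"

def build_reasoning_prompt(question: str) -> str:
    """Stage 1: append the trigger phrase to elicit step-by-step reasoning."""
    return f"Q: {question}\nA: {REASONING_TRIGGER}"

def build_answer_prompt(question: str, reasoning: str) -> str:
    """Stage 2: feed the generated reasoning back and ask for the final answer."""
    return f"{build_reasoning_prompt(question)} {reasoning}\n{ANSWER_TRIGGER}"
```

In use, the model is called once on the stage-1 prompt, and its output is spliced into the stage-2 prompt for a second call.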

Test-Time Classifier Adjustment Module for Model-Agnostic Domain Generalization

no code implementations NeurIPS 2021 Yusuke Iwasawa, Yutaka Matsuo

This paper presents a new algorithm for domain generalization (DG), \textit{test-time template adjuster (T3A)}, aiming to robustify a model to unknown distribution shift.

Domain Generalization Stochastic Optimization
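T3A adjusts only the final classifier at test time, without back-propagation, by replacing its weights with templates computed from confident test features. A simplified sketch under those assumptions (the entropy threshold and the random initial weights below are illustrative, not the paper's values):

```python
import numpy as np

def t3a_predict(supports, feat, num_classes, ent_thresh=0.5):
    """One simplified T3A step: classify with class templates (means of the
    support sets), then, if prediction entropy is low, add the feature to
    the support set of its pseudo-label. No gradient updates are involved."""
    z = feat / np.linalg.norm(feat)
    templates = np.stack([np.mean(supports[c], axis=0) for c in range(num_classes)])
    logits = templates @ z
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    entropy = -(probs * np.log(probs + 1e-12)).sum()
    pred = int(np.argmax(logits))
    if entropy < ent_thresh:
        supports[pred].append(z)  # the template for `pred` shifts toward test data
    return pred

# Support sets start from the source classifier's weight vectors
# (random stand-ins here, one 8-d vector per class).
rng = np.random.default_rng(0)
supports = {c: [w] for c, w in enumerate(rng.normal(size=(3, 8)))}
pred = t3a_predict(supports, rng.normal(size=8), num_classes=3)
```

Filtering by entropy keeps unreliable pseudo-labels out of the templates, which is what makes the online adjustment stable.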

VaxNeRF: Revisiting the Classic for Voxel-Accelerated Neural Radiance Field

1 code implementation 25 Nov 2021 Naruya Kondo, Yuya Ikeda, Andrea Tagliasacchi, Yutaka Matsuo, Yoichi Ochiai, Shixiang Shane Gu

We hope VaxNeRF -- a careful combination of a classic technique with a deep method (that arguably replaced it) -- can empower and accelerate new NeRF extensions and applications, with its simplicity, portability, and reliable performance gains.

3D Reconstruction Meta-Learning

Domain Prompt Learning for Efficiently Adapting CLIP to Unseen Domains

1 code implementation 25 Nov 2021 Xin Zhang, Shixiang Shane Gu, Yutaka Matsuo, Yusuke Iwasawa

We propose Domain Prompt Learning (DPL) as a novel approach for domain inference in the form of conditional prompt generation.

Domain Generalization Image Classification +2

Generalized Decision Transformer for Offline Hindsight Information Matching

1 code implementation 19 Nov 2021 Hiroki Furuta, Yutaka Matsuo, Shixiang Shane Gu

We present Generalized Decision Transformer (GDT) for solving any HIM problem, and show how different choices for the feature function and the anti-causal aggregator not only recover DT as a special case, but also lead to novel Categorical DT (CDT) and Bi-directional DT (BDT) for matching different statistics of the future.

Continuous Control Imitation Learning +1

Improving the Robustness to Variations of Objects and Instructions with a Neuro-Symbolic Approach for Interactive Instruction Following

no code implementations 13 Oct 2021 Kazutoshi Shinoda, Yuki Takezawa, Masahiro Suzuki, Yusuke Iwasawa, Yutaka Matsuo

An interactive instruction following task has been proposed as a benchmark for learning to map natural language instructions and first-person vision into sequences of actions to interact with objects in 3D environments.

Instruction Following

Scalable multimodal variational autoencoders with surrogate joint posterior

no code implementations 29 Sep 2021 Masahiro Suzuki, Yutaka Matsuo

A state-of-the-art approach to learning this aggregation of experts is to encourage all modalities to be reconstructed and cross-generated from arbitrary subsets.

Learning Global Spatial Information for Multi-View Object-Centric Models

no code implementations 29 Sep 2021 Yuya Kobayashi, Masahiro Suzuki, Yutaka Matsuo

Therefore, we introduce several crucial components which help inference and training with the proposed model.

Novel View Synthesis Object

Distributional Decision Transformer for Hindsight Information Matching

no code implementations ICLR 2022 Hiroki Furuta, Yutaka Matsuo, Shixiang Shane Gu

Inspired by the distributional and state-marginal matching literature in RL, we demonstrate that all these approaches are essentially doing hindsight information matching (HIM) -- training policies that can output the rest of the trajectory that matches a given future state information statistic.

Continuous Control Imitation Learning +2

Pixyz: a Python library for developing deep generative models

no code implementations 28 Jul 2021 Masahiro Suzuki, Takaaki Kaneko, Yutaka Matsuo

With the recent rapid progress in the study of deep generative models (DGMs), there is a need for a framework that can implement them in a simple and generic way.

Probabilistic Programming

Estimating Disentangled Belief about Hidden State and Hidden Task for Meta-RL

no code implementations 14 May 2021 Kei Akuzawa, Yusuke Iwasawa, Yutaka Matsuo

Therefore, the meta-RL agent faces the challenge of specifying both the hidden task and states based on a small amount of experience.

Inductive Bias Meta Reinforcement Learning

Co-Adaptation of Algorithmic and Implementational Innovations in Inference-based Deep Reinforcement Learning

1 code implementation NeurIPS 2021 Hiroki Furuta, Tadashi Kozuno, Tatsuya Matsushima, Yutaka Matsuo, Shixiang Shane Gu

These results show which implementation or code details are co-adapted and co-evolved with algorithms, and which are transferable across algorithms: as examples, we identified that tanh Gaussian policy and network sizes are highly adapted to algorithmic types, while layer normalization and ELU are critical for MPO's performances but also transfer to noticeable gains in SAC.

reinforcement-learning Reinforcement Learning (RL)
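As a concrete illustration of one such implementation detail, a tanh Gaussian policy squashes a Gaussian sample through tanh to bound the action and corrects the log-probability with a change-of-variables term. A minimal NumPy sketch (illustrative, not from the paper's codebase):

```python
import numpy as np

def tanh_gaussian_sample(mu, log_std, rng):
    """Sample a tanh-squashed Gaussian action and its log-probability,
    including the change-of-variables correction -log(1 - tanh(u)^2)."""
    std = np.exp(log_std)
    u = mu + std * rng.normal(size=mu.shape)   # pre-squash Gaussian sample
    a = np.tanh(u)                             # action bounded in (-1, 1)
    log_prob = -0.5 * (((u - mu) / std) ** 2 + 2 * log_std + np.log(2 * np.pi))
    log_prob -= np.log(1.0 - a ** 2 + 1e-6)    # Jacobian of the tanh squash
    return a, log_prob.sum()

rng = np.random.default_rng(0)
action, logp = tanh_gaussian_sample(np.zeros(2), np.full(2, -1.0), rng)
```

Without the Jacobian term, the policy's entropy and log-likelihood estimates are wrong near the action bounds, which is one reason this detail interacts strongly with the choice of algorithm.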

Group Equivariant Conditional Neural Processes

no code implementations ICLR 2021 Makoto Kawano, Wataru Kumagai, Akiyoshi Sannai, Yusuke Iwasawa, Yutaka Matsuo

We present the group equivariant conditional neural process (EquivCNP), a meta-learning method with permutation invariance in a data set as in conventional conditional neural processes (CNPs), and it also has transformation equivariance in data space.

Meta-Learning Translation +1

$q$-Deformation of Corner Vertex Operator Algebras by Miura Transformation

no code implementations 11 Jan 2021 Koichi Harada, Yutaka Matsuo, Go Noshita, Akimi Watanabe

It gives the free field representation for $q$-deformed $Y_{L, M, N}$, which is obtained as a reduction of the quantum toroidal algebra.

High Energy Physics - Theory Mathematical Physics Quantum Algebra

Wheelchair Behavior Recognition for Visualizing Sidewalk Accessibility by Deep Neural Networks

no code implementations 11 Jan 2021 Takumi Watanabe, Hiroki Takahashi, Goh Sato, Yusuke Iwasawa, Yutaka Matsuo, Ikuko Eguchi Yairi

This paper introduces our methodology to estimate sidewalk accessibilities from wheelchair behavior via a triaxial accelerometer in a smartphone installed under a wheelchair seat.

Information Theoretic Regularization for Learning Global Features by Sequential VAE

no code implementations 1 Jan 2021 Kei Akuzawa, Yusuke Iwasawa, Yutaka Matsuo

However, by analyzing the sequential VAEs from the information theoretic perspective, we can claim that simply maximizing the MI encourages the latent variables to have redundant information and prevents the disentanglement of global and local features.

Disentanglement

Iterative Image Inpainting with Structural Similarity Mask for Anomaly Detection

no code implementations 1 Jan 2021 Hitoshi Nakanishi, Masahiro Suzuki, Yutaka Matsuo

Moreover, there is an objective mismatch: models are trained to minimize total reconstruction error, while we expect small deviations on normal pixels and large deviations on anomalous pixels.

Image Inpainting Unsupervised Anomaly Detection

Subformer: A Parameter Reduced Transformer

no code implementations 1 Jan 2021 Machel Reid, Edison Marrese-Taylor, Yutaka Matsuo

We also perform equally well as Transformer-big with 40% fewer parameters and outperform the model by 0.7 BLEU with 12M fewer parameters.

Abstractive Text Summarization Language Modelling +2

Learning Deep Latent Variable Models via Amortized Langevin Dynamics

no code implementations 1 Jan 2021 Shohei Taniguchi, Yusuke Iwasawa, Yutaka Matsuo

Developing a latent variable model and an inference model with neural networks yields Langevin autoencoders (LAEs), a novel Langevin-based framework for deep generative models.

Unsupervised Anomaly Detection

Epipolar-Guided Deep Object Matching for Scene Change Detection

no code implementations 30 Jul 2020 Kento Doi, Ryuhei Hamaguchi, Shun Iwase, Rio Yokota, Yutaka Matsuo, Ken Sakurada

To cope with the difficulty, we introduce a deep graph matching network that establishes object correspondence between an image pair.

Change Detection Graph Matching +2

Deployment-Efficient Reinforcement Learning via Model-Based Offline Optimization

1 code implementation ICLR 2021 Tatsuya Matsushima, Hiroki Furuta, Yutaka Matsuo, Ofir Nachum, Shixiang Gu

We propose a novel model-based algorithm, Behavior-Regularized Model-ENsemble (BREMEN) that can effectively optimize a policy offline using 10-20 times fewer data than prior works.

Offline RL reinforcement-learning +1

Variational Inference for Learning Representations of Natural Language Edits

1 code implementation 20 Apr 2020 Edison Marrese-Taylor, Machel Reid, Yutaka Matsuo

Document editing has become a pervasive component of the production of information, with version control systems enabling edits to be efficiently stored and applied.

Variational Inference

Combining Pretrained High-Resource Embeddings and Subword Representations for Low-Resource Languages

no code implementations 9 Mar 2020 Machel Reid, Edison Marrese-Taylor, Yutaka Matsuo

The contrast between the need for large amounts of data for current Natural Language Processing (NLP) techniques, and the lack thereof, is accentuated in the case of African languages, most of which are considered low-resource.

Translation Word Embeddings

Out-of-Distribution Detection Using Layerwise Uncertainty in Deep Neural Networks

no code implementations ICLR 2020 Hirono Okamoto, Masahiro Suzuki, Yutaka Matsuo

However, on difficult datasets or models with low classification ability, these methods incorrectly regard in-distribution samples close to the decision boundary as OOD samples.

Classification General Classification +1

Graph-based Knowledge Tracing: Modeling Student Proficiency Using Graph Neural Network

1 code implementation ACM 2019 Hiromi Nakagawa, Yusuke Iwasawa, Yutaka Matsuo

Inspired by the recent successes of the graph neural network (GNN), we herein propose a GNN-based knowledge tracing method, i.e., graph-based knowledge tracing.

Inductive Bias Knowledge Tracing +2

Stabilizing Adversarial Invariance Induction by Discriminator Matching

no code implementations25 Sep 2019 Yusuke Iwasawa, Kei Akuzawa, Yutaka Matsuo

Adversarial invariance induction (AII) is a powerful approach for this purpose, maximizing a proxy of the conditional entropy between representations and attributes via adversarial training between an attribute discriminator and a feature extractor.

Attribute Domain Generalization +2

Relation-based Generalized Zero-shot Classification with the Domain Discriminator on the shared representation

no code implementations25 Sep 2019 Masahiro Suzuki, Yutaka Matsuo

However, this relation-based approach presents a difficulty: many of the test images are predicted as biased to the seen domain, i.e., the domain bias problem.

Attribute Generalized Zero-Shot Learning +1

An Edit-centric Approach for Wikipedia Article Quality Assessment

no code implementations WS 2019 Edison Marrese-Taylor, Pablo Loyola, Yutaka Matsuo

We propose an edit-centric approach to assess Wikipedia article quality as a complementary alternative to current full document-based techniques.

Variational Domain Adaptation

no code implementations ICLR 2019 Hirono Okamoto, Shohei Ohsawa, Itto Higuchi, Haruka Murakami, Mizuki Sango, Zhenghang Cui, Masahiro Suzuki, Hiroshi Kajino, Yutaka Matsuo

It reformulates the posterior with a natural pairing $\langle \cdot, \cdot \rangle: \mathcal{Z} \times \mathcal{Z}^* \rightarrow \mathbb{R}$, which can be expanded to uncountably infinite domains such as continuous domains as well as interpolation.

Bayesian Inference Domain Adaptation +2

Adversarial Invariant Feature Learning with Accuracy Constraint for Domain Generalization

no code implementations 29 Apr 2019 Kei Akuzawa, Yusuke Iwasawa, Yutaka Matsuo

However, previous domain-invariance-based methods overlooked the underlying dependency of classes on domains, which is responsible for the trade-off between classification accuracy and domain invariance.

Domain Generalization

Gating Mechanisms for Combining Character and Word-level Word Representations: An Empirical Study

1 code implementation NAACL 2019 Jorge A. Balazs, Yutaka Matsuo

In this paper we study how different ways of combining character and word-level representations affect the quality of both final word and sentence representations.

Semantic Similarity Semantic Textual Similarity +2

DUAL SPACE LEARNING WITH VARIATIONAL AUTOENCODERS

no code implementations ICLR Workshop DeepGenStruct 2019 Hirono Okamoto, Masahiro Suzuki, Itto Higuchi, Shohei Ohsawa, Yutaka Matsuo

However, when the dimension of multiclass labels is large, these models cannot change images corresponding to labels, because learning multiple distributions of the corresponding class is necessary to transfer an image.

Invariant Feature Learning by Attribute Perception Matching

no code implementations ICLR Workshop LLD 2019 Yusuke Iwasawa, Kei Akuzawa, Yutaka Matsuo

Adversarial feature learning (AFL) is a powerful framework for learning representations invariant to a nuisance attribute, using an adversarial game between a feature extractor and a categorical attribute classifier.

Attribute

Content Aware Source Code Change Description Generation

no code implementations WS 2018 Pablo Loyola, Edison Marrese-Taylor, Jorge Balazs, Yutaka Matsuo, Fumiko Satoh

We propose to study the generation of descriptions from source code changes by integrating the messages included on code commits and the intra-code documentation inside the source in the form of docstrings.

Machine Translation Text Generation

Domain Generalization via Invariant Representation under Domain-Class Dependency

no code implementations 27 Sep 2018 Kei Akuzawa, Yusuke Iwasawa, Yutaka Matsuo

Learning domain-invariant representation is a dominant approach for domain generalization, where we need to build a classifier that is robust toward domain shifts induced by change of users, acoustic or lighting conditions, etc.

Domain Generalization

Deep contextualized word representations for detecting sarcasm and irony

1 code implementation WS 2018 Suzana Ilić, Edison Marrese-Taylor, Jorge A. Balazs, Yutaka Matsuo

Predicting context-dependent and non-literal utterances like sarcastic and ironic expressions still remains a challenging task in NLP, as it goes beyond linguistic patterns, encompassing common sense and shared knowledge as crucial components.

Common Sense Reasoning

IIIDYT at IEST 2018: Implicit Emotion Classification With Deep Contextualized Word Representations

1 code implementation WS 2018 Jorge A. Balazs, Edison Marrese-Taylor, Yutaka Matsuo

In this paper we describe our system designed for the WASSA 2018 Implicit Emotion Shared Task (IEST), which obtained 2$^{\text{nd}}$ place out of 26 teams with a test macro F1 score of $0.710$.

Emotion Classification General Classification +1

Learning to Automatically Generate Fill-In-The-Blank Quizzes

no code implementations WS 2018 Edison Marrese-Taylor, Ai Nakajima, Yutaka Matsuo, Ono Yuichi

In this paper we formalize the problem of automatic fill-in-the-blank question generation using two standard NLP machine learning schemes, proposing concrete deep learning models for each.

BIG-bench Machine Learning Question Generation +1

Expressive Speech Synthesis via Modeling Expressions with Variational Autoencoder

no code implementations 6 Apr 2018 Kei Akuzawa, Yusuke Iwasawa, Yutaka Matsuo

Recent advances in neural autoregressive models have improved the performance of speech synthesis (SS).

Expressive Speech Synthesis

Improving Bi-directional Generation between Different Modalities with Variational Autoencoders

no code implementations 26 Jan 2018 Masahiro Suzuki, Kotaro Nakayama, Yutaka Matsuo

However, we found that when this model attempts to generate a large dimensional modality missing at the input, the joint representation collapses and this modality cannot be generated successfully.

Neuron as an Agent

no code implementations ICLR 2018 Shohei Ohsawa, Kei Akuzawa, Tatsuya Matsushima, Gustavo Bezerra, Yusuke Iwasawa, Hiroshi Kajino, Seiya Takenaka, Yutaka Matsuo

Existing multi-agent reinforcement learning (MARL) communication methods have relied on a trusted third party (TTP) to distribute reward to agents, leaving them inapplicable in peer-to-peer environments.

counterfactual Multi-agent Reinforcement Learning +3

Censoring Representations with Multiple-Adversaries over Random Subspaces

no code implementations ICLR 2018 Yusuke Iwasawa, Kotaro Nakayama, Yutaka Matsuo

AFL learns such representations by training the network to deceive an adversary that predicts the sensitive information from the network; therefore, the success of AFL heavily relies on the choice of the adversary.

EmoAtt at EmoInt-2017: Inner attention sentence embedding for Emotion Intensity

1 code implementation WS 2017 Edison Marrese-Taylor, Yutaka Matsuo

In this paper we describe a deep learning system that has been designed and built for the WASSA 2017 Emotion Intensity Shared Task.

Sentence Sentence Embedding +1

Mining fine-grained opinions on closed captions of YouTube videos with an attention-RNN

1 code implementation WS 2017 Edison Marrese-Taylor, Jorge A. Balazs, Yutaka Matsuo

These results, as well as further experiments on domain adaptation for aspect extraction, suggest that differences between speech and written text, which have been discussed extensively in the literature, also extend to the domain of product reviews, where they are relevant for fine-grained opinion mining.

Aspect Extraction Domain Adaptation +3

Okutama-Action: An Aerial View Video Dataset for Concurrent Human Action Detection

no code implementations 9 Jun 2017 Mohammadamin Barekatain, Miquel Martí, Hsueh-Fu Shih, Samuel Murray, Kotaro Nakayama, Yutaka Matsuo, Helmut Prendinger

Despite significant progress in the development of human action detection datasets and algorithms, no current dataset is representative of real-world aerial view scenarios.

Action Detection

A Neural Architecture for Generating Natural Language Descriptions from Source Code Changes

1 code implementation ACL 2017 Pablo Loyola, Edison Marrese-Taylor, Yutaka Matsuo

We propose a model to automatically describe changes introduced in the source code of a program using natural language.

Neural Machine Translation with Latent Semantic of Image and Text

no code implementations 25 Nov 2016 Joji Toyama, Masanori Misono, Masahiro Suzuki, Kotaro Nakayama, Yutaka Matsuo

Earlier studies introduced a latent variable to capture the entire meaning of a sentence and achieved improvement on attention-based Neural Machine Translation.

Machine Translation Sentence +1

Joint Multimodal Learning with Deep Generative Models

2 code implementations 7 Nov 2016 Masahiro Suzuki, Kotaro Nakayama, Yutaka Matsuo

As described herein, we propose a joint multimodal variational autoencoder (JMVAE), in which all modalities are independently conditioned on joint representation.

Understanding Rating Behaviour and Predicting Ratings by Identifying Representative Users

no code implementations PACLIC 2015 Rahul Kamath, Masanao Ochi, Yutaka Matsuo

While previous approaches to obtaining product ratings require either a large number of user ratings or a few review texts, we show that it is possible to predict ratings with few user ratings and no review text.
