Search Results for author: Kazushi Ikeda

Found 14 papers, 3 papers with code

Counterfactual Reasoning Using Predicted Latent Personality Dimensions for Optimizing Persuasion Outcome

no code implementations21 Apr 2024 Donghuo Zeng, Roberto S. Legaspi, Yuewen Sun, Xinshuai Dong, Kazushi Ikeda, Peter Spirtes, Kun Zhang

In this paper, we present a novel approach that tracks a user's latent personality dimensions (LPDs) during an ongoing persuasion conversation and generates tailored counterfactual utterances based on these LPDs to optimize the overall persuasion outcome.

counterfactual · Counterfactual Reasoning +1

Anchor-aware Deep Metric Learning for Audio-visual Retrieval

no code implementations21 Apr 2024 Donghuo Zeng, Yanan Wang, Kazushi Ikeda, Yi Yu

However, the model training fails to fully explore the space due to the scarcity of training data points, resulting in an incomplete representation of the overall positive and negative distributions.

Cross-Modal Retrieval · Metric Learning +1

Two-Stage Triplet Loss Training with Curriculum Augmentation for Audio-Visual Retrieval

no code implementations20 Oct 2023 Donghuo Zeng, Kazushi Ikeda

We propose a two-stage training paradigm that guides the model's learning process from semi-hard to hard triplets.

Cross-Modal Retrieval · Retrieval
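
A minimal sketch of the semi-hard-to-hard curriculum described above, using PyTorch and a standard triplet margin loss. The mining rule, margin value, and function names here are illustrative assumptions, not the paper's released code.

    import torch
    import torch.nn.functional as F

    def mine_negative(anchor, positive, candidates, margin=0.2, stage="semi-hard"):
        # anchor, positive: (B, D) paired embeddings; candidates: (N, D) candidate negatives
        d_pos = F.pairwise_distance(anchor, positive)              # (B,)
        d_neg = torch.cdist(anchor, candidates)                    # (B, N)
        if stage == "semi-hard":
            # keep negatives farther than the positive but still inside the margin
            ok = (d_neg > d_pos.unsqueeze(1)) & (d_neg < d_pos.unsqueeze(1) + margin)
            d_neg = d_neg.masked_fill(~ok, float("inf"))           # falls back to any negative if none qualify
        return candidates[d_neg.argmin(dim=1)]                     # hardest admissible negative per anchor

    def triplet_step(audio_emb, visual_emb, candidate_negatives, stage):
        neg = mine_negative(audio_emb, visual_emb, candidate_negatives, stage=stage)
        return F.triplet_margin_loss(audio_emb, visual_emb, neg, margin=0.2)

In a two-stage schedule of this kind, early epochs would call triplet_step(..., stage="semi-hard") and later epochs would switch to stage="hard".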

Topic-switch adapted Japanese Dialogue System based on PLATO-2

no code implementations22 Feb 2023 Donghuo Zeng, Jianming Wu, Yanan Wang, Kazunori Matsumoto, Gen Hattori, Kazushi Ikeda

Furthermore, our proposed topic-switch algorithm achieves an average score of 1.767 and outperforms PLATO-JDS by 0.267, indicating its effectiveness in improving the user experience of our system.

Dialogue Generation · Informativeness

Complete Cross-triplet Loss in Label Space for Audio-visual Cross-modal Retrieval

no code implementations7 Nov 2022 Donghuo Zeng, Yanan Wang, Jianming Wu, Kazushi Ikeda

In this paper, to reduce the interference of hard negative samples in representation learning, we propose a new AV-CMR model that optimizes semantic features by directly predicting labels and then measuring the intrinsic correlation between audio-visual data using a complete cross-triplet loss.

Cross-Modal Retrieval · Representation Learning +1
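
A rough sketch of the two ingredients mentioned above: projecting each modality into a shared space whose features directly predict labels, and penalizing triplets across modalities in both anchor directions. Module names and the equal loss weighting are assumptions for illustration, not the paper's implementation.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class SharedSpace(nn.Module):
        def __init__(self, dim_a, dim_v, dim_s, n_labels):
            super().__init__()
            self.proj_a = nn.Linear(dim_a, dim_s)          # audio -> shared space
            self.proj_v = nn.Linear(dim_v, dim_s)          # visual -> shared space
            self.classifier = nn.Linear(dim_s, n_labels)   # direct label prediction

        def forward(self, audio, visual):
            return self.proj_a(audio), self.proj_v(visual)

    def cross_triplet(a, v, a_neg, v_neg, margin=0.2):
        # audio anchors visual positives/negatives, and vice versa
        return (F.triplet_margin_loss(a, v, v_neg, margin=margin) +
                F.triplet_margin_loss(v, a, a_neg, margin=margin))

    def total_loss(model, audio, visual, audio_neg, visual_neg, labels):
        a, v = model(audio, visual)
        a_neg, v_neg = model(audio_neg, visual_neg)
        label_loss = (F.cross_entropy(model.classifier(a), labels) +
                      F.cross_entropy(model.classifier(v), labels))
        return label_loss + cross_triplet(a, v, a_neg, v_neg)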

Compositionality-Aware Graph2Seq Learning

1 code implementation28 Jan 2022 Takeshi D. Itoh, Takatomi Kubo, Kazushi Ikeda

It is expected that the compositionality in a graph can be associated with the compositionality in the output sequence in many graph2seq tasks.

Code Summarization · Source Code Summarization

Multi-Level Attention Pooling for Graph Neural Networks: Unifying Graph Representations with Multiple Localities

no code implementations2 Mar 2021 Takeshi D. Itoh, Takatomi Kubo, Kazushi Ikeda

It has an attention pooling layer for each message passing step and computes the final graph representation by unifying the layer-wise graph representations.

Graph Classification
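
A simplified dense-adjacency sketch of the idea in the snippet above: one attention-pooling readout per message-passing step, with the layer-wise graph vectors summed into a single representation. Summation is just one plausible way to unify them; the paper's exact architecture may differ.

    import torch
    import torch.nn as nn

    class MultiLevelAttentionPooling(nn.Module):
        def __init__(self, dim, n_steps):
            super().__init__()
            self.message = nn.ModuleList([nn.Linear(dim, dim) for _ in range(n_steps)])
            self.att = nn.ModuleList([nn.Linear(dim, 1) for _ in range(n_steps)])

        def forward(self, x, adj):
            # x: (n_nodes, dim) node features; adj: (n_nodes, n_nodes) adjacency matrix
            graph_vecs = []
            for msg, att in zip(self.message, self.att):
                x = torch.relu(msg(adj @ x))              # one message-passing step
                w = torch.softmax(att(x), dim=0)          # per-node attention weights, (n_nodes, 1)
                graph_vecs.append((w * x).sum(dim=0))     # attention-pooled graph vector for this level
            return torch.stack(graph_vecs).sum(dim=0)     # unify the layer-wise representations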

Detecting Unknown Behaviors by Pre-defined Behaviours: A Bayesian Non-parametric Approach

no code implementations25 Nov 2019 Jin Watanabe, Takatomi Kubo, Fan Yang, Kazushi Ikeda

An automatic mouse behavior recognition system can considerably reduce the workload of experimenters and facilitate the analysis process.

A Hierarchical Mixture Density Network

no code implementations23 Oct 2019 Fan Yang, Jaymar Soriano, Takatomi Kubo, Kazushi Ikeda

One complicated relationship among three correlated variables is a two-layer hierarchical many-to-many mapping.
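
For context, a minimal single-level mixture density network that parameterizes p(y|x) as a Gaussian mixture; the hierarchical two-layer mapping in the paper stacks such conditional mixtures, which this sketch does not reproduce.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class MDN(nn.Module):
        def __init__(self, x_dim, y_dim, hidden, K):
            super().__init__()
            self.trunk = nn.Sequential(nn.Linear(x_dim, hidden), nn.Tanh())
            self.pi = nn.Linear(hidden, K)                 # mixture weights
            self.mu = nn.Linear(hidden, K * y_dim)         # component means
            self.log_sigma = nn.Linear(hidden, K * y_dim)  # component log std-devs
            self.K, self.y_dim = K, y_dim

        def forward(self, x):
            h = self.trunk(x)
            log_pi = F.log_softmax(self.pi(h), dim=-1)
            mu = self.mu(h).view(-1, self.K, self.y_dim)
            sigma = self.log_sigma(h).view(-1, self.K, self.y_dim).exp()
            return log_pi, mu, sigma

    def mdn_nll(log_pi, mu, sigma, y):
        # negative log-likelihood of y under the predicted mixture
        log_comp = torch.distributions.Normal(mu, sigma).log_prob(y.unsqueeze(1)).sum(-1)  # (B, K)
        return -torch.logsumexp(log_pi + log_comp, dim=-1).mean()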

Skip-connection and batch-normalization improve data separation ability

no code implementations20 Mar 2019 Yasutaka Furusho, Kazushi Ikeda

ResNet and batch normalization (BN) achieve high performance even when only a few labeled examples are available.
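
For reference, a generic pre-activation residual block combining the two components the paper analyses, a skip connection and batch normalization; this is a standard construction, not the specific network studied in the paper.

    import torch
    import torch.nn as nn

    class ResidualBlock(nn.Module):
        """Pre-activation block: BN -> ReLU -> Linear, twice, plus an identity skip."""
        def __init__(self, dim):
            super().__init__()
            self.body = nn.Sequential(
                nn.BatchNorm1d(dim), nn.ReLU(), nn.Linear(dim, dim),
                nn.BatchNorm1d(dim), nn.ReLU(), nn.Linear(dim, dim),
            )

        def forward(self, x):
            return x + self.body(x)   # skip connection preserves the identity path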

Efficient learning with robust gradient descent

no code implementations1 Jun 2017 Matthew J. Holland, Kazushi Ikeda

Minimizing the empirical risk is a popular training strategy, but for learning tasks where the data may be noisy or heavy-tailed, one may require many observations in order to generalize well.
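
A toy sketch of the general strategy behind robust gradient descent: replace the empirical-mean gradient with a robust aggregate of per-sample gradients. Median-of-means is used here purely as an illustration; the paper derives its own M-estimator-based gradient estimate.

    import numpy as np

    def median_of_means(grads, n_blocks=5):
        # grads: (n_samples, dim) per-sample gradients
        blocks = np.array_split(grads, n_blocks)
        block_means = np.stack([b.mean(axis=0) for b in blocks])
        return np.median(block_means, axis=0)         # coordinate-wise median of block means

    def robust_gd_step(w, X, y, lr=0.1):
        # per-sample gradients of the squared loss for a linear model y ~ X @ w
        residual = X @ w - y                          # (n,)
        per_sample_grads = residual[:, None] * X      # (n, d)
        return w - lr * median_of_means(per_sample_grads)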
