no code implementations • ICML 2020 • Yasutoshi Ida, Sekitoshi Kanai, Yasuhiro Fujiwara, Tomoharu Iwata, Koh Takeuchi, Hisashi Kashima
This is because coordinate descent iteratively updates all the parameters in the objective until convergence.
1 code implementation • 12 Jun 2025 • Kyohei Atarashi, Satoshi Oyama, Hiromi Arai, Hisashi Kashima
In this work, we propose the box-constrained softmax (BCSoftmax) function, a novel generalization of the softmax function that explicitly enforces lower and upper bounds on output probabilities.
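The paper's exact BCSoftmax construction is not shown in this excerpt. As a minimal illustrative sketch (names and the mixing construction are mine, not the paper's), one simple way to guarantee a uniform lower bound eps on every class probability is to mix the ordinary softmax with the uniform distribution:

```python
import numpy as np

def bounded_softmax(z, eps=0.05):
    """Softmax whose outputs are guaranteed to lie in [eps, 1 - (K-1)*eps].

    Illustrative sketch only: mixes the ordinary softmax with a uniform
    distribution so every class keeps at least `eps` probability mass.
    Not the BCSoftmax construction from the paper.
    """
    z = np.asarray(z, dtype=float)
    k = z.size
    assert k * eps <= 1.0, "bounds must be feasible: K * eps <= 1"
    e = np.exp(z - z.max())           # numerically stable softmax
    p = e / e.sum()
    return (1.0 - k * eps) * p + eps  # affine mix: still sums to 1

p = bounded_softmax([2.0, 0.5, -1.0], eps=0.05)
```

Because the mix is affine, the outputs still sum to one, and the implied upper bound 1 - (K-1)*eps follows from the lower bounds on the other classes.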
no code implementations • 12 Mar 2025 • Katsumi Takahashi, Koh Takeuchi, Hisashi Kashima
Machine learning models usually assume that a set of feature values used to obtain an output is fixed in advance.
no code implementations • 24 Feb 2025 • Yaqi Sun, Kyohei Atarashi, Koh Takeuchi, Hisashi Kashima
Large Vision-Language Models (LVLMs) integrate image encoders with Large Language Models (LLMs) to process multi-modal inputs and perform complex visual tasks.
no code implementations • 29 Dec 2024 • Shonosuke Harada, Ryosuke Yoneda, Hisashi Kashima
While most studies focus on treatment effect estimation for individual targets, some contexts require understanding the treatment effect on a group of targets, especially when the relationships among them are represented as a graph structure.
no code implementations • 18 Dec 2024 • Junki Mori, Kosuke Kihara, Taiki Miyagawa, Akinori F. Ebihara, Isamu Teranishi, Hisashi Kashima
Our contribution is the novel Federated learning with Weighted Cluster Aggregation (FedWCA) method, designed to mitigate both domain shifts and privacy concerns with only unlabeled data.
no code implementations • 30 Nov 2024 • Yasuaki Sumita, Koh Takeuchi, Hisashi Kashima
To test the effectiveness of these methods, we conducted experiments on GPT-3.5 and GPT-4 to evaluate the influence of six biases on the outputs before and after applying these methods.
1 code implementation • 31 Oct 2024 • Tomas Rigaux, Hisashi Kashima
Our experiments, performed on smaller networks than those in the original AlphaZero paper, show that this new architecture outperforms previous architectures with a similar number of parameters, increasing playing strength an order of magnitude faster.
no code implementations • 25 Oct 2024 • Ryota Maruo, Koh Takeuchi, Hisashi Kashima
We conducted experiments to learn a strategy-proof matching from matching examples with different numbers of agents.
no code implementations • 23 Oct 2024 • Ryota Maruo, Koh Takeuchi, Hisashi Kashima
NRR is built from a differentiable relaxation of RR and can be trained to learn the agent ordering used for RR.
no code implementations • 2 Oct 2024 • Xiaotian Lu, Jiyi Li, Koh Takeuchi, Hisashi Kashima
Question answering (QA) tasks have been extensively studied in the field of natural language processing (NLP).
no code implementations • 28 Sep 2024 • Jiuding Duan, Jiyi Li, Yukino Baba, Hisashi Kashima
Intransitivity is a critical issue in pairwise preference modeling.
no code implementations • 10 Jul 2024 • Shun Ito, Hisashi Kashima
Crowdsourcing is an easy, cheap, and fast way to perform large-scale quality assessment; however, human judgments are often influenced by cognitive biases, which lower their credibility.
1 code implementation • 17 May 2024 • Xiaotian Lu, Jiyi Li, Zhen Wan, Xiaofeng Lin, Koh Takeuchi, Hisashi Kashima
The development of methods to explain models has become a key issue in the reliability of deep learning models in many important applications.
no code implementations • 15 Mar 2024 • Guoxi Zhang, Han Bao, Hisashi Kashima
To address this problem, the present study introduces a framework that consolidates offline preferences and virtual preferences for PbRL, which are comparisons between the agent's behaviors and the offline data.
no code implementations • CVPR 2024 • Yu Mitsuzumi, Akisato Kimura, Hisashi Kashima
Source-free Domain Adaptation (SFDA) is an emerging and challenging research area that addresses the problem of unsupervised domain adaptation (UDA) without source data.
1 code implementation • 25 Sep 2023 • Xiaofeng Lin, Guoxi Zhang, Xiaotian Lu, Han Bao, Koh Takeuchi, Hisashi Kashima
One popular application of this estimation lies in the prediction of the impact of a treatment (e.g., a promotion) on an outcome (e.g., sales) of a particular unit (e.g., an item), known as the individual treatment effect (ITE).
1 code implementation • 21 Aug 2023 • Kosuke Yoshimura, Hisashi Kashima
A major advantage of the proposed method is that it can be applied to almost all variants of supervised learning problems by simply adding a selector network and changing the objective function for existing models, without explicitly assuming a model of the noise in crowd annotations.
no code implementations • 18 Aug 2023 • Jill-Jênn Vie, Hisashi Kashima
Knowledge tracing consists in predicting the performance of some students on new questions given their performance on previous questions, and can be a prior step to optimizing assessment and learning.
1 code implementation • The 29th ACM SIGKDD Conference on Knowledge Discovery and Data Mining 2023 • Koh Takeuchi, Ryo Nishida, Hisashi Kashima, Masaki Onishi
To address this problem, we propose a spatial intervention neural network (SINet) that leverages the hierarchical structure of spatial graphs to learn a rich representation of the covariates and the treatments and exploits this representation to predict a time series of treatment outcomes.
no code implementations • 20 Jul 2023 • Ryosuke Ueda, Koh Takeuchi, Hisashi Kashima
The experimental results suggest that the combination of Soft D&S and data splitting as a fairness option is effective for dense data, whereas weighted majority voting is effective for sparse data.
no code implementations • 25 Feb 2023 • Ryosuke Ueda, Koh Takeuchi, Hisashi Kashima
Crowdsourcing has been widely used to efficiently obtain labeled datasets for supervised learning from large numbers of human resources at low cost.
1 code implementation • 8 Feb 2023 • Xiaotian Lu, Jiyi Li, Koh Takeuchi, Hisashi Kashima
Crowdsourcing has been used to collect data at scale in numerous fields.
1 code implementation • 20 Dec 2022 • Jill-Jênn Vie, Tomas Rigaux, Hisashi Kashima
Factorization machines (FMs) are a powerful tool for regression and classification in the context of sparse observations, and have been successfully applied to collaborative filtering, especially when side information about users or items is available.
1 code implementation • 29 Nov 2022 • Guoxi Zhang, Hisashi Kashima
To overcome this drawback, the present study proposes a latent variable model that infers a set of policies from data, allowing an agent to use, as its behavior policy, the policy that best describes a particular trajectory.
no code implementations • 22 Oct 2022 • Hisashi Kashima, Satoshi Oyama, Hiromi Arai, Junichiro Mori
Human computation is an approach to solving problems that are difficult for AI alone; it involves the cooperation of many humans.
1 code implementation • 21 Aug 2022 • Ryoma Sato, Makoto Yamada, Hisashi Kashima
The main difficulty in investigating the effects is that we need to know counterfactual results, which are not available in reality.
no code implementations • 1 Jun 2022 • Yoichi Chikahara, Makoto Yamada, Hisashi Kashima
Finding the features relevant to the difference in treatment effects is essential to unveil the underlying causal mechanisms.
no code implementations • 27 Apr 2022 • Shin'ya Yamaguchi, Sekitoshi Kanai, Atsutoshi Kumagai, Daiki Chijiwa, Hisashi Kashima
To transfer source knowledge without these assumptions, we propose a transfer learning method that uses deep generative models and is composed of the following two stages: pseudo pre-training (PP) and pseudo semi-supervised learning (P-SSL).
2 code implementations • 15 Dec 2021 • Sein Minn, Jill-Jenn Vie, Koh Takeuchi, Hisashi Kashima, Feida Zhu
IKT predicts future student performance using a Tree-Augmented Naive Bayes classifier (TAN), so its predictions are easier to explain than those of deep learning-based student models.
no code implementations • 8 Nov 2021 • Guoxi Zhang, Hisashi Kashima
This paper addresses the lack of reward in a batch reinforcement learning setting by learning a reward function from preferences.
no code implementations • 27 Jun 2021 • Xiaotian Lu, Arseny Tolmachev, Tatsuya Yamamoto, Koh Takeuchi, Seiji Okajima, Tomoyoshi Takebayashi, Koji Maruhashi, Hisashi Kashima
In order to compare various saliency-based XAI methods quantitatively, several automated evaluation schemes have been proposed. However, there is no guarantee that such automated metrics correctly evaluate explainability: a high rating from an automated evaluation scheme does not necessarily mean high explainability for humans.
1 code implementation • 11 Jun 2021 • Luu Huu Phuc, Koh Takeuchi, Seiji Okajima, Arseny Tolmachev, Tomoyoshi Takebayashi, Koji Maruhashi, Hisashi Kashima
Multi-relational graphs are a ubiquitous and important data structure, allowing flexible representation of multiple types of interactions and relations between entities.
1 code implementation • 30 May 2021 • Ryoma Sato, Makoto Yamada, Hisashi Kashima
The original study on WMD reported that WMD outperforms classical baselines such as bag-of-words (BOW) and TF-IDF by significant margins in various datasets.
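For context on the baselines this entry mentions, here is a minimal pure-Python sketch of a TF-IDF cosine-similarity baseline (function names, the smoothing choice, and the toy corpus are mine, not from the paper):

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """Build sparse TF-IDF vectors (raw term frequency x smoothed IDF)."""
    n = len(docs)
    tfs = [Counter(doc.lower().split()) for doc in docs]
    df = Counter()
    for tf in tfs:
        df.update(tf.keys())
    idf = {w: math.log(n / df[w]) + 1.0 for w in df}  # smoothed IDF
    return [{w: c * idf[w] for w, c in tf.items()} for tf in tfs]

def cosine(u, v):
    """Cosine similarity between two dict-backed sparse vectors."""
    dot = sum(x * v.get(w, 0.0) for w, x in u.items())
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

docs = ["obama speaks to the media in illinois",
        "the president greets the press in chicago",
        "a recipe for banana bread"]
vecs = tfidf_vectors(docs)
sim_related = cosine(vecs[0], vecs[1])    # paraphrases share only stopwords
sim_unrelated = cosine(vecs[0], vecs[2])  # no shared words at all
```

The first two sentences illustrate WMD's motivating weakness of such baselines: the paraphrase pair overlaps only in stopwords, so a bag-of-words representation captures little of their semantic similarity.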
no code implementations • 24 May 2021 • Maya Okawa, Tomoharu Iwata, Yusuke Tanaka, Hiroyuki Toda, Takeshi Kurashima, Hisashi Kashima
Hawkes processes offer a central tool for modeling diffusion processes, in which the influence of past events is described by the triggering kernel.
no code implementations • EACL 2021 • Ayato Toyokuni, Sho Yokoi, Hisashi Kashima, Makoto Yamada
The problem of estimating the probability distribution of labels has been widely studied as a label distribution learning (LDL) problem, whose applications include age estimation, emotion analysis, and semantic segmentation.
1 code implementation • 8 Feb 2021 • Koh Takeuchi, Ryo Nishida, Hisashi Kashima, Masaki Onishi
In this paper, we consider the problem of estimating the effects of crowd movement guidance from past data.
1 code implementation • 19 Oct 2020 • Ryoma Sato, Makoto Yamada, Hisashi Kashima
We use a bias correction method to estimate the potential impact of choosing a publication venue effectively and to recommend venues based on the potential impact of papers in each venue.
no code implementations • 29 Sep 2020 • Shonosuke Harada, Hisashi Kashima
Outcome estimation of treatments for target individuals is an important foundation for decision making based on causal relations.
no code implementations • 18 Sep 2020 • Yang Liu, Hisashi Kashima
Predicting the chemical properties of compounds is crucial in discovering novel materials and drugs with specific desired characteristics.
no code implementations • 1 Aug 2020 • Yukino Baba, Jiyi Li, Hisashi Kashima
We propose an approach, called CrowDEA, which estimates the embeddings of ideas in a multiple-criteria preference space, the best viewpoint for each idea, and the preference criterion of each evaluator, to obtain a set of frontier ideas.
no code implementations • 10 Jun 2020 • Akira Tanimoto, Tomoya Sakai, Takashi Takenouchi, Hisashi Kashima
Predicting which action (treatment) will lead to a better outcome is a central task in decision support systems.
1 code implementation • NeurIPS 2020 • Ryoma Sato, Makoto Yamada, Hisashi Kashima
This study examines the time complexities of the unbalanced optimal transport problems from an algorithmic perspective for the first time.
no code implementations • 11 May 2020 • Shonosuke Harada, Hisashi Kashima
The individual treatment effect (ITE) represents the expected improvement in the outcome of applying a particular action to a particular target, and plays an important role in decision making in various domains.
no code implementations • 17 Feb 2020 • Yoichi Chikahara, Shinsaku Sakaue, Akinori Fujino, Hisashi Kashima
To avoid restrictive functional assumptions, we define the probability of individual unfairness (PIU) and solve an optimization problem where PIU's upper bound, which can be estimated from data, is controlled to be close to zero.
1 code implementation • 8 Feb 2020 • Ryoma Sato, Makoto Yamada, Hisashi Kashima
Through experiments, we show that the addition of random features enables GNNs to solve various problems that normal GNNs, including the graph convolutional networks (GCNs) and graph isomorphism networks (GINs), cannot solve.
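A tiny sketch of the idea this entry describes (the toy two-layer mean-aggregation "GCN" and all names here are mine, for illustration only): on a regular graph with identical node features, message passing gives every node the same embedding, and appending i.i.d. random features breaks that symmetry.

```python
import numpy as np

def add_random_features(x, rng, dim=1):
    """Append i.i.d. random features to a node feature matrix x (n_nodes, d).

    With identical inputs, message passing maps all nodes of a regular
    graph to the same embedding; the random channel breaks this symmetry.
    """
    r = rng.standard_normal((x.shape[0], dim))
    return np.concatenate([x, r], axis=1)

def gcn_layer(adj, x):
    """One mean-aggregation message-passing layer (illustrative, no weights)."""
    deg = adj.sum(axis=1, keepdims=True)
    return np.tanh((adj @ x) / deg + x)

# 4-cycle: every node is structurally identical.
adj = np.array([[0, 1, 0, 1],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [1, 0, 1, 0]], dtype=float)
x = np.ones((4, 2))                        # identical initial features
plain = gcn_layer(adj, gcn_layer(adj, x))  # all rows equal: indistinguishable
rng = np.random.default_rng(0)
rand = gcn_layer(adj, gcn_layer(adj, add_random_features(x, rng)))
```

After two layers, `plain` has identical rows for all four nodes, while `rand` assigns each node a distinct embedding.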
1 code implementation • 5 Feb 2020 • Ryoma Sato, Marco Cuturi, Makoto Yamada, Hisashi Kashima
Building on Mémoli (2011), who proposed to represent each point in each distribution as the 1D distribution of its distances to all other points, we introduce in this paper the Anchor Energy (AE) and Anchor Wasserstein (AW) distances, which are respectively the energy and Wasserstein distances instantiated on such representations.
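A minimal sketch of the underlying representation (the function name and toy example are mine): each point is described by the sorted vector of its distances to all other points, which is invariant to rotations and translations of the point set.

```python
import numpy as np

def anchor_representation(points):
    """Represent each point by the sorted vector of its distances to all
    other points in the set (a discretized 1D distance distribution)."""
    # Pairwise Euclidean distance matrix, shape (n, n).
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    # Sort each row and drop the zero self-distance in column 0.
    return np.sort(d, axis=1)[:, 1:]

pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 2.0]])
rot = np.array([[0.0, -1.0], [1.0, 0.0]])  # 90-degree rotation
reps = anchor_representation(pts)
reps_rot = anchor_representation(pts @ rot.T)
```

AE and AW then compare distributions by comparing these per-point distance profiles rather than raw coordinates, which is what makes the construction insensitive to rigid transformations.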
no code implementations • NeurIPS 2019 • Yasutoshi Ida, Yasuhiro Fujiwara, Hisashi Kashima
Block Coordinate Descent is a standard approach to obtain the parameters of Sparse Group Lasso, and iteratively updates the parameters for each parameter group.
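The per-group update at the heart of block coordinate descent for (sparse) group lasso is a group soft-thresholding step; a minimal sketch (names mine, and this shows only the proximal operator, not the full solver):

```python
import numpy as np

def group_soft_threshold(v, lam):
    """Proximal operator of the group-lasso penalty lam * ||v||_2.

    Shrinks the whole group toward zero, and zeroes it out entirely
    when its norm falls below lam; this is what produces group-level
    sparsity in block coordinate descent.
    """
    norm = np.linalg.norm(v)
    if norm <= lam:
        return np.zeros_like(v)
    return (1.0 - lam / norm) * v
```

For example, a group with norm 5 is shrunk by a factor of 1 - lam/5, while a group with norm below lam is eliminated, which is why many parameter groups can be skipped once they hit zero.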
no code implementations • NeurIPS 2019 • Ryoma Sato, Makoto Yamada, Hisashi Kashima
We theoretically demonstrate that the most powerful GNN can learn approximation algorithms for the minimum dominating set problem and the minimum vertex cover problem with some approximation ratios with the aid of the theory of distributed local algorithms.
1 code implementation • 26 Feb 2019 • Ryoma Sato, Makoto Yamada, Hisashi Kashima
We propose HiSampler, the hard instance sampler, to model the hard instance distribution of graph algorithms.
no code implementations • 26 Feb 2019 • Tatsuya Shiraishi, Tam Le, Hisashi Kashima, Makoto Yamada
In this paper, we propose the topological Bayesian optimization, which can efficiently find an optimal solution from structured data using topological information.
1 code implementation • NeurIPS 2019 • Rafael Pinot, Laurent Meunier, Alexandre Araujo, Hisashi Kashima, Florian Yger, Cédric Gouy-Pailler, Jamal Atif
This paper investigates the theory of robustness against adversarial attacks.
no code implementations • 23 Jan 2019 • Ryoma Sato, Makoto Yamada, Hisashi Kashima
The recent advancements in graph neural networks (GNNs) have led to state-of-the-art performances in various applications, including chemo-informatics, question-answering systems, and recommender systems.
2 code implementations • 8 Nov 2018 • Jill-Jênn Vie, Hisashi Kashima
Knowledge tracing is a sequence prediction problem where the goal is to predict the outcomes of students on questions as they interact with a learning platform.
no code implementations • 4 Oct 2018 • Shonosuke Harada, Hirotaka Akita, Masashi Tsubaki, Yukino Baba, Ichigaku Takigawa, Yoshihiro Yamanishi, Hisashi Kashima
Graphs are general and powerful data representations which can model complex real-world phenomena, ranging from chemical compounds to social networks; however, effective feature extraction from graphs is not a trivial task, and much work has been done in the field of machine learning and data mining.
1 code implementation • 4 Jul 2018 • Hirotaka Akita, Kosuke Nakago, Tomoki Komatsu, Yohei Sugawara, Shin-ichi Maeda, Yukino Baba, Hisashi Kashima
A possible approach to answer this question is to visualize evidence substructures responsible for the predictions.
2 code implementations • 3 Sep 2017 • Jill-Jênn Vie, Florian Yger, Ryan Lahfa, Basile Clement, Kévin Cocchi, Thomas Chalumeau, Hisashi Kashima
Item cold-start is a classical issue in recommender systems that affects anime and manga recommendations as well.
no code implementations • NeurIPS 2016 • Kaito Fujii, Hisashi Kashima
In contrast, there have been few methods for stream-based active learning based on adaptive submodularity.
1 code implementation • 8 Jun 2015 • Junpei Komiyama, Junya Honda, Hisashi Kashima, Hiroshi Nakagawa
We study the $K$-armed dueling bandit problem, a variation of the standard stochastic bandit problem where the feedback is limited to relative comparisons of a pair of arms.
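A small sketch of the feedback model this entry defines (the environment class and preference matrix are mine for illustration; this is not an algorithm from the paper): each interaction reveals only the noisy winner of a duel between two arms.

```python
import random

class DuelingBandit:
    """K-armed dueling bandit environment: feedback is a noisy pairwise
    comparison. pref[i][j] is the probability that arm i beats arm j,
    with pref[i][j] + pref[j][i] = 1."""

    def __init__(self, pref, seed=0):
        self.pref = pref
        self.rng = random.Random(seed)

    def duel(self, i, j):
        """Play arms i and j against each other; return the winner."""
        return i if self.rng.random() < self.pref[i][j] else j

# Arm 0 is the Condorcet winner: it beats every other arm with prob > 1/2.
pref = [[0.5, 0.7, 0.8],
        [0.3, 0.5, 0.6],
        [0.2, 0.4, 0.5]]
env = DuelingBandit(pref)

# Naive uniform exploration over all pairs, tallying wins per arm.
wins = [0, 0, 0]
for _ in range(2000):
    for i in range(3):
        for j in range(i + 1, 3):
            wins[env.duel(i, j)] += 1
best = max(range(3), key=wins.__getitem__)
```

Uniform exploration recovers the Condorcet winner here, but wastes comparisons on clearly inferior pairs; dueling-bandit algorithms aim to find the best arm with far lower regret.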
no code implementations • NeurIPS 2011 • Ryota Tomioka, Taiji Suzuki, Kohei Hayashi, Hisashi Kashima
We analyze the statistical performance of a recently proposed convex tensor decomposition algorithm.
no code implementations • NeurIPS 2007 • Tsuyoshi Kato, Hisashi Kashima, Masashi Sugiyama, Kiyoshi Asai
In this paper, we propose a novel MTL algorithm that can overcome these problems.
no code implementations • NeurIPS 2007 • Masashi Sugiyama, Shinichi Nakajima, Hisashi Kashima, Paul V. Buenau, Motoaki Kawanabe
In this paper, we propose a direct importance estimation method that does not require the input density estimates.