1 code implementation • 26 Jun 2024 • Yunzhen He, Hiroaki Yamagiwa, Hidetoshi Shimodaira
Our approach first extracts the relevant sections from the EHR.
no code implementations • 16 Jun 2024 • Hiroaki Yamagiwa, Momose Oyama, Hidetoshi Shimodaira
Cosine similarity is widely used to measure the similarity between two embeddings, and it is commonly interpreted in terms of the angle between the vectors or their correlation coefficient.
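As a minimal illustration of the two interpretations mentioned here (a NumPy sketch, not the paper's code): the snippet below computes the cosine similarity, the corresponding angle, and the correlation coefficient, and checks that for mean-centered vectors the cosine and the Pearson correlation coincide.

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine of the angle between two embedding vectors."""
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

rng = np.random.default_rng(0)
a, b = rng.normal(size=300), rng.normal(size=300)

cos = cosine_similarity(a, b)
angle = np.degrees(np.arccos(cos))  # interpretation as an angle
corr = np.corrcoef(a, b)[0, 1]      # interpretation via correlation

print(f"cosine={cos:.4f}, angle={angle:.1f} deg, correlation={corr:.4f}")

# For mean-centered vectors, cosine similarity equals the Pearson correlation.
ac, bc = a - a.mean(), b - b.mean()
print(np.isclose(cosine_similarity(ac, bc), np.corrcoef(ac, bc)[0, 1]))  # True
```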
1 code implementation • 3 Jun 2024 • Hiroaki Yamagiwa, Ryoma Hashimoto, Kiwamu Arakane, Ken Murakami, Shou Soeda, Momose Oyama, Mariko Okada, Hidetoshi Shimodaira
In this study, we demonstrate that BioConceptVec embeddings, along with our own embeddings trained on PubMed abstracts, contain information about drug-gene relations and can predict target genes from a given drug through analogy computations.
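A rough sketch of the analogy computation described here (the concept names and random vectors are hypothetical stand-ins for BioConceptVec-style embeddings):

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy stand-in for the embedding table; names are illustrative only.
emb = {c: rng.normal(size=100) for c in
       ["drug_A", "gene_A", "drug_B", "gene_B", "gene_C"]}

def cos(u, v):
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

def predict_targets(drug_a, gene_a, drug_b, emb, top_k=3):
    """Analogy: gene_a is to drug_a as ? is to drug_b."""
    query = emb[gene_a] - emb[drug_a] + emb[drug_b]
    candidates = [c for c in emb if c not in (drug_a, gene_a, drug_b)]
    return sorted(candidates, key=lambda c: cos(query, emb[c]), reverse=True)[:top_k]

print(predict_targets("drug_A", "gene_A", "drug_B", emb))
```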
1 code implementation • 11 Jan 2024 • Yihua Zhu, Hidetoshi Shimodaira
The primary aim of knowledge graph embedding (KGE) is to learn low-dimensional representations of entities and relations for predicting missing facts.
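For context, a minimal example of link prediction with low-dimensional embeddings, using the standard TransE scoring rule (a common KGE baseline, not the model proposed in this paper):

```python
import numpy as np

rng = np.random.default_rng(0)
dim, n_entities, n_relations = 50, 100, 10
E = rng.normal(size=(n_entities, dim))   # entity embeddings
R = rng.normal(size=(n_relations, dim))  # relation embeddings

def predict_tail(h, r, top_k=5):
    """Rank candidate tails for the query (h, r, ?): a triple is plausible
    under TransE when E[h] + R[r] is close to E[t]."""
    distances = np.linalg.norm(E[h] + R[r] - E, axis=1)
    return np.argsort(distances)[:top_k]  # smallest distance = most plausible

print(predict_tail(0, 3))
```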
1 code implementation • 11 Jan 2024 • Hiroaki Yamagiwa, Yusuke Takase, Hidetoshi Shimodaira
Inspired by Word Tour, a one-dimensional word embedding method, we aim to improve the clarity of the word embedding space by maximizing the semantic continuity of the axes.
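The following is a simple nearest-neighbor heuristic for ordering axes so that consecutive axes are semantically similar (an illustration of the continuity objective only; the paper's actual procedure may differ):

```python
import numpy as np

rng = np.random.default_rng(0)
axes = rng.normal(size=(30, 300))  # one representative vector per axis (toy data)

def greedy_tour(vectors):
    """Order axes so consecutive axes are similar (nearest-neighbor heuristic)."""
    unit = vectors / np.linalg.norm(vectors, axis=1, keepdims=True)
    order, remaining = [0], set(range(1, len(vectors)))
    while remaining:
        last = unit[order[-1]]
        nxt = max(remaining, key=lambda i: float(unit[i] @ last))
        order.append(nxt)
        remaining.remove(nxt)
    return order

print(greedy_tour(axes))
```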
1 code implementation • 21 Sep 2023 • Yoichi Ishibashi, Hidetoshi Shimodaira
We explore a knowledge sanitization approach to mitigate the privacy concerns associated with large language models (LLMs).
1 code implementation • 22 May 2023 • Hiroaki Yamagiwa, Momose Oyama, Hidetoshi Shimodaira
This study utilizes Independent Component Analysis (ICA) to unveil a consistent semantic structure within embeddings of words or images.
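A minimal sketch of the ICA step using scikit-learn's FastICA on a stand-in embedding matrix (the paper applies it to real word or image embeddings):

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 50))  # stand-in for an (n_words x dim) embedding matrix

ica = FastICA(n_components=50, whiten="unit-variance", random_state=0)
S = ica.fit_transform(X)  # rows: words, columns: independent semantic axes

# Inspect one axis by its extreme entries; with real embeddings these tend
# to group semantically related words.
axis = 0
print(np.argsort(-S[:, axis])[:10])  # indices of words loading highest on the axis
```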
1 code implementation • 22 May 2023 • Yihua Zhu, Hidetoshi Shimodaira
The main objective of Knowledge Graph (KG) embeddings is to learn low-dimensional representations of entities and relations, enabling the prediction of missing facts.
no code implementations • 19 Dec 2022 • Momose Oyama, Sho Yokoi, Hidetoshi Shimodaira
Distributed representations of words encode lexical semantic information, but what type of information is encoded and how?
1 code implementation • 11 Nov 2022 • Hiroaki Yamagiwa, Sho Yokoi, Hidetoshi Shimodaira
The proposed method is based on the Fused Gromov-Wasserstein distance, which simultaneously considers the similarity of the word embeddings and the SAM (self-attention matrix) when computing the optimal transport between two sentences.
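A small sketch with the POT (Python Optimal Transport) library; here pairwise distance matrices stand in for the SAM-based structure costs used in the paper:

```python
import numpy as np
import ot  # POT: Python Optimal Transport

rng = np.random.default_rng(0)
X, Y = rng.normal(size=(5, 8)), rng.normal(size=(7, 8))  # token embeddings of two sentences

M = ot.dist(X, Y)                      # feature cost between words across sentences
C1, C2 = ot.dist(X, X), ot.dist(Y, Y)  # intra-sentence structure (stand-in for the SAM)
p = np.full(len(X), 1 / len(X))        # uniform weights over tokens
q = np.full(len(Y), 1 / len(Y))

# alpha trades off the feature cost M against the structural cost (C1 vs C2).
T = ot.gromov.fused_gromov_wasserstein(M, C1, C2, p, q, alpha=0.5)
print(T.shape)  # (5, 7): optimal transport plan between the two sentences
```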
no code implementations • 28 Dec 2021 • Ruixing Cao, Akifumi Okuno, Kei Nakagawa, Hidetoshi Shimodaira
For correcting the asymptotic bias with fewer observations, this paper proposes a local radial regression (LRR) and its logistic regression variant called local radial logistic regression (LRLR), by combining the advantages of LPoR and MS-$k$-NN.
no code implementations • 18 May 2021 • Masahiro Naito, Sho Yokoi, Geewook Kim, Hidetoshi Shimodaira
(Q2) Ordinary additive compositionality can be seen as an AND operation of word meanings, but it is not well understood how other operations, such as OR and NOT, can be computed by the embeddings.
no code implementations • NeurIPS 2020 • Akifumi Okuno, Hidetoshi Shimodaira
The weights and the parameter $k \in \mathbb{N}$ regulate its bias-variance trade-off, and the trade-off implicitly affects the convergence rate of the excess risk for the $k$-NN classifier; several existing studies have considered selecting the optimal $k$ and weights to obtain a faster convergence rate.
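For intuition, a generic weighted $k$-NN example in scikit-learn (not the paper's optimally weighted estimator), showing how $k$ and the weighting scheme together govern the bias-variance trade-off:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

X, y = make_classification(n_samples=500, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Small k: low bias, high variance; large k: the reverse. Distance weighting
# changes the trade-off as well.
for k in (1, 5, 25):
    for weights in ("uniform", "distance"):
        clf = KNeighborsClassifier(n_neighbors=k, weights=weights).fit(X_tr, y_tr)
        print(k, weights, clf.score(X_te, y_te))
```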
no code implementations • 2 May 2020 • Morihiro Mizutani, Akifumi Okuno, Geewook Kim, Hidetoshi Shimodaira
Multimodal relational data analysis has become increasingly important in recent years for exploring relationships across different domains of data, such as images and their text tags obtained from social networking services (e.g., Flickr).
no code implementations • 8 Feb 2020 • Akifumi Okuno, Hidetoshi Shimodaira
The weights and the parameter $k \in \mathbb{N}$ regulate its bias-variance trade-off, and the trade-off implicitly affects the convergence rate of the excess risk for the $k$-NN classifier; several existing studies have considered selecting the optimal $k$ and weights to obtain a faster convergence rate.
1 code implementation • 14 Oct 2019 • Jen Ning Lim, Makoto Yamada, Wittawat Jitkrittum, Yoshikazu Terada, Shigeyuki Matsui, Hidetoshi Shimodaira
An approach for addressing this is to condition on the selection procedure, accounting for how the data were used to generate the hypotheses and preventing information from being used again after selection.
no code implementations • 22 Jul 2019 • Akifumi Okuno, Hidetoshi Shimodaira
A collection of $U \in \mathbb{N}$ data vectors is called a $U$-tuple, and the association strength among the vectors of a tuple is termed the hyperlink weight, which is assumed to be symmetric with respect to permutations of the entries in the index.
1 code implementation • 27 Feb 2019 • Geewook Kim, Akifumi Okuno, Kazuki Fukui, Hidetoshi Shimodaira
In addition to the parameters of the neural networks, we optimize the weights of the inner product, allowing both positive and negative values.
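A minimal NumPy sketch of an inner product with signed weights, as described here (in the paper the weights are learned jointly with the network parameters):

```python
import numpy as np

def weighted_inner_product(x, y, w):
    """<x, y>_w = sum_i w_i * x_i * y_i, where w may have negative entries."""
    return float(np.sum(w * x * y))

rng = np.random.default_rng(0)
x, y = rng.normal(size=16), rng.normal(size=16)
w = rng.normal(size=16)  # signed weights; learned jointly with the networks in the paper

print(weighted_inner_product(x, y, np.ones(16)))  # reduces to the ordinary inner product
print(weighted_inner_product(x, y, w))            # signed weights add expressive power
```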
no code implementations • 22 Feb 2019 • Akifumi Okuno, Hidetoshi Shimodaira
We propose $\beta$-graph embedding for robustly learning feature vectors from data vectors and noisy link weights.
no code implementations • 21 Feb 2019 • Shinpei Imori, Hidetoshi Shimodaira
By utilizing a parametric model of the joint distribution of the primary and auxiliary variables, it is possible to improve the estimation of the parametric model for the primary variables when the auxiliary variables are closely related to them.
1 code implementation • WS 2018 • Geewook Kim, Kazuki Fukui, Hidetoshi Shimodaira
We propose a new word embedding method called word-like character n-gram embedding, which learns distributed representations of words by embedding word-like character n-grams.
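For illustration, a small helper that enumerates character n-grams (a sketch; the paper embeds only frequent, word-like n-grams):

```python
def char_ngrams(text, n_min=2, n_max=4):
    """Enumerate all character n-grams of text for n in [n_min, n_max]."""
    return [text[i:i + n] for n in range(n_min, n_max + 1)
            for i in range(len(text) - n + 1)]

print(char_ngrams("embedding", 3, 3))
# ['emb', 'mbe', 'bed', 'edd', 'ddi', 'din', 'ing']
```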
no code implementations • 4 Oct 2018 • Akifumi Okuno, Geewook Kim, Hidetoshi Shimodaira
We propose shifted inner-product similarity (SIPS), which is a novel yet very simple extension of the ordinary inner-product similarity (IPS) for neural network-based graph embedding (GE).
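A toy sketch of the SIPS functional form, assuming the shift enters as a learned scalar term per node added to the ordinary inner product (a linear map stands in for the neural network):

```python
import numpy as np

rng = np.random.default_rng(0)
W = 0.1 * rng.normal(size=(8, 4))  # toy linear map standing in for a neural network

def f(x):
    """Embedding map f: R^8 -> R^4 (a neural network in the paper)."""
    return np.tanh(W.T @ x)

def u(x):
    """Scalar shift term for a node, also learned in the paper."""
    return float(np.tanh(x.sum()))

def ips(x, y):
    return float(f(x) @ f(y))       # ordinary inner-product similarity

def sips(x, y):
    return ips(x, y) + u(x) + u(y)  # IPS shifted by per-node scalar terms

x, y = rng.normal(size=8), rng.normal(size=8)
print(ips(x, y), sips(x, y))
```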
2 code implementations • NAACL 2019 • Geewook Kim, Kazuki Fukui, Hidetoshi Shimodaira
We propose a new type of representation learning method that models words, phrases and sentences seamlessly.
no code implementations • 31 May 2018 • Akifumi Okuno, Hidetoshi Shimodaira
We consider the representation power of siamese-style similarity functions used in neural network-based graph embedding.
no code implementations • ICML 2018 • Akifumi Okuno, Tetsuya Hada, Hidetoshi Shimodaira
PMvGE is a probabilistic model for predicting new associations via graph embedding, where data vectors are treated as nodes and their associations as links.
no code implementations • WS 2017 • Kazuki Fukui, Takamasa Oshikiri, Hidetoshi Shimodaira
In this paper, we propose a novel method for multimodal word embedding, which exploits a generalized framework of multi-view spectral graph embedding to take into account visual appearances or scenes denoted by words in a corpus.
1 code implementation • 20 Apr 2017 • Thong Pham, Paul Sheridan, Hidetoshi Shimodaira
This paper introduces the R package PAFit, which implements non-parametric procedures for estimating the preferential attachment function and node fitnesses in a growing network, as well as a number of functions for generating complex networks from these two mechanisms.
Data Analysis, Statistics and Probability • Social and Information Networks • Physics and Society • Computation
no code implementations • 29 Mar 2015 • Hidetoshi Shimodaira
For dimensionality reduction, we consider a linear transformation of data vectors, and define a matching error as the weighted sum of squared distances between transformed vectors with respect to the matching weights.
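In symbols, the matching error described here can be written as follows (a reconstruction from this sentence, with $x_i$ and $y_j$ the data vectors of two domains, $A$ and $B$ the linear transformations, and $w_{ij}$ the matching weights):

$$\phi(A, B) = \sum_{i,j} w_{ij} \, \| A x_i - B y_j \|^2 .$$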
no code implementations • 29 Dec 2014 • Hidetoshi Shimodaira
These data vectors from multiple domains are projected into a common space by linear transformations in order to search for closely related vectors across domains.