Search Results for author: Shinichi Nakajima

Found 42 papers, 8 papers with code

Solution Simplex Clustering for Heterogeneous Federated Learning

no code implementations 5 Mar 2024 Dennis Grinwald, Philipp Wiesner, Shinichi Nakajima

We tackle a major challenge in federated learning (FL) -- achieving good performance under highly heterogeneous client distributions.

Clustering · Federated Learning

Labeling Neural Representations with Inverse Recognition

1 code implementation NeurIPS 2023 Kirill Bykov, Laura Kopf, Shinichi Nakajima, Marius Kloft, Marina M.-C. Höhne

Deep Neural Networks (DNNs) demonstrate remarkable capabilities in learning complex hierarchical data representations, but the nature of these representations remains largely unknown.

Decision Making · Segmentation

Generative Fractional Diffusion Models

no code implementations 26 Oct 2023 Gabriel Nobis, Marco Aversa, Maximilian Springenberg, Michael Detzel, Stefano Ermon, Shinichi Nakajima, Roderick Murray-Smith, Sebastian Lapuschkin, Christoph Knochenhauer, Luis Oala, Wojciech Samek

We generalize the continuous time framework for score-based generative models from an underlying Brownian motion (BM) to an approximation of fractional Brownian motion (FBM).
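
For orientation, the Brownian-motion-based continuous-time framework that this work generalizes is commonly written as a forward/reverse pair of SDEs (standard score-based notation, not necessarily the paper's):

$dX_t = f(X_t, t)\, dt + g(t)\, dW_t$ (forward),
$dX_t = \big[ f(X_t, t) - g(t)^2 \nabla_x \log p_t(X_t) \big]\, dt + g(t)\, d\bar{W}_t$ (reverse),

where $W_t$ is a Brownian motion and $p_t$ is the marginal density at time $t$; the paper replaces $W_t$ with an approximation of fractional Brownian motion.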

Detecting and Mitigating Mode-Collapse for Flow-based Sampling of Lattice Field Theories

no code implementations 27 Feb 2023 Kim A. Nicoli, Christopher J. Anders, Tobias Hartung, Karl Jansen, Pan Kessel, Shinichi Nakajima

In this work, we first point out that the tunneling problem is also present for normalizing flows but is shifted from the sampling to the training phase of the algorithm.

Domain-Specific Word Embeddings with Structure Prediction

1 code implementation 6 Oct 2022 Stephanie Brandl, David Lassner, Anne Baillot, Shinichi Nakajima

Complementary to finding good general word embeddings, an important question for representation learning is to find dynamic word embeddings, e.g., across time or domain.

Philosophy · Representation Learning · +1

Gradients should stay on Path: Better Estimators of the Reverse- and Forward KL Divergence for Normalizing Flows

no code implementations 17 Jul 2022 Lorenz Vaitl, Kim A. Nicoli, Shinichi Nakajima, Pan Kessel

We propose an algorithm to estimate the path-gradient of both the reverse and forward Kullback-Leibler divergence for an arbitrary manifestly invertible normalizing flow.

Variational Inference
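
As a reminder of the two objectives involved (standard definitions, not specific to this paper), for a flow with density $q_\theta$ and target $p$:

$D_{\mathrm{KL}}(q_\theta \| p) = \mathbb{E}_{x \sim q_\theta}[\log q_\theta(x) - \log p(x)]$ (reverse KL),
$D_{\mathrm{KL}}(p \| q_\theta) = \mathbb{E}_{x \sim p}[\log p(x) - \log q_\theta(x)]$ (forward KL).

Path-gradient ("sticking-the-landing"-style) estimators differentiate only through the sampling path $x = T_\theta(z)$ and drop the score term $\partial_\theta \log q_\theta(x)$ evaluated at fixed $x$, which has zero expectation.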

Self-Supervised Training with Autoencoders for Visual Anomaly Detection

no code implementations 23 Jun 2022 Alexander Bauer, Shinichi Nakajima, Klaus-Robert Müller

This insight makes the reconstruction error a natural choice for defining the anomaly score of a sample according to its distance from a corresponding projection on the data manifold.

Anomaly Detection · Dimensionality Reduction · +2
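
A minimal sketch of the reconstruction-error anomaly score described above (generic autoencoder code, not the authors' architecture; the toy network, input size, and threshold are placeholders):

    import torch
    import torch.nn as nn

    # Toy autoencoder; in practice it would be trained on normal data only.
    autoencoder = nn.Sequential(
        nn.Linear(784, 64), nn.ReLU(),   # encoder
        nn.Linear(64, 784),              # decoder
    )

    def anomaly_score(x: torch.Tensor) -> torch.Tensor:
        # Per-sample squared reconstruction error, used as the anomaly score.
        with torch.no_grad():
            x_hat = autoencoder(x)
        return ((x - x_hat) ** 2).mean(dim=1)

    x = torch.rand(8, 784)               # batch of flattened inputs
    scores = anomaly_score(x)
    is_anomalous = scores > 0.5          # threshold chosen on held-out normal data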

Path-Gradient Estimators for Continuous Normalizing Flows

1 code implementation 17 Jun 2022 Lorenz Vaitl, Kim A. Nicoli, Shinichi Nakajima, Pan Kessel

Recent work has established a path-gradient estimator for simple variational Gaussian distributions and has argued that the path-gradient is particularly beneficial in the regime in which the variational distribution approaches the exact target distribution.

Mixture-of-experts VAEs can disregard variation in surjective multimodal data

no code implementations 11 Apr 2022 Jannik Wolff, Tassilo Klein, Moin Nabi, Rahul G. Krishnan, Shinichi Nakajima

Machine learning systems are often deployed in domains that entail data from multiple modalities; in healthcare, for example, patients are described by both phenotypic and genotypic characteristics.

Visualizing the Diversity of Representations Learned by Bayesian Neural Networks

no code implementations 26 Jan 2022 Dennis Grinwald, Kirill Bykov, Shinichi Nakajima, Marina M.-C. Höhne

Explainable Artificial Intelligence (XAI) aims to make learning machines less opaque, and offers researchers and practitioners various tools to reveal the decision-making strategies of neural networks.

Contrastive Learning · Decision Making · +2

Hierarchical Multimodal Variational Autoencoders

no code implementations 29 Sep 2021 Jannik Wolff, Rahul G. Krishnan, Lukas Ruff, Jan Nikolas Morshuis, Tassilo Klein, Shinichi Nakajima, Moin Nabi

Humans find structure in natural phenomena by absorbing stimuli from multiple input sources such as vision, text, and speech.

Explaining Bayesian Neural Networks

no code implementations 23 Aug 2021 Kirill Bykov, Marina M.-C. Höhne, Adelaida Creosteanu, Klaus-Robert Müller, Frederick Klauschen, Shinichi Nakajima, Marius Kloft

Bayesian approaches such as Bayesian Neural Networks (BNNs) so far have a limited form of transparency (model transparency) already built-in through their prior weight distribution, but notably, they lack explanations of their predictions for given instances.

Decision Making · Explainable Artificial Intelligence (XAI)
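
A hedged sketch of the general idea of aggregating a gradient-based attribution over posterior weight samples (illustrative only; the attribution method and the way posterior samples are obtained here are placeholders, not the paper's exact procedure):

    import torch
    import torch.nn as nn

    def saliency(model: nn.Module, x: torch.Tensor) -> torch.Tensor:
        # Plain input-gradient attribution for the top predicted class.
        x = x.clone().requires_grad_(True)
        logits = model(x)
        logits.gather(1, logits.argmax(dim=1, keepdim=True)).sum().backward()
        return x.grad.abs()

    # Suppose these networks carry weights drawn from an (approximate) posterior,
    # e.g. obtained via MC dropout, a Laplace approximation, or deep ensembles.
    posterior_models = [nn.Sequential(nn.Linear(784, 10)) for _ in range(10)]

    x = torch.rand(4, 784)
    attribution = torch.stack([saliency(m, x) for m in posterior_models]).mean(dim=0)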

NoiseGrad: Enhancing Explanations by Introducing Stochasticity to Model Weights

2 code implementations 18 Jun 2021 Kirill Bykov, Anna Hedström, Shinichi Nakajima, Marina M.-C. Höhne

For local explanation, stochasticity is known to help: a simple method, called SmoothGrad, has improved the visual quality of gradient-based attribution by adding noise to the input space and averaging the explanations of the noisy inputs.

Decision Making
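
A minimal sketch contrasting the two ideas mentioned above, SmoothGrad (noise on the inputs) and a NoiseGrad-style variant (noise on the weights); the model, noise scales, and the simplistic attribution are illustrative placeholders:

    import copy
    import torch
    import torch.nn as nn

    model = nn.Sequential(nn.Linear(784, 10))

    def input_gradient(m, x):
        x = x.clone().requires_grad_(True)
        m(x).sum().backward()
        return x.grad

    def smoothgrad(x, n=25, sigma=0.1):
        # Average gradients over noisy copies of the input.
        return torch.stack([input_gradient(model, x + sigma * torch.randn_like(x))
                            for _ in range(n)]).mean(dim=0)

    def noisegrad_style(x, n=25, sigma=0.1):
        # Average gradients over models with multiplicatively perturbed weights.
        grads = []
        for _ in range(n):
            noisy = copy.deepcopy(model)
            with torch.no_grad():
                for p in noisy.parameters():
                    p.mul_(1 + sigma * torch.randn_like(p))
            grads.append(input_gradient(noisy, x))
        return torch.stack(grads).mean(dim=0)

    x = torch.rand(1, 784)
    sg, ng = smoothgrad(x), noisegrad_style(x)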

Optimal Sampling Density for Nonparametric Regression

no code implementations 25 May 2021 Danny Panknin, Klaus-Robert Müller, Shinichi Nakajima

Assuming that a small number of initial samples are available, we derive the optimal training density that minimizes the generalization error of local polynomial smoothing (LPS) with its kernel bandwidth tuned locally: We adopt the mean integrated squared error (MISE) as a generalization criterion, and use the asymptotic behavior of the MISE as well as the locally optimal bandwidths (LOB) - the bandwidth function that minimizes MISE in the asymptotic limit.

Active Learning · regression
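
For reference, the mean integrated squared error used as the generalization criterion above is commonly defined (textbook form, up to an optional weighting by a test density) as

$\mathrm{MISE}(\hat f) = \mathbb{E}\left[ \int \big( \hat f(x) - f(x) \big)^2 \, dx \right]$,

where $f$ is the target function, $\hat f$ the estimator (here, local polynomial smoothing), and the expectation is taken over the training sample.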

Langevin Cooling for Domain Translation

1 code implementation 31 Aug 2020 Vignesh Srinivasan, Klaus-Robert Müller, Wojciech Samek, Shinichi Nakajima

Domain translation is the task of finding correspondence between two domains.

Translation

Estimation of Thermodynamic Observables in Lattice Field Theories with Deep Generative Models

no code implementations 14 Jul 2020 Kim A. Nicoli, Christopher J. Anders, Lena Funcke, Tobias Hartung, Karl Jansen, Pan Kessel, Shinichi Nakajima, Paolo Stornati

In this work, we demonstrate that applying deep generative machine learning models for lattice field theory is a promising route for solving problems where Markov Chain Monte Carlo (MCMC) methods are problematic.

BIG-bench Machine Learning

How Much Can I Trust You? -- Quantifying Uncertainties in Explaining Neural Networks

1 code implementation 16 Jun 2020 Kirill Bykov, Marina M.-C. Höhne, Klaus-Robert Müller, Shinichi Nakajima, Marius Kloft

Explainable AI (XAI) aims to provide interpretations for predictions made by learning machines, such as deep neural networks, in order to make the machines more transparent for the user and, furthermore, trustworthy for applications in, e.g., safety-critical areas.

Explainable Artificial Intelligence (XAI)

Higher-Order Explanations of Graph Neural Networks via Relevant Walks

no code implementations 5 Jun 2020 Thomas Schnake, Oliver Eberle, Jonas Lederer, Shinichi Nakajima, Kristof T. Schütt, Klaus-Robert Müller, Grégoire Montavon

In this paper, we show that GNNs can in fact be naturally explained using higher-order expansions, i.e., by identifying groups of edges that jointly contribute to the prediction.

Image Classification · Sentiment Analysis

Automatic Identification of Types of Alterations in Historical Manuscripts

no code implementations 20 Mar 2020 David Lassner, Anne Baillot, Sergej Dogadov, Klaus-Robert Müller, Shinichi Nakajima

In addition to the findings based on the digital scholarly edition Berlin Intellectuals, we present a general framework for the analysis of text genesis that can be used in the context of other digital resources representing document variants.

BIG-bench Machine Learning

Polynomial-Time Exact MAP Inference on Discrete Models with Global Dependencies

no code implementations 27 Dec 2019 Alexander Bauer, Shinichi Nakajima

Considering the worst-case scenario, the junction tree algorithm remains the most general solution for exact MAP inference with polynomial run-time guarantees.

Structured Prediction

Asymptotically unbiased estimation of physical observables with neural samplers

no code implementations 29 Oct 2019 Kim A. Nicoli, Shinichi Nakajima, Nils Strodthoff, Wojciech Samek, Klaus-Robert Müller, Pan Kessel

We propose a general framework for the estimation of observables with generative neural samplers focusing on modern deep generative neural networks that provide an exact sampling probability.
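
A common route to asymptotically unbiased estimates from a sampler $q_\theta$ with exactly known density is self-normalized importance sampling (a generic construction given here for context, not necessarily the paper's exact estimator):

$\langle \mathcal{O} \rangle \approx \frac{\sum_{i=1}^{N} w_i\, \mathcal{O}(x_i)}{\sum_{i=1}^{N} w_i}, \qquad w_i = \frac{e^{-S(x_i)}}{q_\theta(x_i)}, \qquad x_i \sim q_\theta$,

where $e^{-S(x)}$ is the unnormalized target (Boltzmann) weight of configuration $x$.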

Black-Box Decision based Adversarial Attack with Symmetric $α$-stable Distribution

no code implementations 11 Apr 2019 Vignesh Srinivasan, Ercan E. Kuruoglu, Klaus-Robert Müller, Wojciech Samek, Shinichi Nakajima

Many existing methods employ Gaussian random variables for exploring the data space to find the most adversarial (for attacking) or least adversarial (for defense) point.

Adversarial Attack
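
A hedged sketch of drawing exploration noise from a symmetric $α$-stable distribution instead of a Gaussian, as contrasted in the abstract above; is_adversarial() is a hypothetical oracle standing in for a decision-based attack's query to the model:

    import numpy as np
    from scipy.stats import levy_stable

    def propose(x, alpha=1.5, scale=0.05):
        # Symmetric alpha-stable noise (beta = 0); alpha = 2 recovers the Gaussian case.
        noise = levy_stable.rvs(alpha, 0.0, scale=scale, size=x.shape)
        return x + noise

    def is_adversarial(x):
        # Hypothetical oracle: a real decision-based attack would query the
        # attacked classifier and check whether its label differs from the original.
        return np.random.rand() < 0.5

    x = np.random.rand(784)      # current candidate point
    candidate = propose(x)
    if is_adversarial(candidate):
        x = candidate            # keep steps that remain adversarial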

Comment on "Solving Statistical Mechanics Using VANs": Introducing saVANt - VANs Enhanced by Importance and MCMC Sampling

no code implementations 26 Mar 2019 Kim Nicoli, Pan Kessel, Nils Strodthoff, Wojciech Samek, Klaus-Robert Müller, Shinichi Nakajima

In this comment on "Solving Statistical Mechanics Using Variational Autoregressive Networks" by Wu et al., we propose a subtle yet powerful modification of their approach.

Local Function Complexity for Active Learning via Mixture of Gaussian Processes

no code implementations 27 Feb 2019 Danny Panknin, Stefan Chmiela, Klaus-Robert Müller, Shinichi Nakajima

Inhomogeneities in real-world data, e.g., due to changes in the observation noise level or variations in the structural complexity of the source function, pose a unique set of challenges for statistical inference.

Active Learning · GPR · +1

Optimizing for Measure of Performance in Max-Margin Parsing

no code implementations 5 Sep 2017 Alexander Bauer, Shinichi Nakajima, Nico Görnitz, Klaus-Robert Müller

Many statistical learning problems in the area of natural language processing, including sequence tagging, sequence segmentation, and syntactic parsing, have been successfully approached by means of structured prediction methods.

Constituency Parsing · Structured Prediction

Minimizing Trust Leaks for Robust Sybil Detection

no code implementations ICML 2017 János Höner, Shinichi Nakajima, Alexander Bauer, Klaus-Robert Müller, Nico Görnitz

Sybil detection is a crucial task to protect online social networks (OSNs) against intruders who try to manipulate automatic services provided by OSNs to their customers.

SynsetRank: Degree-adjusted Random Walk for Relation Identification

no code implementations 2 Sep 2016 Shinichi Nakajima, Sebastian Krause, Dirk Weissenborn, Sven Schmeier, Nico Goernitz, Feiyu Xu

In relation extraction, a key process is to obtain good detectors that find relevant sentences describing the target relation.

Relation · Relation Extraction

Condition for Perfect Dimensionality Recovery by Variational Bayesian PCA

1 code implementation 15 Dec 2015 Shinichi Nakajima, Ryota Tomioka, Masashi Sugiyama, S. Derin Babacan

In this paper, we clarify the behavior of VB learning in probabilistic PCA (or fully-observed matrix factorization).

Sparse Probit Linear Mixed Model

no code implementations 16 Jul 2015 Stephan Mandt, Florian Wenzel, Shinichi Nakajima, John P. Cunningham, Christoph Lippert, Marius Kloft

Formulated as models for linear regression, LMMs have been restricted to continuous phenotypes.

feature selection

Analysis of Variational Bayesian Latent Dirichlet Allocation: Weaker Sparsity Than MAP

no code implementations NeurIPS 2014 Shinichi Nakajima, Issei Sato, Masashi Sugiyama, Kazuho Watanabe, Hiroko Kobayashi

Latent Dirichlet allocation (LDA) is a popular generative model of various objects such as texts and images, where an object is expressed as a mixture of latent topics.
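
For context, the standard LDA generative process for a document $d$ (textbook form, independent of this paper's analysis) is

$\theta_d \sim \mathrm{Dirichlet}(\alpha), \qquad z_{dn} \mid \theta_d \sim \mathrm{Categorical}(\theta_d), \qquad w_{dn} \mid z_{dn} \sim \mathrm{Categorical}(\beta_{z_{dn}})$,

where $\theta_d$ are the document's topic proportions and $\beta_k$ is the word distribution of topic $k$.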

Parametric Task Learning

no code implementations NeurIPS 2013 Ichiro Takeuchi, Tatsuya Hongo, Masashi Sugiyama, Shinichi Nakajima

We introduce a novel formulation of multi-task learning (MTL) called parametric task learning (PTL) that can systematically handle infinitely many tasks parameterized by a continuous parameter.

Multi-Task Learning · regression

Global Solver and Its Efficient Approximation for Variational Bayesian Low-rank Subspace Clustering

no code implementations NeurIPS 2013 Shinichi Nakajima, Akiko Takeda, S. Derin Babacan, Masashi Sugiyama, Ichiro Takeuchi

However, Bayesian learning is often obstructed by computational difficulty: the rigorous Bayesian learning is intractable in many models, and its variational Bayesian (VB) approximation is prone to suffer from local minima.

Clustering · Computational Efficiency

Perfect Dimensionality Recovery by Variational Bayesian PCA

no code implementations NeurIPS 2012 Shinichi Nakajima, Ryota Tomioka, Masashi Sugiyama, S. D. Babacan

The variational Bayesian (VB) approach is one of the best tractable approximations to the Bayesian estimation, and it was demonstrated to perform well in many applications.

Probabilistic Low-Rank Subspace Clustering

no code implementations NeurIPS 2012 S. D. Babacan, Shinichi Nakajima, Minh Do

In this paper, we consider the problem of clustering data points into low-dimensional subspaces in the presence of outliers.

Clustering · Density Estimation

Global Solution of Fully-Observed Variational Bayesian Matrix Factorization is Column-Wise Independent

no code implementations NeurIPS 2011 Shinichi Nakajima, Masashi Sugiyama, S. D. Babacan

A recent study on fully-observed VBMF showed that, under a stronger assumption that the two factorized matrices are column-wise independent, the global optimal solution can be analytically computed.
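
To make the quoted assumption concrete: in generic VBMF notation (hedged, not copied from the paper), column-wise independence means a variational posterior that factorizes over the columns of both factors,

$r(A, B) = \prod_{h=1}^{H} r(\boldsymbol{a}_h)\, r(\boldsymbol{b}_h)$,

where $A = (\boldsymbol{a}_1, \dots, \boldsymbol{a}_H)$ and $B = (\boldsymbol{b}_1, \dots, \boldsymbol{b}_H)$ are the factors of the low-rank model $V \approx B A^\top$.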

Global Analytic Solution for Variational Bayesian Matrix Factorization

no code implementations NeurIPS 2010 Shinichi Nakajima, Masashi Sugiyama, Ryota Tomioka

Bayesian methods of matrix factorization (MF) have been actively explored recently as promising alternatives to classical singular value decomposition.
