Search Results for author: Yuma Ichikawa

Found 5 papers, 0 papers with code

Continuous Tensor Relaxation for Finding Diverse Solutions in Combinatorial Optimization Problems

no code implementations • 3 Feb 2024 • Yuma Ichikawa, Hiroaki Iwashita

However, a single solution may not be suitable in practical scenarios, as the objective functions and constraints are only approximations of the original real-world situations.

Combinatorial Optimization

Learning Dynamics in Linear VAE: Posterior Collapse Threshold, Superfluous Latent Space Pitfalls, and Speedup with KL Annealing

no code implementations • 24 Oct 2023 • Yuma Ichikawa, Koji Hukushima

To mitigate this problem, an adjustable hyperparameter $\beta$ and a strategy for annealing this parameter, called KL annealing, are proposed.

Representation Learning
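The KL-annealing strategy mentioned in the entry above is commonly implemented as a warm-up schedule for $\beta$ in the $\beta$-VAE objective. The sketch below is a minimal, hypothetical illustration with a linear schedule; the function names and the linear ramp are assumptions, not the authors' implementation.

```python
def kl_annealing_schedule(step, warmup_steps, beta_max=1.0):
    """Linearly anneal beta from 0 to beta_max over warmup_steps.
    One common KL-annealing choice; schedules vary by implementation."""
    return beta_max * min(1.0, step / warmup_steps)

def elbo_loss(recon_loss, kl_div, beta):
    """beta-VAE objective: reconstruction term plus beta-weighted KL term."""
    return recon_loss + beta * kl_div

# Example: beta ramps up over the first 1000 steps, then stays at beta_max
betas = [kl_annealing_schedule(s, warmup_steps=1000) for s in (0, 500, 1000, 2000)]
print(betas)  # [0.0, 0.5, 1.0, 1.0]
```

Starting with a small $\beta$ keeps the KL penalty weak early in training, which is the standard motivation for annealing as a mitigation of posterior collapse.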

Controlling Continuous Relaxation for Combinatorial Optimization

no code implementations • 29 Sep 2023 • Yuma Ichikawa

In addition, these solvers employ a continuous relaxation strategy; thus, post-learning rounding from the continuous space back to the original discrete space is required, undermining the robustness of the results.

Combinatorial Optimization
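The post-learning rounding step the entry above refers to maps relaxed variables in $[0,1]$ back to the discrete space. The snippet below is a minimal, hypothetical illustration using simple thresholding; it is not the paper's method, and the sensitivity of the result to the threshold is one way such rounding can undermine robustness.

```python
def round_relaxed_solution(p, threshold=0.5):
    """Map relaxed variables p in [0, 1] back to binary {0, 1} by thresholding.
    Values near the threshold make the rounded solution fragile."""
    return [1 if v >= threshold else 0 for v in p]

relaxed = [0.91, 0.48, 0.52, 0.07]
print(round_relaxed_solution(relaxed))  # [1, 0, 1, 0]
```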

Dataset Size Dependence of Rate-Distortion Curve and Threshold of Posterior Collapse in Linear VAE

no code implementations • 14 Sep 2023 • Yuma Ichikawa, Koji Hukushima

This paper presents a closed-form expression for assessing the relationship between $\beta$ in the VAE, the dataset size, posterior collapse, and the rate-distortion curve by analyzing a minimal VAE in a high-dimensional limit.

Representation Learning

Toward Unlimited Self-Learning MCMC with Parallel Adaptive Annealing

no code implementations • 25 Nov 2022 • Yuma Ichikawa, Akira Nakagawa, Hiromoto Masayuki, Yuhei Umeda

However, SLMC methods are difficult to directly apply to multimodal distributions for which training data are difficult to obtain.

Self-Learning
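For context on the multimodality issue the entry above raises, temperature-parallel schemes are the classical remedy: hot chains cross between modes, and swaps pass those moves down to the cold chain. The sketch below illustrates plain parallel tempering (replica exchange), not the authors' self-learning MCMC; all names and parameters are illustrative.

```python
import math
import random

def metropolis_step(x, log_p, beta, step_size=0.5):
    """One Metropolis update at inverse temperature beta."""
    x_new = x + random.gauss(0.0, step_size)
    if math.log(random.random() + 1e-300) < beta * (log_p(x_new) - log_p(x)):
        return x_new
    return x

def parallel_tempering(log_p, betas, n_steps=2000, seed=0):
    """Run one chain per inverse temperature and propose neighbor swaps."""
    random.seed(seed)
    xs = [0.0] * len(betas)
    for _ in range(n_steps):
        xs = [metropolis_step(x, log_p, b) for x, b in zip(xs, betas)]
        i = random.randrange(len(betas) - 1)  # attempt a neighbor swap
        delta = (betas[i] - betas[i + 1]) * (log_p(xs[i + 1]) - log_p(xs[i]))
        if math.log(random.random() + 1e-300) < delta:
            xs[i], xs[i + 1] = xs[i + 1], xs[i]
    return xs[0]  # state of the coldest (target) chain

# Bimodal target: equal-weight mixture of Gaussians at -3 and +3
log_p = lambda x: math.log(math.exp(-0.5 * (x - 3) ** 2)
                           + math.exp(-0.5 * (x + 3) ** 2) + 1e-300)
sample = parallel_tempering(log_p, betas=[1.0, 0.5, 0.25, 0.1])
```

A single cold chain on this target would tend to get stuck in one mode; the hotter replicas are what allow mode hopping.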
