Search Results for author: Hiroki Naganuma

Found 9 papers, 5 papers with code

Geometric Insights into Focal Loss: Reducing Curvature for Enhanced Model Calibration

no code implementations · 1 May 2024 · Masanari Kimura, Hiroki Naganuma

One of the simplest techniques to tackle this task is focal loss, a generalization of cross-entropy obtained by introducing a single positive parameter.

Decision Making
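The abstract above describes focal loss as a one-parameter generalization of cross-entropy. A minimal sketch of the standard focal-loss formula FL(p) = -(1 - p)^γ log p, where γ is the positive focusing parameter (the function name and signature are illustrative, not from the paper):

```python
import math

def focal_loss(p, gamma=2.0):
    """Focal loss for the true-class probability p in (0, 1].

    With gamma == 0 this reduces to cross-entropy, -log(p);
    gamma > 0 down-weights already well-classified examples,
    since (1 - p)^gamma shrinks as p approaches 1.
    """
    return -((1.0 - p) ** gamma) * math.log(p)
```

For a confident correct prediction (p = 0.9), the focal term (1 - p)^2 = 0.01 scales the cross-entropy loss down by a factor of 100, which is the curvature-reducing effect the paper analyzes.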

Augmenting NER Datasets with LLMs: Towards Automated and Refined Annotation

no code implementations · 30 Mar 2024 · Yuji Naraki, Ryosuke Yamaki, Yoshikazu Ikeda, Takafumi Horie, Hiroki Naganuma

In the field of Natural Language Processing (NLP), Named Entity Recognition (NER) is recognized as a critical technology, employed across a wide array of applications.

Named Entity Recognition +1

Towards Understanding Variants of Invariant Risk Minimization through the Lens of Calibration

1 code implementation · 31 Jan 2024 · Kotaro Yoshida, Hiroki Naganuma

This approach should move beyond traditional metrics, such as accuracy and F1 scores, which fail to account for the model's degree of over-confidence, and instead focus on the nuanced interplay between accuracy, calibration, and model invariance.

Out-of-Distribution Generalization
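The abstract above argues for measuring the interplay between accuracy and calibration rather than accuracy alone. One common calibration metric in this literature is the expected calibration error (ECE), sketched minimally below with equal-width confidence bins (the function and its signature are illustrative, not the paper's implementation):

```python
def expected_calibration_error(confidences, correct, n_bins=10):
    """Binned ECE: weighted average of |accuracy - confidence| per bin.

    confidences: predicted top-class probabilities in [0, 1]
    correct: booleans, whether each prediction was right
    """
    bins = [[] for _ in range(n_bins)]
    for conf, ok in zip(confidences, correct):
        # Clamp conf == 1.0 into the last bin.
        idx = min(int(conf * n_bins), n_bins - 1)
        bins[idx].append((conf, ok))

    n = len(confidences)
    ece = 0.0
    for b in bins:
        if not b:
            continue
        avg_conf = sum(c for c, _ in b) / len(b)
        accuracy = sum(ok for _, ok in b) / len(b)
        ece += (len(b) / n) * abs(avg_conf - accuracy)
    return ece
```

An over-confident model (say, 95% confidence but only 50% accuracy) yields a large ECE even if its accuracy or F1 score looks acceptable, which is precisely the gap the paper says traditional metrics miss.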

Optimal transport meets noisy label robust loss and MixUp regularization for domain adaptation

no code implementations · 22 Jun 2022 · Kilian Fatras, Hiroki Naganuma, Ioannis Mitliagkas

It is common in computer vision to be confronted with domain shift: images that share the same class but differ in acquisition conditions.

Domain Adaptation

Conjugate Gradient Method for Generative Adversarial Networks

1 code implementation · 28 Mar 2022 · Hiroki Naganuma, Hideaki Iiduka

Since data distribution is unknown, generative adversarial networks (GANs) formulate this problem as a game between two models, a generator and a discriminator.
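The abstract above frames GAN training as a two-player game. A minimal sketch of the standard GAN value function V(D, G) = log D(x) + log(1 - D(G(z))) for a single real/fake pair, which the discriminator maximizes and the generator minimizes (the function name is illustrative, not from the paper):

```python
import math

def gan_value(d_real, d_fake):
    """Two-player GAN objective for one real/fake sample pair.

    d_real: discriminator output on a real sample, in (0, 1)
    d_fake: discriminator output on a generated sample, in (0, 1)
    The discriminator maximizes this value (pushing d_real up,
    d_fake down); the generator minimizes it (pushing d_fake up).
    """
    return math.log(d_real) + math.log(1.0 - d_fake)
```

The paper's contribution is applying the conjugate gradient method to this saddle-point problem in place of the usual first-order simultaneous updates.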

Takeuchi's Information Criteria as Generalization Measures for DNNs Close to NTK Regime

no code implementations · 29 Sep 2021 · Hiroki Naganuma, Taiji Suzuki, Rio Yokota, Masahiro Nomura, Kohta Ishikawa, Ikuro Sato

Generalization measures are intensively studied in the machine learning community for better modeling generalization gaps.

Hyperparameter Optimization
