no code implementations • 1 May 2024 • Masanari Kimura, Hiroki Naganuma
One of the simplest techniques to tackle this task is focal loss, a generalization of the cross-entropy loss obtained by introducing a single positive focusing parameter.
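For reference, a minimal PyTorch sketch of the standard focal loss (Lin et al., 2017) is given below; gamma is the single positive focusing parameter mentioned above, and the function name and signature are illustrative rather than taken from the paper's code.

    import torch
    import torch.nn.functional as F

    def focal_loss(logits, targets, gamma=2.0):
        # Focal loss: cross-entropy rescaled by (1 - p_t)^gamma, where p_t is
        # the predicted probability of the true class and gamma > 0 is the
        # focusing parameter. With gamma = 0 it reduces to plain cross-entropy.
        ce = F.cross_entropy(logits, targets, reduction="none")
        p_t = torch.exp(-ce)  # probability assigned to the true class
        return ((1.0 - p_t) ** gamma * ce).mean()

Larger values of gamma down-weight well-classified (high-confidence) examples, concentrating the gradient on hard ones.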
no code implementations • 30 Mar 2024 • Yuji Naraki, Ryosuke Yamaki, Yoshikazu Ikeda, Takafumi Horie, Hiroki Naganuma
In the field of Natural Language Processing (NLP), Named Entity Recognition (NER) is recognized as a critical technology, employed across a wide array of applications.
1 code implementation • 31 Jan 2024 • Kotaro Yoshida, Hiroki Naganuma
This approach should move beyond traditional metrics, such as accuracy and F1 scores, which fail to account for the model's degree of over-confidence, and instead focus on the nuanced interplay between accuracy, calibration, and model invariance.
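To make "degree of over-confidence" concrete, one widely used calibration metric is the expected calibration error (ECE). The sketch below is a generic NumPy implementation of the usual equal-width-binning definition, not code from the paper.

    import numpy as np

    def expected_calibration_error(confidences, correct, n_bins=15):
        # ECE: partition predictions into equal-width confidence bins and
        # average the |accuracy - confidence| gap, weighted by bin size.
        # An over-confident model shows confidence > accuracy in most bins.
        bins = np.linspace(0.0, 1.0, n_bins + 1)
        ece = 0.0
        for lo, hi in zip(bins[:-1], bins[1:]):
            mask = (confidences > lo) & (confidences <= hi)
            if mask.any():
                gap = abs(correct[mask].mean() - confidences[mask].mean())
                ece += mask.mean() * gap
        return ece

Here confidences holds each prediction's maximum softmax probability and correct is a boolean array marking whether the prediction was right; two models with identical accuracy can have very different ECE.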
1 code implementation • 17 Jul 2023 • Hiroki Naganuma, Ryuichiro Hataya, Ioannis Mitliagkas
In out-of-distribution (OOD) generalization tasks, fine-tuning pre-trained models has become a prevalent strategy.
1 code implementation • 20 Jun 2023 • Charles Guille-Escuret, Hiroki Naganuma, Kilian Fatras, Ioannis Mitliagkas
Understanding the optimization dynamics of neural networks is necessary for closing the gap between theory and practice.
1 code implementation • 15 Nov 2022 • Hiroki Naganuma, Kartik Ahuja, Shiro Takagi, Tetsuya Motokawa, Rio Yokota, Kohta Ishikawa, Ikuro Sato, Ioannis Mitliagkas
Modern deep learning systems do not generalize well when the test data distribution is slightly different from the training data distribution.
no code implementations • 22 Jun 2022 • Kilian Fatras, Hiroki Naganuma, Ioannis Mitliagkas
It is common in computer vision to be confronted with domain shift: images that share the same class but were captured under different acquisition conditions.
1 code implementation • 28 Mar 2022 • Hiroki Naganuma, Hideaki Iiduka
Since the data distribution is unknown, generative adversarial networks (GANs) formulate this problem as a game between two models, a generator and a discriminator.
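For context, the two-player game referred to here is the standard GAN minimax objective (Goodfellow et al., 2014), with generator G and discriminator D:

    \min_G \max_D V(D, G) = \mathbb{E}_{x \sim p_{\mathrm{data}}}[\log D(x)] + \mathbb{E}_{z \sim p_z}[\log(1 - D(G(z)))]

The discriminator is trained to distinguish real samples from generated ones, while the generator is trained to fool it, so at equilibrium the generator's distribution matches the (unknown) data distribution.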
no code implementations • 29 Sep 2021 • Hiroki Naganuma, Taiji Suzuki, Rio Yokota, Masahiro Nomura, Kohta Ishikawa, Ikuro Sato
Generalization measures are intensively studied in the machine learning community as a way to better model generalization gaps.
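For reference, the generalization gap is the standard quantity below (a textbook definition, not one specific to this paper): the difference between a model f's expected risk under the data distribution and its empirical risk on the n training examples, for a loss \ell:

    \mathrm{gap}(f) = \mathbb{E}_{(x, y) \sim \mathcal{D}}[\ell(f(x), y)] - \frac{1}{n} \sum_{i=1}^{n} \ell(f(x_i), y_i)

A generalization measure is any quantity computable from the trained model and training data that is intended to predict or bound this gap.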