Search Results for author: Ukyo Honda

Found 7 papers, 3 papers with code

Removing Word-Level Spurious Alignment between Images and Pseudo-Captions in Unsupervised Image Captioning

1 code implementation • EACL 2021 • Ukyo Honda, Yoshitaka Ushiku, Atsushi Hashimoto, Taro Watanabe, Yuji Matsumoto

Unsupervised image captioning is a challenging task that aims to generate captions without the supervision of image-sentence pairs, using only images and sentences drawn from different sources and object labels detected from the images.

Image Captioning • image-sentence alignment • +2

Switching to Discriminative Image Captioning by Relieving a Bottleneck of Reinforcement Learning

1 code implementation • 6 Dec 2022 • Ukyo Honda, Taro Watanabe, Yuji Matsumoto

Discriminativeness is a desirable feature of image captions: captions should describe the characteristic details of input images.

Image Captioning • reinforcement-learning • +1

Pruning Basic Elements for Better Automatic Evaluation of Summaries

no code implementations • NAACL 2018 • Ukyo Honda, Tsutomu Hirao, Masaaki Nagata

We propose a simple but highly effective automatic evaluation measure of summarization, pruned Basic Elements (pBE).

Word Embeddings • Word Similarity

On the Depth between Beam Search and Exhaustive Search for Text Generation

no code implementations • 25 Aug 2023 • Yuu Jinnai, Tetsuro Morimura, Ukyo Honda

We introduce Lookahead Beam Search (LBS), a multi-step lookahead search that optimizes the objective by considering a fixed number of future steps.

Machine Translation • Text Generation • +2
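
To make the lookahead idea concrete, below is a minimal sketch of beam search with a greedy $k$-step lookahead, written against a toy interface. The `next_logprobs(prefix)` callable, the greedy rollout, and all parameter names are assumptions for illustration only; the paper's LBS is defined more carefully and this sketch does not reproduce its exact algorithm.

```python
# Sketch of beam search with greedy k-step lookahead (not the paper's
# exact LBS). `next_logprobs(prefix)` is an assumed toy interface that
# returns {token: log_prob} for the next token given a prefix.

def greedy_rollout_score(next_logprobs, prefix, steps, eos):
    """Log-probability gained by greedily extending `prefix` for up to
    `steps` tokens (stopping early at end-of-sequence)."""
    score = 0.0
    for _ in range(steps):
        if prefix[-1] == eos:
            break
        tok, lp = max(next_logprobs(prefix).items(), key=lambda kv: kv[1])
        score += lp
        prefix = prefix + [tok]
    return score

def lookahead_beam_search(next_logprobs, bos, eos, beam_width=4,
                          lookahead=3, max_len=50):
    beams = [([bos], 0.0)]  # (prefix, accumulated log-probability)
    for _ in range(max_len):
        candidates = []
        for prefix, logp in beams:
            if prefix[-1] == eos:
                candidates.append((prefix, logp, logp))
                continue
            for tok, lp in next_logprobs(prefix).items():
                new_prefix, new_logp = prefix + [tok], logp + lp
                # Rank candidates by current score plus the score of a
                # greedy rollout over a fixed number of future steps.
                rank = new_logp + greedy_rollout_score(
                    next_logprobs, new_prefix, lookahead, eos)
                candidates.append((new_prefix, new_logp, rank))
        candidates.sort(key=lambda c: c[2], reverse=True)
        beams = [(p, lp) for p, lp, _ in candidates[:beam_width]]
        if all(p[-1] == eos for p, _ in beams):
            break
    return max(beams, key=lambda b: b[1])[0]
```

With `lookahead=0` this reduces to ordinary beam search, which is the sense in which the method sits on the spectrum between beam search and exhaustive search.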

Model-Based Minimum Bayes Risk Decoding

no code implementations • 9 Nov 2023 • Yuu Jinnai, Tetsuro Morimura, Ukyo Honda, Kaito Ariu, Kenshi Abe

MBR decoding selects, from a pool of hypotheses, the hypothesis with the least expected risk under a probability model according to a given utility function.

Text Generation
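
Since the snippet above states the MBR objective compactly, a small sketch may help. This is a generic Monte Carlo formulation of MBR decoding; the `utility` function and the optional probability `weights` (uniform weights give the standard sampling-based estimate) are assumptions, and the sketch does not reproduce the model-based estimate studied in the paper.

```python
# Generic sketch of Minimum Bayes Risk (MBR) decoding over a sampled
# hypothesis pool. Maximizing expected utility is equivalent to
# minimizing expected risk.

def mbr_decode(hypotheses, utility, weights=None):
    """Return the hypothesis with the highest expected utility against
    the pool, i.e. the least expected risk."""
    if weights is None:
        # Uniform weights: the usual Monte Carlo estimate from samples.
        weights = [1.0 / len(hypotheses)] * len(hypotheses)
    best, best_score = None, float("-inf")
    for h in hypotheses:
        score = sum(w * utility(h, r) for w, r in zip(weights, hypotheses))
        if score > best_score:
            best, best_score = h, score
    return best
```

With `utility` set to a sentence-level similarity such as BLEU or BERTScore, this selects the most representative hypothesis in the pool.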

Generating Diverse and High-Quality Texts by Minimum Bayes Risk Decoding

1 code implementation • 10 Jan 2024 • Yuu Jinnai, Ukyo Honda, Tetsuro Morimura, Peinan Zhang

We propose two variants of MBR, Diverse MBR (DMBR) and $k$-medoids MBR (KMBR), to generate a set of sentences with both high quality and diversity.

Language Modelling • Large Language Model • +1
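
As a rough illustration of the $k$-medoids variant, the sketch below picks $k$ representative hypotheses by alternating k-medoids refinement, treating `1 - utility(x, y)` as a distance. The refinement procedure, the distance transform, and all parameter names are assumptions; the paper's DMBR (a diversity-penalized objective) and its exact KMBR algorithm are not reproduced here.

```python
import random

# Sketch of a k-medoids-style selection of k diverse, representative
# hypotheses (in the spirit of KMBR; not the paper's exact algorithm).
# Assumes the pool contains at least k unique hypotheses.

def kmbr(hypotheses, utility, k, iters=10, seed=0):
    rng = random.Random(seed)
    dist = lambda a, b: 1.0 - utility(a, b)
    medoids = rng.sample(hypotheses, k)
    for _ in range(iters):
        # Assign each hypothesis to its nearest medoid.
        clusters = {m: [] for m in medoids}
        for h in hypotheses:
            nearest = min(medoids, key=lambda m: dist(h, m))
            clusters[nearest].append(h)
        # Re-elect each cluster's medoid: the member with the smallest
        # total distance to the rest of its cluster.
        new_medoids = [
            min(members, key=lambda c: sum(dist(c, x) for x in members))
            for members in clusters.values() if members
        ]
        if set(new_medoids) == set(medoids):
            break
        medoids = new_medoids
    return medoids
```

Returning the medoids themselves, rather than one consensus hypothesis, is what yields a set of outputs that are individually high-quality yet mutually diverse.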

A Single Linear Layer Yields Task-Adapted Low-Rank Matrices

no code implementations • 22 Mar 2024 • Hwichan Kim, Shota Sasaki, Sho Hoshino, Ukyo Honda

To confirm the hypothesis that a single linear layer yields task-adapted low-rank matrices, we devise a method named Conditionally Parameterized LoRA (CondLoRA) that updates initial weight matrices with low-rank matrices derived from a single linear layer.
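
Reading only this snippet, the suggested mechanism is that one shared linear layer maps each frozen initial weight matrix to its low-rank LoRA factors. The sketch below is a speculative PyTorch rendering of that reading; the factorization shapes, the flattening of the weight, and the class and argument names are all assumptions rather than the paper's actual design.

```python
import torch
import torch.nn as nn

# Speculative sketch of the CondLoRA idea: a single shared linear layer
# derives the low-rank update (B @ A) from each layer's frozen initial
# weight, instead of learning separate A, B per layer.

class CondLoRALinear(nn.Module):
    def __init__(self, frozen: nn.Linear, shared_map: nn.Linear, rank: int):
        super().__init__()
        self.frozen = frozen                    # pretrained layer, kept fixed
        self.frozen.weight.requires_grad_(False)
        if self.frozen.bias is not None:
            self.frozen.bias.requires_grad_(False)
        self.shared_map = shared_map            # the single trainable linear layer
        self.rank = rank

    def forward(self, x):
        out_f, in_f = self.frozen.weight.shape
        # Map the flattened frozen weight to the concatenated factors.
        flat = self.shared_map(self.frozen.weight.reshape(1, -1))
        a, b = flat.split([self.rank * in_f, out_f * self.rank], dim=-1)
        A = a.reshape(self.rank, in_f)          # (rank, in_features)
        B = b.reshape(out_f, self.rank)         # (out_features, rank)
        return self.frozen(x) + x @ A.T @ B.T   # W x + B A x

# Toy usage: for a 16x16 layer with rank 4, the shared map must output
# rank * (in_features + out_features) values.
frozen = nn.Linear(16, 16)
shared = nn.Linear(16 * 16, 4 * (16 + 16))
y = CondLoRALinear(frozen, shared, rank=4)(torch.randn(2, 16))
```

Because only `shared_map` is trained, the number of trainable parameters stays constant regardless of how many layers it conditions.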
