Search Results for author: Tomer Levinboim

Found 11 papers, 3 papers with code

Quality Estimation for Image Captions Based on Large-scale Human Evaluations

1 code implementation NAACL 2021 Tomer Levinboim, Ashish V. Thapliyal, Piyush Sharma, Radu Soricut

Automatic image captioning has improved significantly over the last few years, but the problem is far from solved, with state-of-the-art models still often producing low-quality captions when used in the wild.

Image Captioning • Model Selection

PACTran: PAC-Bayesian Metrics for Estimating the Transferability of Pretrained Models to Classification Tasks

1 code implementation • 10 Mar 2022 • Nan Ding, Xi Chen, Tomer Levinboim, Beer Changpinyo, Radu Soricut

With the increasing abundance of pretrained models in recent years, the problem of selecting the best pretrained checkpoint for a particular downstream classification task has been receiving increasing attention.

Learning Theory • Model Selection +2

CausalLM is not optimal for in-context learning

1 code implementation • 14 Aug 2023 • Nan Ding, Tomer Levinboim, Jialin Wu, Sebastian Goodman, Radu Soricut

Recent empirical evidence indicates that transformer-based in-context learning performs better when using a prefix language model (prefixLM), in which in-context samples can all attend to each other, compared to a causal language model (causalLM), which uses auto-regressive attention that prevents in-context samples from attending to future samples.

In-Context Learning • Language Modelling
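
To make the prefixLM/causalLM distinction concrete, here is a minimal, illustrative sketch (not code from the paper) of the two attention masks, assuming a sequence of `n_ctx` in-context example tokens followed by `n_q` query tokens: causalLM restricts every position to itself and earlier positions, while prefixLM additionally lets the in-context prefix attend to itself bidirectionally.

```python
# Illustrative only: boolean attention masks for causalLM vs. prefixLM
# in-context learning, where True means "may attend".
import numpy as np

def causal_mask(n_ctx: int, n_q: int) -> np.ndarray:
    """causalLM: each position attends only to itself and earlier positions."""
    n = n_ctx + n_q
    return np.tril(np.ones((n, n), dtype=bool))

def prefix_mask(n_ctx: int, n_q: int) -> np.ndarray:
    """prefixLM: the in-context prefix attends bidirectionally within itself;
    query positions stay causal over the whole sequence."""
    mask = causal_mask(n_ctx, n_q)
    mask[:n_ctx, :n_ctx] = True  # full attention inside the prefix block
    return mask

# With 3 in-context tokens and 2 query tokens, prefix_mask(3, 2) lets the
# first 3 positions attend to each other in both directions, whereas
# causal_mask(3, 2) forbids any attention to future positions.
```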

Informative Image Captioning with External Sources of Information

no code implementations ACL 2019 Sanqiang Zhao, Piyush Sharma, Tomer Levinboim, Radu Soricut

An image caption should fluently present the essential information in a given image, including informative, fine-grained entity mentions and the manner in which these entities interact.

Image Captioning • Informativeness

Reinforcing an Image Caption Generator Using Off-Line Human Feedback

no code implementations • 21 Nov 2019 • Paul Hongsuck Seo, Piyush Sharma, Tomer Levinboim, Bohyung Han, Radu Soricut

Human ratings are currently the most accurate way to assess the quality of an image captioning model, yet most often the only outcome used from an expensive human rating evaluation is a few overall statistics over the evaluation dataset.

Image Captioning

Improving Text Generation Evaluation with Batch Centering and Tempered Word Mover Distance

no code implementations EMNLP (Eval4NLP) 2020 Xi Chen, Nan Ding, Tomer Levinboim, Radu Soricut

Recent advances in automatic evaluation metrics for text have shown that deep contextualized word representations, such as those generated by BERT encoders, are helpful for designing metrics that correlate well with human judgements.

Text Generation

Bridging the Gap Between Practice and PAC-Bayes Theory in Few-Shot Meta-Learning

no code implementations NeurIPS 2021 Nan Ding, Xi Chen, Tomer Levinboim, Sebastian Goodman, Radu Soricut

Despite recent advances in the theoretical understanding of meta-learning, a significant gap remains in the ability of existing PAC-Bayesian theories on meta-learning to explain performance improvements in the few-shot learning setting, where the number of training examples in the target tasks is severely limited.

Few-Shot Learning

Improving Robust Generalization by Direct PAC-Bayesian Bound Minimization

no code implementations CVPR 2023 Zifan Wang, Nan Ding, Tomer Levinboim, Xi Chen, Radu Soricut

Recent research in robust optimization has shown an overfitting-like phenomenon in which models trained against adversarial attacks exhibit higher robustness on the training set compared to the test set.

Adversarial Robustness
