Search Results for author: Sebastian Goodman

Found 13 papers, 7 papers with code

CausalLM is not optimal for in-context learning

1 code implementation • 14 Aug 2023 • Nan Ding, Tomer Levinboim, Jialin Wu, Sebastian Goodman, Radu Soricut

Recent empirical evidence indicates that transformer-based in-context learning performs better with a prefix language model (prefixLM), in which all in-context samples can attend to each other, than with a causal language model (causalLM), whose auto-regressive attention prevents in-context samples from attending to future samples.

In-Context Learning • Language Modelling
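Since the prefixLM/causalLM distinction comes down to the attention mask, a minimal NumPy sketch of the two patterns may help; the sequence and prefix lengths below are illustrative assumptions, not values from the paper.

    import numpy as np

    seq_len, prefix_len = 6, 3  # illustrative sizes, not from the paper

    # causalLM: strictly auto-regressive; position i attends only to j <= i
    causal_mask = np.tril(np.ones((seq_len, seq_len), dtype=bool))

    # prefixLM: the prefix (the in-context samples) is fully bidirectional,
    # while positions after the prefix remain auto-regressive
    prefix_mask = causal_mask.copy()
    prefix_mask[:prefix_len, :prefix_len] = True

    print(causal_mask.astype(int))  # lower-triangular
    print(prefix_mask.astype(int))  # dense upper-left block, causal elsewhere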

PreSTU: Pre-Training for Scene-Text Understanding

no code implementations • ICCV 2023 • Jihyung Kil, Soravit Changpinyo, Xi Chen, Hexiang Hu, Sebastian Goodman, Wei-Lun Chao, Radu Soricut

The ability to recognize and reason about text embedded in visual inputs is often lacking in vision-and-language (V&L) models, perhaps because V&L pre-training methods have typically failed to include such an ability in their training objective.

Image Captioning • Optical Character Recognition (OCR) • +2

Bridging the Gap Between Practice and PAC-Bayes Theory in Few-Shot Meta-Learning

no code implementations • NeurIPS 2021 • Nan Ding, Xi Chen, Tomer Levinboim, Sebastian Goodman, Radu Soricut

Despite recent advances in its theoretical understanding, a significant gap remains in the ability of existing PAC-Bayesian theories on meta-learning to explain performance improvements in the few-shot learning setting, where the number of training examples in the target tasks is severely limited.

Few-Shot Learning
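For context, one classical single-task PAC-Bayes bound (a McAllester/Maurer-style form, not the paper's meta-learning result) shows why few-shot regimes are hard for such theories: with probability at least 1 - \delta over an i.i.d. sample of size n, for any fixed prior P and every posterior Q,

    \mathbb{E}_{h \sim Q}[L(h)] \;\le\; \mathbb{E}_{h \sim Q}[\hat{L}(h)]
      + \sqrt{ \frac{ \mathrm{KL}(Q \,\|\, P) + \ln(2\sqrt{n}/\delta) }{ 2n } }

The complexity term decays only as 1/\sqrt{n}, so with the handful of target-task examples available in few-shot learning the bound stays loose; that looseness is the gap the paper addresses.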

TeaForN: Teacher-Forcing with N-grams

no code implementations • EMNLP 2020 • Sebastian Goodman, Nan Ding, Radu Soricut

Sequence generation models trained with teacher-forcing suffer from exposure bias and a lack of differentiability across timesteps.

Machine Translation • News Summarization • +1
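As background on the exposure-bias problem the paper tackles, here is a minimal PyTorch sketch of standard one-step teacher forcing (the model interface is a hypothetical seq2seq module returning per-step vocabulary logits, not the paper's code):

    import torch.nn.functional as F

    def teacher_forced_loss(model, src, tgt):
        # The decoder conditions on gold prefixes tgt[:, :t] at every step;
        # at inference it must condition on its own (possibly wrong)
        # predictions instead -- the train/test mismatch behind exposure bias.
        logits = model(src, tgt[:, :-1])           # (batch, steps, vocab)
        return F.cross_entropy(
            logits.reshape(-1, logits.size(-1)),   # flatten to (batch*steps, vocab)
            tgt[:, 1:].reshape(-1),                # next-token gold targets
        )

TeaForN's N-gram extension is not reproduced here; the sketch only shows the one-step objective whose limitations motivate it.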

Multi-Image Summarization: Textual Summary from a Set of Cohesive Images

no code implementations • 15 Jun 2020 • Nicholas Trieu, Sebastian Goodman, Pradyumna Narayana, Kazoo Sone, Radu Soricut

Multi-sentence summarization is a well-studied problem in NLP, while generating image descriptions for a single image is a well-studied problem in Computer Vision.

Descriptive • Image Captioning • +2

Multi-stage Pretraining for Abstractive Summarization

no code implementations • 23 Sep 2019 • Sebastian Goodman, Zhenzhong Lan, Radu Soricut

Neural models for abstractive summarization tend to achieve the best performance in the presence of highly specialized, summarization-specific modeling add-ons such as pointer-generator, coverage modeling, and inference-time heuristics.

Abstractive Text Summarization
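For readers unfamiliar with the add-ons named in the abstract, here is a minimal PyTorch sketch of the standard pointer-generator mixture (See et al., 2017); this is the background machinery the paper argues pre-training can replace, not the paper's own method, and the tensor names are assumptions:

    import torch

    def pointer_generator_dist(p_gen, vocab_dist, attn, src_ids):
        # p_gen: (batch, 1), vocab_dist: (batch, vocab),
        # attn and src_ids: (batch, src_len).
        # Final distribution mixes generating from the vocabulary with
        # copying from the source:
        #   P(w) = p_gen * P_vocab(w) + (1 - p_gen) * sum_i attn_i * [src_i == w]
        copy_dist = torch.zeros_like(vocab_dist)
        copy_dist.scatter_add_(-1, src_ids, attn)  # route attention mass to source token ids
        return p_gen * vocab_dist + (1 - p_gen) * copy_dist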

Conceptual Captions: A Cleaned, Hypernymed, Image Alt-text Dataset For Automatic Image Captioning

2 code implementations • ACL 2018 • Piyush Sharma, Nan Ding, Sebastian Goodman, Radu Soricut

We present a new dataset of image caption annotations, Conceptual Captions, which contains an order of magnitude more images than the MS-COCO dataset (Lin et al., 2014) and represents a wider variety of both images and image caption styles.

Image Captioning
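One hedged way to inspect the data is the mirror on the Hugging Face Hub; the dataset id and field names below are assumptions about that mirror (the dataset ships caption/image-URL pairs, with the images fetched separately):

    from datasets import load_dataset

    # assumed Hub id; each record pairs a caption with an image URL
    ds = load_dataset("conceptual_captions", split="train")
    print(ds[0])  # e.g. {"image_url": "...", "caption": "..."}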

Understanding Image and Text Simultaneously: a Dual Vision-Language Machine Comprehension Task

no code implementations • 22 Dec 2016 • Nan Ding, Sebastian Goodman, Fei Sha, Radu Soricut

We introduce a new multi-modal task for computer systems, posed as a combined vision-language comprehension challenge: identifying the most suitable text describing a scene, given several similar options.

Image Captioning • Multi-Task Learning • +1
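Operationally, the task reduces to ranking candidate captions by image-text compatibility. A minimal sketch, assuming a hypothetical score_fn compatibility model rather than the paper's architecture:

    def pick_best_caption(score_fn, image, candidates):
        # Score every candidate caption against the image and return the
        # most compatible one -- the decision the benchmark asks for.
        scores = [score_fn(image, caption) for caption in candidates]
        best = max(range(len(candidates)), key=lambda i: scores[i])
        return candidates[best]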
