Search Results for author: Tal Schuster

Found 13 papers, 9 papers with code

ExT5: Towards Extreme Multi-Task Scaling for Transfer Learning

2 code implementations · 22 Nov 2021 · Vamsi Aribandi, Yi Tay, Tal Schuster, Jinfeng Rao, Huaixiu Steven Zheng, Sanket Vaibhav Mehta, Honglei Zhuang, Vinh Q. Tran, Dara Bahri, Jianmo Ni, Jai Gupta, Kai Hui, Sebastian Ruder, Donald Metzler

Despite the recent success of multi-task learning and transfer learning for natural language processing (NLP), few works have systematically studied the effect of scaling up the number of tasks during pre-training.

Denoising · Multi-Task Learning

Programming Puzzles

1 code implementation · 10 Jun 2021 · Tal Schuster, Ashwin Kalyan, Oleksandr Polozov, Adam Tauman Kalai

The dataset is comprehensive in that it spans a range of difficulties and domains: from trivial string-manipulation problems, to classic programming puzzles (e.g., Tower of Hanoi), to interview/competitive-programming problems (e.g., dynamic programming), to longstanding open problems in algorithms and mathematics (e.g., factoring).

Code Generation · Natural Language Understanding +1
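
The puzzle format described above lends itself to a compact illustration. As a hedged sketch (the function names and this specific puzzle are illustrative, not taken from the dataset): a puzzle is a Python function that returns True exactly when handed a correct answer, so candidate solutions can be checked mechanically, with no reference solutions needed.

```python
def sat(moves, num_disks=3):
    """Puzzle: find a move list that solves Tower of Hanoi.

    Each move is a (source_peg, target_peg) pair; pegs are 0, 1, 2,
    and disks start stacked on peg 0, largest at the bottom.
    """
    pegs = {0: list(range(num_disks, 0, -1)), 1: [], 2: []}
    for src, dst in moves:
        disk = pegs[src].pop()
        assert not pegs[dst] or pegs[dst][-1] > disk, "illegal move"
        pegs[dst].append(disk)
    return pegs[2] == list(range(num_disks, 0, -1))


def solve(num_disks=3):
    """One candidate solver: the classic recursive Hanoi strategy."""
    moves = []

    def hanoi(n, src, aux, dst):
        if n == 0:
            return
        hanoi(n - 1, src, dst, aux)
        moves.append((src, dst))
        hanoi(n - 1, aux, src, dst)

    hanoi(num_disks, 0, 1, 2)
    return moves
```

Checking `sat(solve())` is all it takes to accept a solution, which is what makes this format attractive for evaluating program synthesis across difficulty levels.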

Consistent Accelerated Inference via Confident Adaptive Transformers

no code implementations · EMNLP 2021 · Tal Schuster, Adam Fisch, Tommi Jaakkola, Regina Barzilay

In this work, we present CATs -- Confident Adaptive Transformers -- which increase computational efficiency while guaranteeing, with high confidence, a specifiable degree of consistency with the original model.
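
A hedged sketch of the early-exit idea behind this line of work (not the authors' implementation): attach a small confidence estimator to each layer and stop at the first layer that clears a threshold, where the threshold is calibrated offline so that early predictions agree with the full model at the desired rate.

```python
def early_exit_predict(layers, exit_heads, confidences, x, threshold):
    """Run layers in order; return the first early prediction whose
    confidence estimate clears `threshold`, else the full model's.

    All components here are placeholders standing in for transformer
    layers, per-layer prediction heads, and confidence estimators.
    """
    h = x
    for layer, head, confident in zip(layers, exit_heads, confidences):
        h = layer(h)
        if confident(h) >= threshold:  # confident enough to stop here
            return head(h)
    return exit_heads[-1](h)  # no early exit: use the final layer


# Toy stand-ins: each "layer" adds 1; confidence is just the value of h.
layers = [lambda h: h + 1] * 4
heads = [lambda h: h * 10] * 4
confs = [lambda h: h] * 4
```

In CATs specifically, the threshold is not hand-picked but chosen by a statistical calibration procedure, which is what turns the speedup into a guarantee of consistency with the full model.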

Few-shot Conformal Prediction with Auxiliary Tasks

1 code implementation · 17 Feb 2021 · Adam Fisch, Tal Schuster, Tommi Jaakkola, Regina Barzilay

We develop a novel approach to conformal prediction when the target task has limited data available for training.

Drug Discovery · Meta-Learning
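
For readers unfamiliar with conformal prediction, here is a minimal sketch of the standard split-conformal recipe that this paper builds on (the paper's contribution, handling limited calibration data via auxiliary tasks, is not shown): keep every candidate label whose nonconformity score is no worse than a calibrated quantile of the scores observed on held-out data.

```python
import math


def conformal_prediction_set(cal_scores, test_scores, alpha=0.1):
    """Generic split conformal prediction (illustrative sketch).

    cal_scores: nonconformity scores of the true labels on a
                held-out calibration set.
    test_scores: nonconformity score of each candidate label for
                 a new test input.
    Returns the indices of labels admitted to the prediction set.
    """
    n = len(cal_scores)
    # Finite-sample corrected (1 - alpha) quantile of calibration scores.
    k = math.ceil((n + 1) * (1 - alpha))  # rank of the threshold score
    threshold = sorted(cal_scores)[min(k, n) - 1]
    return [label for label, s in enumerate(test_scores) if s <= threshold]
```

Under exchangeability, the returned set contains the true label with probability at least 1 - alpha; the difficulty this paper addresses is that the guarantee degrades when n is very small, as in few-shot settings.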

Efficient Conformal Prediction via Cascaded Inference with Expanded Admission

1 code implementation · ICLR 2021 · Adam Fisch, Tal Schuster, Tommi Jaakkola, Regina Barzilay

This set is guaranteed to contain a correct answer with high probability, and is well-suited for many open-ended classification tasks.

Drug Discovery

Distilling the Evidence to Augment Fact Verification Models

no code implementations · WS 2020 · Beatrice Portelli, Jason Zhao, Tal Schuster, Giuseppe Serra, Enrico Santus

We propose, instead, a model-agnostic framework that consists of two modules: (1) a span extractor, which identifies the crucial information connecting claim and evidence; and (2) a classifier that combines claim, evidence, and the extracted spans to predict the veracity of the claim.

Fact Verification

Humpty Dumpty: Controlling Word Meanings via Corpus Poisoning

no code implementations · 14 Jan 2020 · Roei Schuster, Tal Schuster, Yoav Meri, Vitaly Shmatikov

Word embeddings, i.e., low-dimensional vector representations such as GloVe and SGNS, encode word "meaning" in the sense that distances between words' vectors correspond to their semantic proximity.

Data Poisoning · Information Retrieval +4
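
The "distances correspond to semantic proximity" property is exactly what a corpus-poisoning attack manipulates. As a minimal sketch (toy 3-d vectors, not real GloVe/SGNS embeddings), proximity is typically measured with cosine similarity:

```python
import math


def cosine_similarity(u, v):
    """Cosine of the angle between two embedding vectors: near 1 for
    semantically close words, near 0 for unrelated ones."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)


# Toy "embeddings" for illustration only:
emb = {
    "king": [0.9, 0.8, 0.1],
    "queen": [0.85, 0.82, 0.15],
    "banana": [0.1, 0.05, 0.95],
}
```

By injecting crafted text into the training corpus, an attacker shifts where a chosen word lands in this space, moving it closer to (or farther from) target words and thereby corrupting downstream systems that rely on these similarities.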

Automatic Fact-guided Sentence Modification

3 code implementations · 30 Sep 2019 · Darsh J Shah, Tal Schuster, Regina Barzilay

This is a challenging constrained generation task, as the output must be consistent with the new information and fit into the rest of the existing document.

Fact Checking

The Limitations of Stylometry for Detecting Machine-Generated Fake News

no code implementations · CL 2020 · Tal Schuster, Roei Schuster, Darsh J Shah, Regina Barzilay

Recent developments in neural language models (LMs) have raised concerns about their potential misuse for automatically spreading misinformation.

Fake News Detection · Language Modelling +1

Optical Flow Requires Multiple Strategies (but only one network)

1 code implementation · CVPR 2017 · Tal Schuster, Lior Wolf, David Gadot

This type of training produces a network that applies different strategies depending on the input and leads to state-of-the-art results on the KITTI 2012 and KITTI 2015 optical flow benchmarks.

Metric Learning · Optical Flow Estimation
