8k

62 papers with code • 0 benchmarks • 0 datasets


Most implemented papers

Seeing Things from a Different Angle: Discovering Diverse Perspectives about Claims

CogComp/perspectrum NAACL 2019

Inherently, this is a natural language understanding task, and we propose to address it as such.

FISR: Deep Joint Frame Interpolation and Super-Resolution with a Multi-scale Temporal Loss

JihyongOh/FISR 16 Dec 2019

In this paper, we first propose a joint VFI-SR framework for upscaling the spatio-temporal resolution of videos from 2K 30 fps to 4K 60 fps.

Span-ConveRT: Few-shot Span Extraction for Dialog with Pretrained Conversational Representations

PolyAI-LDN/task-specific-datasets ACL 2020

We introduce Span-ConveRT, a light-weight model for dialog slot-filling which frames the task as a turn-based span extraction task.

Aspect-based Sentiment Analysis of Scientific Reviews

Souvic/aspect-based-sentiment-analysis-of-scientific-reviews-jcdl 5 Jun 2020

We also investigate the extent of disagreement between the reviewers and the chair, and find that inter-reviewer disagreement may be linked to disagreement with the chair.

1-bit LAMB: Communication Efficient Large-Scale Large-Batch Training with LAMB's Convergence Speed

microsoft/DeepSpeed 13 Apr 2021

To this end, we design a new communication-efficient algorithm, 1-bit LAMB, which introduces a novel way to support adaptive layerwise learning rates under compression.

Timbre Transfer with Variational Auto Encoding and Cycle-Consistent Adversarial Networks

RussellSB/tt-vae-gan 5 Sep 2021

This research project investigates the application of deep learning to timbre transfer, where the timbre of a source audio can be converted to the timbre of a target audio with minimal loss in quality.

Transformer Quality in Linear Time

lucidrains/FLASH-pytorch 21 Feb 2022

We revisit the design choices in Transformers, and propose methods to address their weaknesses in handling long sequences.