Position

681 papers with code • 0 benchmarks • 0 datasets

Most implemented papers

Neural Question Generation from Text: A Preliminary Study

magic282/NQG 6 Apr 2017

Automatic question generation aims to generate questions from a text passage where the generated questions can be answered by certain sub-spans of the given passage.

Learning to Paint With Model-based Deep Reinforcement Learning

hzwer/ICCV2019-LearningToPaint ICCV 2019

We show how to teach machines to paint like human painters, who can use a small number of strokes to create fantastic paintings.

MPNet: Masked and Permuted Pre-training for Language Understanding

microsoft/MPNet NeurIPS 2020

Since BERT neglects dependency among predicted tokens, XLNet introduces permuted language modeling (PLM) for pre-training to address this problem.
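To make the PLM idea concrete, here is a toy sketch (my own illustration, not MPNet's or XLNet's implementation): sample a random factorization order, then predict the tokens at the end of that order, each conditioned only on the tokens that come earlier in the permutation.

```python
import torch

def plm_split(num_tokens, num_predict):
    """Toy illustration of permuted language modeling (PLM): sample a random
    factorization order; the last `num_predict` positions in that order are the
    prediction targets, conditioned on the positions that precede them."""
    perm = torch.randperm(num_tokens)
    context_positions = perm[:-num_predict]   # visible to the model
    target_positions = perm[-num_predict:]    # predicted one by one in permuted order
    return context_positions, target_positions

ctx, tgt = plm_split(num_tokens=10, num_predict=3)
```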

An Attention Free Transformer

labmlai/annotated_deep_learning_paper_implementations 28 May 2021

We introduce Attention Free Transformer (AFT), an efficient variant of Transformers that eliminates the need for dot product self attention.
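A minimal sketch of the AFT-full update as I read it from the paper: weights come from the keys plus learned pairwise position biases and are combined element-wise with the values, so no query-key dot products are computed. Shapes and names below are my own.

```python
import torch

def aft_full(q, k, v, w):
    """Sketch of the AFT-full operation: element-wise weighting with learned
    pairwise position biases, no dot-product attention.
    q, k, v: (T, d) tensors; w: (T, T) learned position-bias matrix."""
    # exp(K_{t'} + w_{t,t'}) yields a (T, T, d) weight tensor without any QK^T product
    weights = torch.exp(k.unsqueeze(0) + w.unsqueeze(-1))   # (T, T, d)
    num = (weights * v.unsqueeze(0)).sum(dim=1)             # (T, d)
    den = weights.sum(dim=1)                                 # (T, d)
    return torch.sigmoid(q) * num / den

T, d = 8, 16
q, k, v = (torch.randn(T, d) for _ in range(3))
w = torch.zeros(T, T)            # learned pairwise position biases (zeros for the demo)
out = aft_full(q, k, v, w)       # (T, d)
```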

Axial-DeepLab: Stand-Alone Axial-Attention for Panoptic Segmentation

google-research/deeplab2 ECCV 2020

In this paper, we attempt to remove the constraint of restricting self-attention to a local region by factorizing 2D self-attention into two 1D self-attentions.
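The sketch below illustrates the factorization only (it is not the paper's exact axial-attention block, which also adds position-sensitive terms): one 1D self-attention along each row and another along each column gives every pixel a global receptive field at roughly O(HW(H+W)) cost instead of O((HW)^2).

```python
import torch
import torch.nn as nn

class AxialAttention2d(nn.Module):
    """Illustrative axial attention: 2D self-attention factorized into a 1D
    attention along the width axis followed by a 1D attention along the height axis."""
    def __init__(self, dim, heads=4):
        super().__init__()
        self.row_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.col_attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, x):                                  # x: (B, H, W, C)
        b, h, w, c = x.shape
        # attend along the width axis within each row
        rows = x.reshape(b * h, w, c)
        rows, _ = self.row_attn(rows, rows, rows)
        x = rows.reshape(b, h, w, c)
        # attend along the height axis within each column
        cols = x.permute(0, 2, 1, 3).reshape(b * w, h, c)
        cols, _ = self.col_attn(cols, cols, cols)
        return cols.reshape(b, w, h, c).permute(0, 2, 1, 3)

x = torch.randn(2, 16, 16, 64)
y = AxialAttention2d(64)(x)      # same shape as x, with global context along both axes
```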

Spelling Error Correction with Soft-Masked BERT

gitabtion/BertBasedCorrectionModels ACL 2020

A state-of-the-art method for the task uses BERT, the language representation model, to select a character from a list of candidates (including non-correction) at each position of the sentence.
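A rough sketch of that BERT-based baseline (not Soft-Masked BERT itself): score a per-position candidate list, including the original character, with the masked-language-model head and keep the best-scoring candidate. The checkpoint name, example sentence, and candidate list are placeholders for illustration.

```python
import torch
from transformers import BertForMaskedLM, BertTokenizer

tok = BertTokenizer.from_pretrained("bert-base-chinese")      # example checkpoint
model = BertForMaskedLM.from_pretrained("bert-base-chinese").eval()

sentence = "我今天很高心"          # example input with a typo (心 instead of 兴)
candidates = ["心", "兴"]          # candidates for one position, original char included
position = 6                       # character index in the tokenized sequence ([CLS] is 0)

enc = tok(sentence, return_tensors="pt")
with torch.no_grad():
    logits = model(**enc).logits[0]            # (sequence_length, vocab_size)

cand_ids = tok.convert_tokens_to_ids(candidates)
best = candidates[int(torch.argmax(logits[position, cand_ids]))]  # keep best candidate
```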

A More Fine-Grained Aspect-Sentiment-Opinion Triplet Extraction Task

l294265421/ASOTE 29 Mar 2021

Aspect Sentiment Triplet Extraction (ASTE) aims to extract aspect term, sentiment and opinion term triplets from sentences and tries to provide a complete solution for aspect-based sentiment analysis (ABSA).

Mega: Moving Average Equipped Gated Attention

facebookresearch/mega 21 Sep 2022

The design choices in the Transformer attention mechanism, including weak inductive bias and quadratic computational complexity, have limited its application for modeling long sequences.
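The "moving average" in the title refers to an exponential moving average (EMA) sub-layer placed before the gated attention to inject a recency/locality inductive bias. Below is a toy one-dimensional EMA for intuition only; Mega's actual EMA is multi-dimensional and damped, with learned decay rates.

```python
import torch

def ema(x, alpha=0.1):
    """Toy exponential moving average over the time axis: each output mixes the
    current input with the running history, biasing representations toward recent tokens.
    x: (T, d) sequence of token representations."""
    out = torch.empty_like(x)
    state = torch.zeros(x.shape[-1])
    for t in range(x.shape[0]):
        state = alpha * x[t] + (1 - alpha) * state
        out[t] = state
    return out

y = ema(torch.randn(32, 16))   # (32, 16), smoothed along the sequence dimension
```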

A Length-Extrapolatable Transformer

microsoft/torchscale 20 Dec 2022

Position modeling plays a critical role in Transformers.
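Since this entry concerns position modeling itself, here is the classic sinusoidal absolute position encoding from "Attention Is All You Need" as a reference point; it illustrates what position modeling means but is not the length-extrapolatable relative scheme this paper proposes.

```python
import torch

def sinusoidal_positions(seq_len, dim):
    """Classic sinusoidal position encoding: even channels use sine, odd channels
    use cosine, with geometrically spaced frequencies. Added to token embeddings."""
    pos = torch.arange(seq_len, dtype=torch.float32).unsqueeze(1)   # (T, 1)
    i = torch.arange(0, dim, 2, dtype=torch.float32)                # (dim/2,)
    angles = pos / (10000 ** (i / dim))                             # (T, dim/2)
    pe = torch.zeros(seq_len, dim)
    pe[:, 0::2] = torch.sin(angles)
    pe[:, 1::2] = torch.cos(angles)
    return pe

pe = sinusoidal_positions(128, 64)   # added to token embeddings before the first layer
```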

Drag Your GAN: Interactive Point-based Manipulation on the Generative Image Manifold

XingangPan/DragGAN 18 May 2023

Synthesizing visual content that meets users' needs often requires flexible and precise controllability of the pose, shape, expression, and layout of the generated objects.