Position
681 papers with code • 0 benchmarks • 0 datasets
Most implemented papers
Neural Question Generation from Text: A Preliminary Study
Automatic question generation aims to generate questions from a text passage where the generated questions can be answered by certain sub-spans of the given passage.
Learning to Paint With Model-based Deep Reinforcement Learning
We show how to teach machines to paint like human painters, who can use a small number of strokes to create fantastic paintings.
MPNet: Masked and Permuted Pre-training for Language Understanding
Since BERT neglects dependency among predicted tokens, XLNet introduces permuted language modeling (PLM) for pre-training to address this problem.
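A minimal sketch of the permuted-factorization idea behind PLM (not the paper's implementation; the function name and the toy context/target split are illustrative):

```python
import numpy as np

def plm_prediction_order(tokens, num_predicted, rng):
    """Sample a permuted factorization order over token positions.

    The last `num_predicted` positions in the sampled order become the
    prediction targets; each target is conditioned on every position that
    precedes it in the permutation, not in left-to-right order, so the
    model sees dependencies among the predicted tokens.
    """
    order = rng.permutation(len(tokens))
    context = order[:-num_predicted]   # always visible
    targets = order[-num_predicted:]   # predicted one at a time
    steps = []
    for i, pos in enumerate(targets):
        visible = np.sort(np.concatenate([context, targets[:i]]))
        steps.append((pos, visible))
    return steps

rng = np.random.default_rng(0)
for pos, visible in plm_prediction_order(list("abcdef"), 2, rng):
    print(f"predict position {pos} given positions {visible}")
```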
An Attention Free Transformer
We introduce Attention Free Transformer (AFT), an efficient variant of Transformers that eliminates the need for dot product self attention.
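The AFT-full update fits in a few lines; below is a minimal NumPy sketch (single head, no masking, no numerical-stability tricks, with random values standing in for the learned pairwise position biases `w`):

```python
import numpy as np

def aft_full(Q, K, V, w):
    """AFT-full layer: a position-biased, key-weighted average of the
    values, gated elementwise by sigmoid(Q) -- no dot-product attention.
    Shapes: Q, K, V are (T, d); w is a learned (T, T) position bias.
    """
    # exp(K_{t'} + w_{t,t'}) for every query position t: shape (T, T, d)
    weights = np.exp(K[None, :, :] + w[:, :, None])
    num = (weights * V[None, :, :]).sum(axis=1)    # (T, d)
    den = weights.sum(axis=1)                      # (T, d)
    return 1.0 / (1.0 + np.exp(-Q)) * (num / den)  # sigmoid gate

rng = np.random.default_rng(0)
T, d = 5, 4
Y = aft_full(rng.normal(size=(T, d)), rng.normal(size=(T, d)),
             rng.normal(size=(T, d)), rng.normal(size=(T, T)))
print(Y.shape)  # (5, 4)
```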
Axial-DeepLab: Stand-Alone Axial-Attention for Panoptic Segmentation
In this paper, we attempt to remove the constraint of restricting self-attention to local regions by factorizing 2D self-attention into two 1D self-attentions.
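A bare-bones illustration of the factorization, with queries, keys, and values collapsed to the input itself and without the paper's position-sensitive terms:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attend_1d(x):
    """Plain single-head self-attention along the second-to-last axis.
    x: (..., L, d); positional terms are omitted for brevity."""
    scores = x @ np.swapaxes(x, -1, -2) / np.sqrt(x.shape[-1])
    return softmax(scores) @ x

def axial_attention(x):
    """Factorized 2D self-attention: one 1D pass along the height axis,
    then one along the width axis. x: (H, W, d). Cost drops from
    O((HW)^2) to O(HW * (H + W))."""
    x = np.swapaxes(attend_1d(np.swapaxes(x, 0, 1)), 0, 1)  # height pass
    return attend_1d(x)                                     # width pass

x = np.random.default_rng(0).normal(size=(8, 8, 16))
print(axial_attention(x).shape)  # (8, 8, 16)
```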
Spelling Error Correction with Soft-Masked BERT
A state-of-the-art method for this task uses BERT, the language representation model, to select a character from a list of correction candidates (including non-correction) at each position of the sentence.
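In sketch form, that per-position selection looks like the following, where `score` is a hypothetical stand-in for the BERT-based contextual scorer (not the paper's API):

```python
def correct_sentence(chars, candidates, score):
    """Per-position candidate selection: at each position, keep the
    original character or replace it with whichever correction
    candidate the contextual scorer rates highest.

    chars: the input sentence as a sequence of characters.
    candidates: dict mapping position -> set of candidate characters.
    score(sent, i, c): contextual score of character c at position i
    (a stand-in for the BERT-based model described above).
    """
    out = list(chars)
    for i, ch in enumerate(chars):
        options = {ch} | candidates.get(i, set())  # non-correction included
        out[i] = max(options, key=lambda c: score(out, i, c))
    return "".join(out)
```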
A More Fine-Grained Aspect-Sentiment-Opinion Triplet Extraction Task
Aspect Sentiment Triplet Extraction (ASTE) aims to extract aspect term, sentiment and opinion term triplets from sentences and tries to provide a complete solution for aspect-based sentiment analysis (ABSA).
Mega: Moving Average Equipped Gated Attention
The design choices in the Transformer attention mechanism, including weak inductive bias and quadratic computational complexity, have limited its application for modeling long sequences.
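The moving-average component is simple to state; here is a minimal sketch of the damped EMA that Mega places before its gated attention, with `alpha` and `delta` standing in for the learned per-dimension parameters:

```python
import numpy as np

def damped_ema(x, alpha, delta):
    """Damped exponential moving average: the position-aware local
    inductive bias Mega applies to the sequence before gated attention.
        h_t = alpha * x_t + (1 - alpha * delta) * h_{t-1}   (elementwise)
    x: (T, d); alpha, delta: (d,) arrays with entries in (0, 1)."""
    h = np.zeros(x.shape[1])
    out = np.empty_like(x)
    for t in range(x.shape[0]):
        h = alpha * x[t] + (1.0 - alpha * delta) * h
        out[t] = h
    return out
```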
A Length-Extrapolatable Transformer
Position modeling plays a critical role in Transformers.
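For reference, the snippet below is the standard absolute sinusoidal encoding (Vaswani et al.), i.e. the baseline form of position modeling that length-extrapolation work such as this improves on; it is not this paper's extrapolatable scheme:

```python
import numpy as np

def sinusoidal_positions(T, d):
    """Standard absolute sinusoidal position encoding:
    PE[t, 2i] = sin(t / 10000^(2i/d)), PE[t, 2i+1] = cos(same angle).
    Assumes an even model dimension d."""
    assert d % 2 == 0
    pos = np.arange(T)[:, None]          # (T, 1)
    i = np.arange(0, d, 2)[None, :]      # (1, d/2)
    angles = pos / (10000.0 ** (i / d))  # (T, d/2)
    pe = np.empty((T, d))
    pe[:, 0::2] = np.sin(angles)
    pe[:, 1::2] = np.cos(angles)
    return pe
```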
Drag Your GAN: Interactive Point-based Manipulation on the Generative Image Manifold
Synthesizing visual content that meets users' needs often requires flexible and precise controllability of the pose, shape, expression, and layout of the generated objects.