A second (multi-relational) GCN is then applied to the utterance states to produce discourse-relation-augmented utterance representations, which are fused with the token states in each utterance and fed into a dropped-pronoun recovery layer.
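As a minimal sketch of the idea, the snippet below implements one multi-relational GCN layer (in the R-GCN style) over utterance states and then concatenates the resulting utterance representation onto every token state. All sizes, adjacency matrices, and weight initializations are hypothetical placeholders, not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 16                       # hidden size (illustrative)
n_utt, n_tok = 5, 7          # utterances, tokens per utterance

# One adjacency matrix per discourse relation type (hypothetical data).
relations = [rng.random((n_utt, n_utt)) < 0.3 for _ in range(3)]
H = rng.standard_normal((n_utt, d))          # utterance states
T = rng.standard_normal((n_utt, n_tok, d))   # token states per utterance

def rgcn_layer(H, relations, rng):
    """One relational-GCN layer: a self-transform plus a mean-aggregated,
    relation-specific transform of each utterance's neighbors."""
    W0 = rng.standard_normal((d, d)) / np.sqrt(d)
    out = H @ W0
    for A in relations:
        Wr = rng.standard_normal((d, d)) / np.sqrt(d)
        deg = np.maximum(A.sum(1, keepdims=True), 1)  # avoid divide-by-zero
        out += (A / deg) @ H @ Wr                     # normalized aggregation
    return np.maximum(out, 0.0)                       # ReLU

U = rgcn_layer(H, relations, rng)                     # (n_utt, d)
# Fuse: concatenate each token state with its utterance representation.
fused = np.concatenate([T, np.broadcast_to(U[:, None, :], T.shape)], axis=-1)
```

The fused tensor, of shape `(n_utt, n_tok, 2 * d)`, is what a downstream recovery layer would consume.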
We propose DeRenderNet, a deep neural network that decomposes albedo and latent lighting and renders shape-dependent and shape-independent shadings given a single image of an outdoor urban scene; it is trained in a self-supervised manner.
We train a deep neural network to regress intrinsic cues under physically-based constraints and use them to perform global and local lighting estimation.
Exploratory analysis also demonstrates that the GCRF helped capture the dependencies between pronouns in neighboring utterances, thus contributing to the performance improvements.
We anticipate that our ST-MNIST dataset will be of interest to, and useful for, the neuromorphic and robotics research communities.
When we take photos through glass windows or doors, the transmitted background scene is often blended with undesirable reflections.
To solve this problem, we propose a prism module to disentangle the semantic aspects of words and reduce noise at the input layer of a model.
For the cardiac/lung phantom, an additional cardiac-gated 2D-OSEM set was reconstructed.
Pronouns are often dropped in Chinese sentences, and this happens more frequently in conversational genres as their referents can be easily understood from context.
Dropout is used to avoid overfitting by randomly dropping units from the neural networks during training.
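A minimal NumPy sketch of the standard "inverted dropout" formulation: units are zeroed with probability `p` during training, and the survivors are rescaled by `1/(1-p)` so the expected activation is unchanged; at test time the input passes through untouched.

```python
import numpy as np

def dropout(x, p=0.5, training=True, rng=None):
    """Inverted dropout: zero each unit with probability p during training
    and rescale survivors by 1/(1-p); identity at test time."""
    if not training or p == 0.0:
        return x
    rng = rng or np.random.default_rng(0)
    mask = rng.random(x.shape) >= p      # keep each unit with prob 1-p
    return x * mask / (1.0 - p)

h = np.ones((4, 8))
out = dropout(h, p=0.5)                  # entries are either 0.0 or 2.0
```

With `p=0.5`, every surviving unit is scaled to `2.0`, so the expected value of each output entry remains `1.0`.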
Then, we introduce a Key Information Guide Network (KIGN), which encodes keywords into a key information representation to guide the generation process.
For Chinese word segmentation, the large-scale annotated corpora focus mainly on newswire, and only a small amount of annotated data is available in other domains such as patents and literature.
Detection and correction of Chinese grammatical errors are two major challenges for automatic Chinese grammatical error diagnosis. This paper presents an N-gram model for the automatic detection and correction of Chinese grammatical errors in the NLPTEA 2017 shared task.
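To illustrate the general N-gram approach (not this paper's exact system), the sketch below trains toy bigram counts on a few correct sentences and flags bigrams whose conditional probability falls below a threshold as likely error positions. The corpus, tokens, and threshold are all illustrative assumptions.

```python
from collections import Counter

# Toy corpus of correct, pre-segmented sentences; a real system would
# train on a large segmented Chinese corpus.
corpus = [
    "我 喜欢 学习 中文",
    "他 喜欢 学习 英文",
    "我 喜欢 吃 苹果",
]

unigrams, bigrams = Counter(), Counter()
for sent in corpus:
    toks = ["<s>"] + sent.split() + ["</s>"]
    unigrams.update(toks)
    bigrams.update(zip(toks, toks[1:]))

def flag_errors(sentence, threshold=0.1):
    """Flag bigrams with low conditional probability P(w2 | w1) as
    candidate grammatical errors."""
    toks = ["<s>"] + sentence.split() + ["</s>"]
    flagged = []
    for w1, w2 in zip(toks, toks[1:]):
        p = bigrams[(w1, w2)] / unigrams[w1] if unigrams[w1] else 0.0
        if p < threshold:
            flagged.append((w1, w2, p))
    return flagged

flag_errors("我 喜欢 学习 中文")   # well-formed: nothing flagged
flag_errors("我 学习 喜欢 中文")   # scrambled word order: unseen bigrams flagged
```

Correction would then substitute candidate words at the flagged positions and keep the substitution that maximizes the sentence's N-gram score.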
For type IIB, the results of arXiv:1505.6703 show that our candidate twisted supergravity theory admits a unique quantization in perturbation theory.