We evaluate the performance of the proposed model on three tasks: code question answering, code clone detection, and code refinement.
To alleviate feature suppression, we propose contrastive learning for unsupervised sentence embedding with soft negative samples (SNCSE).
To achieve such an ambitious goal, new mechanisms for foreign pronunciation generation and language model (LM) enrichment have been devised.
The inheritance part is learned with a similarity loss to transfer the existing learned knowledge from the teacher model to the student model, while the exploration part is encouraged to learn representations different from the inherited ones with a dissimilarity loss.
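As a minimal sketch of this two-part objective (the loss forms, feature names, and weighting below are illustrative assumptions, not the authors' exact formulation), the inherited features can be pulled toward the teacher with a cosine-similarity loss while the exploration features are pushed away from the inherited ones:

```python
import numpy as np

def cosine(a, b):
    # Cosine similarity between two feature vectors.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def inherit_explore_loss(teacher_feat, inherit_feat, explore_feat, alpha=1.0):
    """Toy combination: pull the inherited part toward the teacher
    (similarity loss) while pushing the exploration part away from the
    inherited representation (dissimilarity loss)."""
    sim_loss = 1.0 - cosine(inherit_feat, teacher_feat)         # small when aligned
    dissim_loss = max(0.0, cosine(explore_feat, inherit_feat))  # small when orthogonal
    return sim_loss + alpha * dissim_loss

# When the inherited features match the teacher and the exploration
# features are orthogonal to them, both terms vanish.
t = np.array([1.0, 0.0])
loss = inherit_explore_loss(t, inherit_feat=np.array([2.0, 0.0]),
                            explore_feat=np.array([0.0, 1.0]))
```

In practice the two parts would be heads of one student network trained end to end; the scalar `alpha` balancing the two terms is a hypothetical knob.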
Specifically, we introduce gait recognition as an auxiliary task to drive the image ReID model to learn cloth-agnostic representations by leveraging personally unique and cloth-independent gait information. We name this framework GI-ReID.
Second, different body parts possess different scales, and even the same part in different frames can appear at different locations and scales.
We tackle implicit discourse relation classification, a task of automatically determining semantic relationships between arguments.
The topology of the adjacency graph is a key factor for modeling the correlations of the input skeletons.
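To make this concrete, here is a hedged sketch of how a skeleton's adjacency graph is typically prepared for graph convolution (the 5-joint layout, feature sizes, and the symmetric normalization are illustrative assumptions in the style of common GCN-based skeleton models such as the ST-GCN family, not this paper's exact construction):

```python
import numpy as np

# Toy 5-joint skeleton: 0-head, 1-torso, 2/3-arms, 4-legs (illustrative).
edges = [(0, 1), (1, 2), (1, 3), (1, 4)]
n = 5
A = np.zeros((n, n))
for i, j in edges:
    A[i, j] = A[j, i] = 1.0

# Add self-loops and symmetrically normalize the adjacency matrix.
A_hat = A + np.eye(n)
deg = A_hat.sum(axis=1)
D_inv_sqrt = np.diag(deg ** -0.5)
A_norm = D_inv_sqrt @ A_hat @ D_inv_sqrt

# One graph-convolution step mixes each joint's features with its neighbors',
# so the chosen topology directly shapes which joints influence each other.
X = np.random.randn(n, 8)   # per-joint input features
W = np.random.randn(8, 16)  # learnable weights (random here)
H = A_norm @ X @ W          # aggregated, transformed features
```

Changing `edges` — the graph topology — changes which correlations the convolution can model, which is why the topology is such a key design factor.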
Embedding nodes of a large network into a metric (e.g., Euclidean) space has become an area of active research in statistical machine learning, which has found applications in the natural and social sciences.
no code implementations • 3 Jun 2020 • Zhi Shiuh Lim, Changjian Li, Zhen Huang, Xiao Chi, Jun Zhou, Shengwei Zeng, Ganesh Ji Omar, Yuan Ping Feng, Andrivo Rusydi, Stephen John Pennycook, Thirumalai Venkatesan, Ariando Ariando
Here, the emergence, tuning, and interpretation of a hump-shaped Hall effect from a CaMnO3/CaIrO3/CaMnO3 trilayer structure are studied in detail.
We present the detailed mathematical construction of our method.
However, when the translation task involves Chinese, semantic granularity remains at the word and character level, so a more fine-grained translation model for Chinese is still needed.
The most popular way to train very deep CNNs is to use shortcut connections (SC) together with batch normalization (BN).
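The interplay of the two ingredients can be sketched in a few lines (a simplified NumPy sketch with dense layers instead of convolutions, inference-style batch normalization without learned scale/shift, and illustrative shapes; real deep CNNs use convolutional layers and trained BN statistics):

```python
import numpy as np

def batch_norm(x, eps=1e-5):
    # Normalize each feature over the batch dimension (inference-style,
    # no learned scale/shift for brevity).
    mean = x.mean(axis=0, keepdims=True)
    var = x.var(axis=0, keepdims=True)
    return (x - mean) / np.sqrt(var + eps)

def residual_block(x, W1, W2):
    """Pre-activation residual block: the shortcut connection adds the
    input back onto the transformed signal, so information (and gradients)
    can bypass the weight layers entirely."""
    h = np.maximum(batch_norm(x) @ W1, 0.0)  # BN -> linear -> ReLU
    h = np.maximum(batch_norm(h) @ W2, 0.0)
    return x + h                             # shortcut connection

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))
W1 = rng.normal(size=(8, 8)) * 0.1
W2 = rng.normal(size=(8, 8)) * 0.1
y = residual_block(x, W1, W2)
```

Note that with all-zero weights the block reduces to the identity, which is exactly the property that makes very deep stacks of such blocks trainable.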
Rapid progress has been made in the field of reading comprehension and question answering, where several systems have achieved human parity in some simplified settings.
This paper considers the reading comprehension task in which multiple documents are given as input.
Open-domain targeted sentiment analysis aims to detect opinion targets along with their sentiment polarities from a sentence.
Although current reading comprehension systems have achieved significant advancements, their promising performance is often obtained at the cost of ensembling numerous models.
Machine reading comprehension with unanswerable questions aims to abstain from answering when no answer can be inferred.
An effective technique for filtering free-rider episodes is to use a partition model that divides an episode into two consecutive subepisodes and to compare the observed support of such an episode with its expected support under the assumption that the two subepisodes occur independently.
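The independence test above can be sketched as follows (a simplified ratio-based sketch with illustrative names and threshold; the episode-mining literature typically uses a proper statistical significance test on window counts):

```python
def is_free_rider(obs_support, sub1_support, sub2_support, n_windows,
                  threshold=2.0):
    """Flag an episode as a free rider when its observed support does not
    clearly exceed the support expected if its two consecutive subepisodes
    occurred independently. Supports are counts over n_windows windows."""
    p1 = sub1_support / n_windows            # empirical frequency of subepisode 1
    p2 = sub2_support / n_windows            # empirical frequency of subepisode 2
    expected = p1 * p2 * n_windows           # expected support under independence
    return obs_support < threshold * expected

# An episode seen 50 times whose subepisodes each appear in 900 of 1000
# windows is expected about 810 times by chance, so it is filtered.
flagged = is_free_rider(50, 900, 900, 1000)
```

An episode whose observed support far exceeds the independence expectation (say, seen 900 times when only 90 are expected) would survive the filter as genuinely correlated.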
Enforcing open source licenses such as the GNU General Public License (GPL), analyzing a binary for possible vulnerabilities, and code maintenance are all situations where it is useful to be able to determine the source code provenance of a binary.
In this paper, we introduce the Reinforced Mnemonic Reader for machine reading comprehension tasks, which enhances previous attentive readers in two aspects.
We propose a multi-objective framework that learns, alongside the primary target of clean log-power spectra (LPS) features used directly to construct the enhanced speech signals, secondary targets not directly related to the intended speech enhancement (SE) task.
We present a Bayesian approach to adapting parameters of a well-trained context-dependent, deep-neural-network, hidden Markov model (CD-DNN-HMM) to improve automatic speech recognition performance.
Feature selection has attracted significant attention in data mining and machine learning in the past decades.