APIRecX can migrate the knowledge of existing libraries to a new library, and can recommend APIs previously regarded as out-of-vocabulary (OOV).
In this paper, we propose a mixture model-based end-to-end method to model the syntactic-semantic dependency correlation in Semantic Role Labeling (SRL).
Specifically, we target the widely-used application scenario of image classification, and utilize three different datasets and five commonly-used performance metrics to assess 13 methods in total from diverse categories.
To alleviate expensive human labeling, semi-supervised semantic segmentation employs a few labeled images and an abundance of unlabeled images to predict a pixel-level label map of the same size.
Our proposed method is significantly superior in terms of UAR and F1 to the single-task and multi-task baselines, with p-values < 0.05.
Experimental results demonstrate that ViTScore evaluates image semantic similarity better than the other three typical metrics, which indicates that ViTScore is an effective performance metric when deployed in SC scenarios.
To mitigate bias in code generation models, we evaluate five bias mitigation prompting strategies: utilizing bias testing results to refine the code under zero-shot, one-shot, and few-shot prompting, as well as two Chain-of-Thought (CoT) prompts.
This addresses the issue of log events unseen in the training data, enhancing log representation.
Affordance learning considers the interaction opportunities for an actor in the scene and thus has wide application in scene understanding and intelligent robotics.
Specifically, we employ an asymmetric encoder to learn the compensating features of the RGB and thermal images.
Ranked #15 on Thermal Image Segmentation on MFN Dataset
Such attribute combinations provide substantial clues to the underlying root causes and are thus called the root causes of multidimensional data.
Although many feature extraction methods have been proposed to improve the performance of enhancer identification, they cannot learn position-related multiscale contextual information from raw DNA sequences.
Proposal segmentation allows proposal-pixel similarity transfer from base classes to novel classes, which enables the mask learning of novel classes.
A novel method is proposed that realizes feature-level fusion under the bird's-eye view (BEV) for a better feature representation.
Specifically, the ability to use a mask prior to help detect objects is learned from base categories and transferred to novel categories.
Through experiments, we find that, without regression, the performance can be equally promising as long as we carefully design the network to suit the training objective.
Emotional Voice Conversion (EVC) aims to convert the emotional style of a source speech signal to a target style while preserving its content and speaker identity information.
To achieve a profound understanding of how far we are from solving this problem and to provide suggestions for future research, in this paper we conduct a systematic and in-depth analysis of 5 state-of-the-art neural code summarization models on 6 widely used BLEU variants, 4 pre-processing operations and their combinations, and 3 widely used datasets.
In this setting, we propose a method called SimTrans to transfer pairwise semantic similarity from base categories to novel categories.
no code implementations • Akiko Aizawa, Frederic Bergeron, Junjie Chen, Fei Cheng, Katsuhiko Hayashi, Kentaro Inui, Hiroyoshi Ito, Daisuke Kawahara, Masaru Kitsuregawa, Hirokazu Kiyomaru, Masaki Kobayashi, Takashi Kodama, Sadao Kurohashi, Qianying Liu, Masaki Matsubara, Yusuke Miyao, Atsuyuki Morishima, Yugo Murawaki, Kazumasa Omura, Haiyue Song, Eiichiro Sumita, Shinji Suzuki, Ribeka Tanaka, Yu Tanaka, Masashi Toyoda, Nobuhiro Ueda, Honai Ueoka, Masao Utiyama, Ying Zhong
The global pandemic of COVID-19 has made the public pay close attention to related news, covering various domains, such as sanitation, treatment, and effects on education.
Our experimental results show that the PVBD scheme can correctly detect the existence of hidden information in both the 2LQR code and the LCAC 2D barcode.
However, crawled web images usually contain two types of noise, label noise and background noise, which introduce extra difficulties in utilizing them effectively.
Ranked #14 on Image Classification on WebVision-1000