In this paper, we introduce Joint Query Intent Understanding (JointMap), a deep learning model that simultaneously learns two high-level user intent tasks: 1) identifying a query's commercial vs. non-commercial intent, and 2) mapping a product query to a set of relevant categories in a product taxonomy.
The approach we adopt is one of active learning, i.e., incrementally selecting a set of samples that need supervision based on the current model, obtaining labels for these samples, retraining the model with the newly labeled samples, and then proceeding to select the next set.
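The loop just described (select informative samples, query labels, retrain, repeat) can be sketched as pool-based uncertainty sampling. The toy one-dimensional data, the threshold classifier, and the sigmoid probability proxy below are illustrative assumptions for the sketch, not the method of any of the papers above:

```python
import numpy as np

def uncertainty_sample(probs, k):
    """Pick the k points the model is least sure about (probability closest to 0.5)."""
    margin = np.abs(probs - 0.5)
    return np.argsort(margin)[:k]

# Toy task: 1-D points in [-1, 1], true label is x > 0.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, 200)
y = (X > 0).astype(int)

# Seed set: one example of each class (leftmost and rightmost points).
labeled = [int(np.argmin(X)), int(np.argmax(X))]

for _ in range(5):  # active-learning rounds
    # "Retrain": threshold halfway between the labeled class means.
    m0 = X[[i for i in labeled if y[i] == 0]].mean()
    m1 = X[[i for i in labeled if y[i] == 1]].mean()
    thr = (m0 + m1) / 2
    # Probability proxy: sigmoid of the signed distance to the threshold.
    probs = 1.0 / (1.0 + np.exp(-10.0 * (X - thr)))
    # Query the "oracle" (here, y) for the most uncertain pool points.
    pool = [i for i in range(len(X)) if i not in labeled]
    picks = uncertainty_sample(probs[pool], 2)
    labeled += [pool[i] for i in picks]
```

The key design choice is the acquisition function: here the margin `|p - 0.5|`, but entropy or disagreement among an ensemble are common alternatives.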
To this end, we introduce a discriminative active learning approach for domain adaptation to reduce the effort of data annotation.
Active learning is a framework in which the learning machine can select the samples to be used for training.
Existing approaches to active learning maximize system performance by sampling for annotation the unlabeled instances that yield the most efficient training.
Segmentation is a prerequisite yet challenging task for medical image analysis.
At this scale, the fluid properties are affected by nanoconfinement effects due to the increased fluid-solid interactions.