A large portion of structured data does not yet reap the benefits of the Semantic Web.
Ranked #4 on Cell Entity Annotation on ToughTables-DBP
Because KGs are symbolic constructs, specialized techniques must be applied to make them compatible with data mining pipelines.
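One widely used family of such techniques (e.g. RDF2Vec-style approaches) turns the symbolic graph into sequences by extracting random walks, which can then be fed to a word2vec-style model to obtain numeric entity embeddings. A minimal sketch of the walk-extraction step, using a toy graph with hypothetical entities and predicates:

```python
import random

def random_walks(graph, start, n_walks, depth, rng=None):
    """Extract random walks from an adjacency dict {node: [(predicate, node), ...]}.
    Each walk alternates entities and predicates, starting at `start`."""
    rng = rng or random.Random(0)
    walks = []
    for _ in range(n_walks):
        walk = [start]
        node = start
        for _ in range(depth):
            edges = graph.get(node, [])
            if not edges:
                break  # dead end: stop this walk early
            pred, nxt = rng.choice(edges)
            walk.extend([pred, nxt])
            node = nxt
        walks.append(tuple(walk))
    return walks

# Toy knowledge graph (hypothetical triples, not from the paper)
kg = {
    "Alice": [("knows", "Bob"), ("worksAt", "ACME")],
    "Bob":   [("knows", "Carol")],
    "ACME":  [("locatedIn", "Ghent")],
}
walks = random_walks(kg, "Alice", n_walks=5, depth=2)
```

The resulting walks play the role of "sentences" whose "words" are entities and predicates; any sequence-embedding model can consume them.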
2 code implementations • 15 Jan 2020 • Gilles Vandewiele, Isabelle Dehaene, György Kovács, Lucas Sterckx, Olivier Janssens, Femke Ongenae, Femke De Backere, Filip De Turck, Kristien Roelens, Johan Decruyenaere, Sofie Van Hoecke, Thomas Demeester
Information extracted from electrohysterography recordings could serve as an interesting additional source of information for estimating the risk of preterm birth.
Classifiers have been shown to achieve state-of-the-art results on a wide range of datasets by taking as input the distances from an input time series to a set of discriminative shapelets.
Ranked #3 on Outlier Detection on ECG5000
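The shapelet transform described above maps each time series to a vector of minimum sliding-window distances to the shapelets; that distance matrix is what an ordinary classifier consumes. A minimal sketch with hand-picked shapelets (in practice they are learned or selected):

```python
import numpy as np

def shapelet_distance(series, shapelet):
    """Minimum Euclidean distance between a shapelet and any
    equal-length subsequence of the series (sliding window)."""
    m = len(shapelet)
    windows = np.lib.stride_tricks.sliding_window_view(series, m)
    return np.sqrt(((windows - shapelet) ** 2).sum(axis=1)).min()

def shapelet_transform(series_list, shapelets):
    """Map each series to its vector of distances to the shapelets."""
    return np.array([[shapelet_distance(s, sh) for sh in shapelets]
                     for s in series_list])

# Toy example: the first series contains the "bump" shapelet exactly
X = [np.array([0., 0., 1., 2., 1., 0.]),
     np.array([0., 0., 0., 0., 0., 0.])]
shapelets = [np.array([1., 2., 1.])]
features = shapelet_transform(X, shapelets)  # shape (2, 1)
```

A series containing a shapelet exactly gets distance 0 for that feature, so the transform makes the presence or absence of discriminative subsequences directly visible to the downstream classifier.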
Deep learning-based techniques are increasingly being used for different machine learning tasks on knowledge graphs.
Ranked #2 on Node Classification on BGS
The presence of bacteria or fungi in the bloodstream of patients is abnormal and can lead to life-threatening conditions.
The results show that GENESIM achieves better predictive performance on most of these datasets than decision tree induction techniques, and predictive performance comparable to that of the ensemble techniques.
In this work, we describe a surprising finding: a simple generalization of the classic count-based approach can reach near state-of-the-art performance on various high-dimensional and/or continuous deep RL benchmarks.
Ranked #1 on Atari Games on Atari 2600 Freeway
While there are methods with optimality guarantees in the setting of discrete state and action spaces, these methods cannot be applied in high-dimensional deep RL scenarios.
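One way to generalize count-based exploration to continuous or high-dimensional states is to hash each state into a discrete code, count visits per code, and add a bonus that decays with the visit count. A minimal sketch using a SimHash-style random projection; the class name and constants are illustrative, not from the paper:

```python
import numpy as np
from collections import Counter

class HashingBonus:
    """Count-based exploration bonus for continuous states:
    hash the state into a binary code via a random projection,
    count visits to each code, and return beta / sqrt(count)."""

    def __init__(self, state_dim, n_bits=16, beta=0.1, seed=0):
        rng = np.random.default_rng(seed)
        self.A = rng.normal(size=(n_bits, state_dim))  # random projection
        self.beta = beta
        self.counts = Counter()

    def bonus(self, state):
        # Sign pattern of the projection acts as a locality-sensitive hash
        code = tuple((self.A @ np.asarray(state) > 0).astype(int))
        self.counts[code] += 1
        return self.beta / np.sqrt(self.counts[code])

hb = HashingBonus(state_dim=4)
r1 = hb.bonus([0.1, 0.2, 0.3, 0.4])  # first visit: full bonus beta
r2 = hb.bonus([0.1, 0.2, 0.3, 0.4])  # repeat visit: bonus shrinks
```

Because nearby states tend to share a hash code, the counts generalize across similar states rather than treating every continuous state as unique.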
To improve prediction accuracy, this paper proposes: (i) Joint inference and learning by integration of back-propagation and loss-augmented inference in SSVM subgradient descent; (ii) Extending SSVM factors to neural networks that form highly nonlinear functions of input features.
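The loss-augmented inference step inside SSVM subgradient descent can be illustrated with a minimal multiclass case: inference picks the most violating label under score plus task loss, and the subgradient shifts weights toward the true label's features. A toy sketch under a linear scoring function and 0/1 task loss (the neural-factor extension would replace `W @ x` with a network's scores):

```python
import numpy as np

def ssvm_subgradient_step(W, x, y, lr=0.1):
    """One subgradient-descent step on the multiclass structured hinge loss.
    Loss-augmented inference: y_hat = argmax_y' [score(x, y') + Delta(y, y')],
    then move W toward the true label and away from the violator."""
    scores = W @ x
    delta = np.ones_like(scores)
    delta[y] = 0.0                          # 0/1 task loss Delta(y, y')
    y_hat = int(np.argmax(scores + delta))  # loss-augmented inference
    if y_hat != y:                          # nonzero subgradient
        W[y] += lr * x
        W[y_hat] -= lr * x
    return W, y_hat

W = np.zeros((3, 2))                # 3 labels, 2 input features (toy sizes)
x, y = np.array([1.0, -0.5]), 0
for _ in range(10):
    W, _ = ssvm_subgradient_step(W, x, y)
scores = W @ x                      # true label should now score highest
```

Integrating back-propagation here simply means routing the same subgradient through the factor network's parameters instead of a linear `W`.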