no code implementations • 29 Jun 2023 • Daniel T. Chang
We discuss these in this paper, as well as major uses of LLMs for CODL, including concept extraction from text, concept graph extraction from text, and concept learning.
no code implementations • 4 Mar 2023 • Daniel T. Chang
As part of the recent research effort on quantum natural language processing (QNLP), variational quantum sentence classifiers (VQSCs) have been implemented and supported in lambeq / DisCoPy, based on the DisCoCat model of sentence meaning.
no code implementations • 8 Nov 2022 • Daniel T. Chang
For machine learning, the notion of similarity assumes that points close in the feature space should be close in the machine learning task space.
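The similarity assumption can be illustrated with a minimal sketch (NumPy; the feature vectors are invented for illustration): points with high feature-space similarity are assumed to behave alike in the task space, e.g. to share a label.

```python
import numpy as np

def cosine_similarity(x, y):
    """Cosine similarity between two feature vectors."""
    return float(np.dot(x, y) / (np.linalg.norm(x) * np.linalg.norm(y)))

# Points close in feature space (high similarity) are assumed to be close
# in the task space as well -- e.g. a nearest-neighbor classifier would
# give a and b the same label, but not necessarily a and c.
a = np.array([1.0, 0.9, 0.1])
b = np.array([0.9, 1.0, 0.2])   # close to a in feature space
c = np.array([-1.0, 0.1, 0.9])  # far from a in feature space

assert cosine_similarity(a, b) > cosine_similarity(a, c)
```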
no code implementations • 28 Sep 2022 • Daniel T. Chang
In this paper we discuss some important aspects of PQCs with quantum kernels including PQCs, quantum kernels, quantum kernels with quantum advantage, and the trainability of quantum kernels.
no code implementations • 16 Jul 2022 • Daniel T. Chang
Deep learning for molecular science has so far mainly focused on 2D molecular graphs.
no code implementations • 13 May 2022 • Daniel T. Chang
We discuss the use of dual embodied-symbolic concept representations for molecular graph representation learning, specifically with exemplar-based contrastive self-supervised learning (SSL).
no code implementations • 1 Mar 2022 • Daniel T. Chang
As such, we further advocate the use of dual embodied-symbolic concept representations for deep learning.
no code implementations • 5 Feb 2022 • Daniel T. Chang
We discuss extending CSSL (1) to be based mainly on exemplars and only secondarily on data augmentation, and (2) to apply to both unlabeled data (of which a large amount is generally available) and labeled data (from which a few exemplars carrying valuable supervised knowledge can be obtained).
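The exemplar-based idea can be sketched with a standard InfoNCE-style contrastive loss (NumPy; vectors, seed, and temperature are invented for illustration): the positive is another exemplar of the same concept rather than an augmented view of the anchor.

```python
import numpy as np

def info_nce(anchor, positive, negatives, tau=0.1):
    """InfoNCE contrastive loss for one anchor.

    In exemplar-based CSSL the positive is another exemplar of the same
    concept (for labeled data) rather than only an augmented view of the
    anchor, as in augmentation-based CSSL."""
    def sim(u, v):
        return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    logits = np.array([sim(anchor, positive)] +
                      [sim(anchor, n) for n in negatives]) / tau
    logits -= logits.max()                       # numerical stability
    return -np.log(np.exp(logits[0]) / np.exp(logits).sum())

rng = np.random.default_rng(0)
anchor = rng.normal(size=8)
positive = anchor + 0.05 * rng.normal(size=8)    # same-concept exemplar
negatives = [rng.normal(size=8) for _ in range(5)]
loss = info_nce(anchor, positive, negatives)     # small when the positive
                                                 # is truly similar
```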
no code implementations • 10 Dec 2021 • Daniel T. Chang
Concept-oriented deep learning (CODL) is a general approach to meet the future challenges for deep learning: (1) learning with little or no external supervision, (2) coping with test examples that come from a different distribution than the training examples, and (3) integrating deep learning with symbolic AI.
no code implementations • 14 Jul 2021 • Daniel T. Chang
To support this, we propose hybrid Bayesian neural networks with functional probabilistic layers that encode function (and activation) uncertainty.
no code implementations • 22 Jun 2021 • Daniel T. Chang
Since these probabilistic layers are designed to be drop-in replacements for their deterministic counterparts, Bayesian neural networks provide a direct and natural way to extend conventional deep neural networks to support probabilistic deep learning.
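The drop-in idea can be sketched in plain NumPy (all shapes, names, and the fixed posterior parameters are invented for illustration): the probabilistic layer keeps the same call signature as the deterministic one, but samples its weights from a learned Gaussian posterior on each call.

```python
import numpy as np

rng = np.random.default_rng(42)

def dense(x, W, b):
    """Deterministic dense layer: y = xW + b."""
    return x @ W + b

def bayesian_dense(x, W_mu, W_rho, b):
    """Drop-in probabilistic counterpart: weights are sampled from a
    Gaussian posterior N(W_mu, softplus(W_rho)^2) on each call (the
    reparameterization trick), so repeated calls give different outputs
    whose spread reflects model (epistemic) uncertainty."""
    W_sigma = np.log1p(np.exp(W_rho))            # softplus keeps sigma > 0
    W = W_mu + W_sigma * rng.normal(size=W_mu.shape)
    return x @ W + b

x = rng.normal(size=(1, 4))
W_mu = rng.normal(size=(4, 2))
W_rho = -3.0 * np.ones((4, 2))                   # small posterior scale
b = np.zeros(2)

y_det = dense(x, W_mu, b)                        # one fixed answer
ys = np.stack([bayesian_dense(x, W_mu, W_rho, b) for _ in range(100)])
# The sampled outputs scatter around the deterministic output.
```

Libraries such as TensorFlow Probability package this pattern as layers that mirror their deterministic counterparts' interfaces.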
no code implementations • 31 May 2021 • Daniel T. Chang
Probabilistic deep learning is deep learning that accounts for uncertainty, both model uncertainty and data uncertainty.
no code implementations • 6 Jul 2020 • Daniel T. Chang
To facilitate the incorporation of geometry in deep learning on 3D graphs, we propose a message-passing graph convolutional network based on the distance-geometric graph representation: DG-GCN (distance-geometric graph convolution network).
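A minimal sketch of distance-geometric message passing (NumPy; the toy graph, weights, and the Gaussian distance weighting are invented for illustration, not the paper's exact formulation): neighbor messages are weighted by a function of the geometric edge distance.

```python
import numpy as np

def dg_conv(h, edges, dist, W_self, W_nbr, w_dist):
    """One simplified distance-geometric message-passing step.

    Each node aggregates neighbor features weighted by a function of the
    edge distance (here a Gaussian of the distance), then applies a linear
    self-update and a ReLU."""
    msg = np.zeros_like(h @ W_nbr)
    for (i, j), d in zip(edges, dist):
        g = np.exp(-w_dist * d ** 2)       # distance-based edge weight
        msg[i] += g * (h[j] @ W_nbr)       # message j -> i
        msg[j] += g * (h[i] @ W_nbr)       # undirected: message i -> j
    return np.maximum(h @ W_self + msg, 0.0)

# Toy 3-node chain with 3D coordinates; edge distances come from geometry.
coords = np.array([[0.0, 0.0, 0.0], [1.5, 0.0, 0.0], [3.1, 0.0, 0.0]])
edges = [(0, 1), (1, 2)]
dist = [float(np.linalg.norm(coords[i] - coords[j])) for i, j in edges]
h = np.eye(3)                              # one-hot initial node features
rng = np.random.default_rng(1)
W_self, W_nbr = rng.normal(size=(3, 4)), rng.normal(size=(3, 4))
h1 = dg_conv(h, edges, dist, W_self, W_nbr, w_dist=0.5)
```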
no code implementations • 2 Jun 2020 • Daniel T. Chang
The geometry of three-dimensional (3D) graphs, consisting of nodes and edges, plays a crucial role in many important applications.
no code implementations • 11 Dec 2019 • Daniel T. Chang
Finding the best configuration for these hyperparameters in such a high-dimensional space, where each model training / validation run is time-consuming and expensive, is a non-trivial challenge.
no code implementations • 24 Oct 2019 • Daniel T. Chang
In this paper, we discuss the use of tiered graph autoencoders together with graph prediction for molecular graphs.
no code implementations • 22 Aug 2019 • Daniel T. Chang
As a result of using tiered graph autoencoders for deep learning, each molecular graph possesses tiered latent representations.
no code implementations • 21 Mar 2019 • Daniel T. Chang
Flat latent representations (node embeddings or graph embeddings) fail to represent groups or to support their use.
no code implementations • 11 Feb 2019 • Daniel T. Chang
Probabilistic generative deep learning for molecular design involves the discovery and design of new molecules, and the analysis of their structure, properties, and activities, by probabilistic generative models using the deep learning approach.
no code implementations • 26 Dec 2018 • Daniel T. Chang
Latent representations are the essence of deep generative models and determine their usefulness and power.
no code implementations • 15 Nov 2018 • Daniel T. Chang
Generative concept representations have three major advantages over discriminative ones: they can represent uncertainty, they support integration of learning and reasoning, and they are good for unsupervised and semi-supervised learning.
no code implementations • 5 Jun 2018 • Daniel T. Chang
Concepts are the foundation of human deep learning, understanding, and knowledge integration and transfer.