Lifelong learning aims to learn a sequence of tasks without forgetting previously acquired knowledge.
The core idea of knowledge factorization (KF) lies in the modularization and assemblability of knowledge: given a pretrained network as input, KF decomposes it into several factor networks, each of which handles only a dedicated task and maintains the task-specific knowledge factorized from the source network.
The key construction of our approach is the nonparametric space-time intensity function, governed by a latent process.
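To make the notion of a latent-modulated space-time intensity concrete, here is a minimal toy sketch: a hand-picked smooth "latent" bump modulates a base rate over the unit space-time cube, and events are drawn by Lewis-Shedler thinning. All function names, shapes, and parameters here are hypothetical illustrations, not the paper's actual nonparametric construction.

```python
import math
import random

def latent(x, y, t):
    # Toy stand-in for a latent process: a smooth Gaussian bump
    # centered in the middle of the space-time cube.
    return math.exp(-((x - 0.5) ** 2 + (y - 0.5) ** 2 + (t - 0.5) ** 2) / 0.1)

def intensity(x, y, t, base=20.0):
    # Space-time intensity lambda(s, t): base rate times latent modulation.
    return base * latent(x, y, t)

def poisson_sample(rng, mu):
    # Knuth's algorithm for drawing a Poisson(mu) count.
    threshold = math.exp(-mu)
    k, p = 0, 1.0
    while True:
        k += 1
        p *= rng.random()
        if p <= threshold:
            return k - 1

def sample_events(base=20.0, seed=0):
    # Thinning: propose from a homogeneous process with rate `base` over the
    # unit cube (volume 1), accept each proposal with probability
    # intensity / base (valid since latent(.) <= 1, so base dominates).
    rng = random.Random(seed)
    n = poisson_sample(rng, base)
    events = []
    for _ in range(n):
        x, y, t = rng.random(), rng.random(), rng.random()
        if rng.random() < intensity(x, y, t, base) / base:
            events.append((x, y, t))
    return events

events = sample_events()
print(len(events), "events accepted")
```

The thinning construction is generic to inhomogeneous point processes; the actual model in the paper replaces the hand-picked bump with a learned latent process.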
1 code implementation • Meng Zhou, Zechen Li, Bowen Tan, Guangtao Zeng, Wenmian Yang, Xuehai He, Zeqian Ju, Subrato Chakravorty, Shu Chen, Xingyi Yang, Yichen Zhang, Qingyang Wu, Zhou Yu, Kun Xu, Eric Xing, Pengtao Xie
Training complex dialog generation models on small datasets bears a high risk of overfitting.
To the best of our knowledge, this is the first neural point process model that can jointly predict both the space and time of events.
Most 3D reconstruction methods can recover scene properties only up to a global scale ambiguity.
There has not been a clear understanding of what properties of data and tasks make one approach outperform the other.
To address this problem, we develop methods to generate view-consistent, high-fidelity, and high-resolution X-ray images from radiology reports to facilitate radiology training of medical students.
On these two datasets, we train several dialogue generation models based on Transformer, GPT, and BERT-GPT.
Moreover, these works require a large number of CT scans, which are difficult to obtain, to train accurate diagnosis models.
Using this dataset, we develop diagnosis methods based on multi-task learning and self-supervised learning that achieve an F1 of 0.90, an AUC of 0.98, and an accuracy of 0.89.
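For reference, the three reported metrics (F1, AUC, accuracy) can be computed for a binary classifier as sketched below. The labels and scores here are toy data invented for illustration, not from the paper's dataset.

```python
def accuracy(y_true, y_pred):
    # Fraction of predictions that match the labels.
    return sum(int(t == p) for t, p in zip(y_true, y_pred)) / len(y_true)

def f1(y_true, y_pred):
    # Harmonic mean of precision and recall for the positive class.
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

def auc(y_true, y_score):
    # Probability that a random positive is scored above a random negative
    # (ties count 0.5); equivalent to the Mann-Whitney U statistic.
    pos = [s for t, s in zip(y_true, y_score) if t == 1]
    neg = [s for t, s in zip(y_true, y_score) if t == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0 for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy example: 6 cases, thresholding scores at 0.5.
y_true = [1, 0, 1, 1, 0, 0]
y_score = [0.9, 0.2, 0.8, 0.4, 0.6, 0.1]
y_pred = [int(s >= 0.5) for s in y_score]
print(accuracy(y_true, y_pred), f1(y_true, y_pred), auc(y_true, y_score))
```

In practice one would use library implementations (e.g. scikit-learn's `f1_score`, `roc_auc_score`, and `accuracy_score`); the hand-rolled versions above just make the definitions explicit.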