no code implementations • 5 Oct 2023 • Jason Holmes, Lian Zhang, Yuzhen Ding, Hongying Feng, Zhengliang Liu, Tianming Liu, William W. Wong, Sujay A. Vora, Jonathan B. Ashman, Wei Liu
Conclusions: Given GPT-4's accuracy in re-labeling the structure names of both target volumes and normal tissues as presented in this work, LLMs are poised to become the preferred method for standardizing structure names in radiation oncology, especially as LLM capabilities continue to advance rapidly.
no code implementations • 21 Apr 2023 • Yuzhen Ding, Hongying Feng, Yunze Yang, Jason Holmes, Zhengliang Liu, David Liu, William W. Wong, Nathan Y. Yu, Terence T. Sio, Steven E. Schild, Baoxin Li, Wei Liu
Conclusion: A patient-specific vision-transformer-based network was developed and shown to be accurate and efficient in reconstructing 3D CT images from kV images.
no code implementations • 1 Apr 2023 • Jason Holmes, Zhengliang Liu, Lian Zhang, Yuzhen Ding, Terence T. Sio, Lisa A. McGee, Jonathan B. Ashman, Xiang Li, Tianming Liu, Jiajian Shen, Wei Liu
We present the first study to investigate Large Language Models (LLMs) in answering radiation oncology physics questions.
no code implementations • 20 Jul 2020 • Nupur Thakur, Yuzhen Ding, Baoxin Li
Though deep neural networks (DNNs) have shown superiority over other techniques in major fields like computer vision, natural language processing, and robotics, they have recently been proven vulnerable to adversarial attacks.
no code implementations • 20 Jul 2020 • Yuzhen Ding, Nupur Thakur, Baoxin Li
Research has shown that deep neural networks are vulnerable to malicious attacks, where adversarial images are crafted to trick a network into misclassification even though human eyes would assign them entirely different labels.
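As a toy illustration of the kind of attack described above (not the method studied in this paper), an FGSM-style perturbation can flip a simple linear classifier's decision while changing the input only slightly; the weights, input, and step size below are illustrative assumptions.

```python
import numpy as np

# Assumed linear classifier: predicts class "1" when w @ x > 0.
w = np.array([1.0, -2.0, 0.5])   # illustrative weights
x = np.array([0.2, -0.1, 0.4])   # clean input, correctly scored positive

eps = 0.5                        # small perturbation budget (assumed)
# Step against the gradient of the score w.r.t. the input (which is w itself
# for a linear model), perturbing each coordinate by at most eps.
x_adv = x - eps * np.sign(w)

clean_score = w @ x              # 0.6  -> class "1"
adv_score = w @ x_adv            # -1.15 -> decision flipped
print(clean_score, adv_score)
```

Each pixel (coordinate) moves by only `eps`, yet the accumulated effect on the score is large enough to change the classification, which is the core intuition behind adversarial images.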
no code implementations • 15 Jan 2020 • Yuzhen Ding, Baoxin Li
When applying a topic model, a relatively standard pre-processing step is to first build a vocabulary of frequent words.
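The pre-processing step mentioned above can be sketched in a few lines: count word occurrences across the corpus and keep only words above a frequency threshold. The toy corpus, whitespace tokenizer, and `min_count` threshold are illustrative assumptions, not details from the paper.

```python
from collections import Counter

# Toy corpus standing in for the documents fed to a topic model.
docs = [
    "topic models need a vocabulary of words",
    "frequent words enter the vocabulary",
    "infrequent words are dropped from the vocabulary",
]

# Count every token across the corpus (naive lowercase whitespace tokenizer).
counts = Counter(tok for doc in docs for tok in doc.lower().split())

min_count = 2  # assumed frequency threshold
vocab = sorted(w for w, c in counts.items() if c >= min_count)
print(vocab)
```

Real pipelines typically also drop stop words and very common words, but the frequency cutoff shown here is the standard starting point.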