no code implementations • NeurIPS Deep Inverse Workshop 2021 • Burhaneddin Yaman, Seyed Amir Hossein Hosseini, Mehmet Akcakaya
When models pre-trained on a database are available, we show that the proposed approach can be adapted for subject-specific fine-tuning via transfer learning to further improve reconstruction quality.
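As a rough illustration of what such subject-specific fine-tuning could look like, here is a minimal PyTorch sketch; it is not the paper's implementation. The `finetune_subject` name, the model interface (masked k-space and mask in, k-space out), and the random k-space split are assumptions, loosely in the spirit of self-supervised k-space splitting.

```python
import torch

def finetune_subject(model, kspace, mask, n_steps=50, lr=1e-5, split=0.6, seed=0):
    """Hypothetical sketch: adapt a database-pretrained reconstruction
    network to one subject using only that subject's undersampled
    k-space (no fully sampled reference).

    kspace : acquired undersampled k-space of the subject
    mask   : binary sampling mask (1 where k-space was acquired)
    """
    g = torch.Generator().manual_seed(seed)
    # Split the acquired locations into a mask fed to the network and a
    # held-out mask used only in the loss (self-supervised splitting).
    rand = torch.rand(mask.shape, generator=g)
    train_mask = mask * (rand < split)
    loss_mask = mask * (rand >= split)

    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(n_steps):
        opt.zero_grad()
        # assumed interface: network maps masked k-space + mask to k-space
        pred_k = model(kspace * train_mask, train_mask)
        # normalized loss on the held-out acquired samples only
        loss = torch.norm(loss_mask * (pred_k - kspace)) / torch.norm(loss_mask * kspace)
        loss.backward()
        opt.step()
    return model
```

Because the loss is computed only on acquired-but-held-out k-space samples, no ground-truth image is needed for the adaptation.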
2 code implementations • 29 Apr 2020 • Arjun D. Desai, Francesco Caliva, Claudia Iriondo, Naji Khosravan, Aliasghar Mortazi, Sachin Jambawalikar, Drew Torigian, Jutta Ellermann, Mehmet Akcakaya, Ulas Bagci, Radhika Tibrewala, Io Flament, Matthew O'Brien, Sharmila Majumdar, Mathias Perslev, Akshay Pai, Christian Igel, Erik B. Dam, Sibaji Gaj, Mingrui Yang, Kunio Nakamura, Xiaojuan Li, Cem M. Deniz, Vladimir Juras, Ravinder Regatte, Garry E. Gold, Brian A. Hargreaves, Valentina Pedoia, Akshay S. Chaudhari
Purpose: To organize a knee MRI segmentation challenge for characterizing the semantic and clinical efficacy of automatic segmentation methods relevant for monitoring osteoarthritis progression.
no code implementations • 1 Apr 2019 • Florian Knoll, Kerstin Hammernik, Chi Zhang, Steen Moeller, Thomas Pock, Daniel K. Sodickson, Mehmet Akcakaya
Both linear and non-linear methods are covered, followed by a discussion of recent efforts to further improve parallel imaging using machine learning, and specifically using artificial neural networks.
1 code implementation • 23 Nov 2016 • Gang Wang, Liang Zhang, Georgios B. Giannakis, Mehmet Akcakaya, Jie Chen
Upon formulating sparse PR as an amplitude-based nonconvex optimization task, SPARTA works iteratively in two stages. In the first stage, the support of the underlying sparse signal is recovered using an analytically well-justified rule, and a sparse orthogonality-promoting initialization is then obtained via power iterations restricted to that support. In the second stage, the initialization is successively refined by means of hard-thresholding-based gradient-type iterations.
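As a rough illustration of that two-stage procedure, here is a minimal NumPy sketch; it is not the authors' implementation. The support-scoring rule and the hard-thresholded amplitude-flow updates follow the description above, while the truncation fraction, step size, and norm scaling are simplified heuristics, and SPARTA's per-iteration gradient truncation is omitted.

```python
import numpy as np

def sparta_sketch(A, psi, s, n_iter=200, n_power=30, mu=1.0, seed=0):
    """Simplified sketch of SPARTA-style sparse phase retrieval.

    A   : (m, n) real sensing matrix with rows a_i
    psi : (m,) amplitude measurements psi_i = |a_i^T x|
    s   : assumed sparsity level of x
    """
    m, n = A.shape

    # Stage 1a: support recovery -- score coordinate j by
    # Z_j = (1/m) * sum_i psi_i^2 * A_ij^2 and keep the s largest scores.
    Z = (psi ** 2) @ (A ** 2) / m
    support = np.sort(np.argsort(Z)[-s:])

    # Stage 1b: orthogonality-promoting initialization restricted to the
    # estimated support: power iterations on the rows with the largest
    # normalized measurements (a simplified truncation heuristic).
    As = A[:, support]
    ratio = psi / (np.linalg.norm(As, axis=1) + 1e-12)
    n_keep = max(m // 6, 1)
    keep = np.argsort(ratio)[-n_keep:]
    Y = As[keep].T @ As[keep] / n_keep
    v = np.random.default_rng(seed).standard_normal(s)
    for _ in range(n_power):
        v = Y @ v
        v /= np.linalg.norm(v)

    z = np.zeros(n)
    z[support] = v * np.sqrt((psi ** 2).mean())   # heuristic norm estimate

    # Stage 2: hard-thresholding-based gradient iterations on the
    # amplitude loss (1/2m) * sum_i (|a_i^T z| - psi_i)^2.
    for _ in range(n_iter):
        Az = A @ z
        grad = A.T @ (Az - psi * np.sign(Az)) / m
        z -= mu * grad
        z[np.argsort(np.abs(z))[:-s]] = 0.0       # keep s largest entries
    return z
```

The hard-thresholding step keeps every iterate s-sparse, which is what separates this scheme from plain (dense) amplitude-flow updates.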
Information Theory • Optimization and Control