no code implementations • 10 Aug 2022 • Brian Lester, Joshua Yurtsever, Siamak Shakeri, Noah Constant
Parameter-efficient methods are able to use a single frozen pre-trained large language model (LLM) to perform many tasks by learning task-specific soft prompts that modulate model behavior when concatenated to the input text.
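The core mechanism described above — a frozen model whose behavior is steered by a small set of trainable prompt embeddings prepended to the input — can be sketched minimally. This is an illustrative sketch, not the paper's implementation; the array shapes and names are assumptions, and the frozen LLM is replaced by a stand-in embedding matrix.

```python
import numpy as np

rng = np.random.default_rng(0)
d_model, prompt_len, seq_len = 16, 4, 10

# Stand-in for the frozen LLM's embedding of the input text
# (in practice these come from the pre-trained model and are never updated).
input_embeds = rng.normal(size=(seq_len, d_model))

# Task-specific soft prompt: in prompt tuning, these continuous vectors are
# the ONLY trainable parameters; one such prompt is learned per task.
soft_prompt = rng.normal(size=(prompt_len, d_model))

# The soft prompt is concatenated in front of the input embeddings, and the
# frozen model processes the combined sequence.
model_input = np.concatenate([soft_prompt, input_embeds], axis=0)
print(model_input.shape)  # (14, 16): prompt_len + seq_len tokens of width d_model
```

Because only `soft_prompt` receives gradients, many tasks can share one frozen model, each carrying only a few thousand task-specific parameters.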
no code implementations • NeurIPS Workshop Deep Inverse 2019 • Kristina Monakhova, Joshua Yurtsever, Grace Kuo, Nick Antipa, Kyrollos Yanny, Laura Waller
Various reconstruction methods are explored, ranging from classic iterative approaches based on the physical imaging model to deep learned methods with many learned parameters.
no code implementations • 30 Aug 2019 • Kristina Monakhova, Joshua Yurtsever, Grace Kuo, Nick Antipa, Kyrollos Yanny, Laura Waller
In this work, we address these limitations using a bounded-compute, trainable neural network to reconstruct the image.
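A bounded-compute reconstruction can be illustrated by unrolling a fixed number of gradient steps on a least-squares imaging model. This is a hedged sketch under assumed names and shapes, not the paper's architecture: `A` stands in for the physical imaging model, and the per-step sizes in `steps` are the quantities that would be made trainable.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 32
A = rng.normal(size=(n, n)) / np.sqrt(n)  # stand-in for the physical imaging model
x_true = rng.normal(size=n)               # hypothetical ground-truth scene
y = A @ x_true                            # simulated measurement

# Bounded compute: the iteration count is fixed up front, so inference cost
# is known in advance, unlike classic solvers run to convergence.
n_steps = 20
steps = np.full(n_steps, 0.1)             # in a trainable version, learned from data

x = np.zeros(n)
for t in range(n_steps):
    grad = A.T @ (A @ x - y)              # gradient of the data-fidelity term
    x = x - steps[t] * grad               # one unrolled iteration

rel_err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
```

Making `steps` (and optionally per-iteration filters) learnable parameters turns this fixed-depth loop into a small trainable network that retains the physical model's structure.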