However, the image reconstruction process within the MRI pipeline, which relies on complex hardware and the tuning of a large number of scanner parameters, is highly susceptible to noise of various forms, which manifests as arbitrary artifacts in the images.
no code implementations • 27 Jan 2023 • Raghav Singhal, Mukund Sudarshan, Anish Mahishi, Sri Kaushik, Luke Ginocchio, Angela Tong, Hersh Chandarana, Daniel K. Sodickson, Rajesh Ranganath, Sumit Chopra
We hypothesise that the disease classification task can be solved using a very small, tailored subset of k-space data relative to what image reconstruction requires.
Insufficient training data and severe class imbalance are often limiting factors when developing machine learning models for the classification of rare diseases.
A framework for training and evaluating AI models on a variety of openly available dialogue datasets.
While Truncated Back-Propagation through Time (BPTT) is the most popular approach to training Recurrent Neural Networks (RNNs), it suffers from being inherently sequential (making parallelization difficult) and from truncating gradient flow between distant time-steps.
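To make the truncation concrete, here is a minimal sketch (not the paper's method) of truncated BPTT on a toy scalar linear RNN, h_t = w · h_{t-1} + x_t: the backward pass simply stops propagating gradient more than `trunc` steps back, which is exactly the "detach the hidden state" trick. All function and variable names here are illustrative.

```python
def linear_rnn_grad(xs, w, trunc=None):
    """Gradient of the final hidden state h_T w.r.t. the scalar weight w
    for the linear RNN h_t = w * h_{t-1} + x_t (h_0 = 0).
    With trunc=k, gradient flow is cut more than k steps back,
    i.e. truncated back-propagation through time."""
    T = len(xs)
    hs = [0.0]                      # forward pass: store all hidden states
    for x in xs:
        hs.append(w * hs[-1] + x)

    grad = 0.0
    upstream = 1.0                  # d h_T / d h_t, starting at t = T
    for t in range(T, 0, -1):
        if trunc is not None and T - t >= trunc:
            break                   # stop gradient: treat h_{t-1} as constant
        grad += upstream * hs[t - 1]  # local term: d h_t / d w = h_{t-1}
        upstream *= w               # chain through h_t = w * h_{t-1} + x_t
    return hs[-1], grad

xs = [1.0, 2.0, 3.0]
h_full, g_full = linear_rnn_grad(xs, w=0.5)           # full BPTT
h_tr, g_tr = linear_rnn_grad(xs, w=0.5, trunc=1)      # truncated to 1 step
```

With `w=0.5` the full gradient sums contributions from every time step, while `trunc=1` keeps only the most recent one, illustrating why truncation biases gradients between distant time steps.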
A good dialogue agent should be able to interact with users both by answering their questions and by asking its own, and, importantly, to learn from both types of interaction.
An important aspect of developing conversational agents is to give a bot the ability to improve through communicating with humans and to learn from the mistakes that it makes.
A long-term goal of machine learning is to build intelligent conversational agents.
Many natural language processing applications use language models to generate text.
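As a rough illustration of the idea (not any particular paper's model), a language model estimates the probability of the next token given context and generates text by repeatedly extending the sequence. The toy bigram model below, with a hypothetical corpus and greedy decoding, shows the mechanism:

```python
from collections import Counter, defaultdict

# Hypothetical toy corpus; real language models train on far larger data.
corpus = "the cat sat on the mat . the cat ran .".split()

# Count bigrams to estimate P(next word | current word).
bigrams = defaultdict(Counter)
for cur, nxt in zip(corpus, corpus[1:]):
    bigrams[cur][nxt] += 1

def generate(start, steps):
    """Greedy decoding: always append the most frequent continuation."""
    out = [start]
    for _ in range(steps):
        candidates = bigrams[out[-1]].most_common(1)
        if not candidates:
            break                   # no observed continuation
        out.append(candidates[0][0])
    return " ".join(out)
```

Replacing the greedy `most_common` step with sampling from the conditional distribution yields the stochastic generation that most NLP applications actually use.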
Ranked #14 on Machine Translation on IWSLT2015 German-English
We introduce a new test of how well language models capture meaning in children's books.
Summarization based on text extraction is inherently limited, but generation-style abstractive methods have proven challenging to build.
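The extractive limitation mentioned above comes from the fact that such systems can only copy whole sentences out of the source. A minimal sketch (using a crude word-frequency heuristic, not any published scoring model) makes the copy-only constraint visible:

```python
import re
from collections import Counter

def extractive_summary(text, k=1):
    """Score each sentence by summed document-level word frequency
    (a simple term-frequency heuristic) and return the top-k sentences
    verbatim -- extraction can select but never rephrase."""
    sentences = [s.strip()
                 for s in re.split(r"(?<=[.!?])\s+", text.strip())
                 if s.strip()]
    freq = Counter(re.findall(r"\w+", text.lower()))
    scored = sorted(
        sentences,
        key=lambda s: sum(freq[w] for w in re.findall(r"\w+", s.lower())),
        reverse=True,
    )
    return scored[:k]
```

An abstractive system, by contrast, would generate new wording, which is what makes it harder to build.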
Ranked #1 on Extractive Text Summarization on DUC 2004 Task 1
Training large-scale question answering systems is complicated because training sources usually cover a small portion of the range of possible questions.
Ranked #1 on Question Answering on WebQuestions (F1 metric)
One long-term goal of machine learning research is to produce methods that are applicable to reasoning and natural language, in particular building an intelligent dialogue agent.
In this paper, we show that learning longer term patterns in real data, such as in natural language, is perfectly possible using gradient descent.
We propose a strong baseline model for unsupervised feature learning using video data.
Training our system on pairs of questions and structured representations of their answers, together with pairs of question paraphrases, yields competitive results on a standard benchmark from the literature.
Ranked #2 on Question Answering on WebQuestions (F1 metric)