Interpreting medical images is a difficult task. Beyond the natural variability between patients, some imaging modalities suffer from large variance in viewing angle and high levels of image artifacts, and they generate large volumes of data that a radiologist must read for each case. These difficulties manifest in high diagnostic error rates. We aim to address these issues by using machine learning to develop education tools for medical trainees. Our goal is to develop three types of automatic feedback for trainees: 1) highlighting which parts of an image are the most important to pay attention to, 2) visualizing what the machine learning model sees when it breaks down an image to make a diagnosis, and 3) explaining the diagnostic decision by describing how the various clinical and visual features weigh against one another.
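As a minimal sketch of the first feedback type (highlighting the most important image regions), one common approach is occlusion-based saliency: mask each region of the image in turn and record how much the model's score drops. The `toy_model` and lesion image below are illustrative placeholders, not part of our system.

```python
import numpy as np

def occlusion_saliency(model, image, patch=4):
    """Saliency map from occlusion: for each patch, the drop in the
    model's score when that patch is zeroed out."""
    base = model(image)
    h, w = image.shape
    saliency = np.zeros((h // patch, w // patch))
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            occluded = image.copy()
            occluded[i:i + patch, j:j + patch] = 0.0
            saliency[i // patch, j // patch] = base - model(occluded)
    return saliency

# Toy "model": scores the mean brightness of the upper-left quadrant.
def toy_model(img):
    return img[:8, :8].mean()

img = np.zeros((16, 16))
img[:8, :8] = 1.0  # bright "lesion" in the upper-left
sal = occlusion_saliency(toy_model, img)
# Patches over the lesion get positive saliency; the rest stay at zero.
```

Regions with high saliency are exactly those a trainee should attend to first; gradient-based methods give a finer-grained map but require access to model internals, whereas occlusion treats the model as a black box.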