Title: Photo-Guided Exploration of Volume Data Features
Authors: Raji, Mohammad; Hota, Alok; Sisneros, Robert; Messmer, Peter; Huang, Jian
Editors: Alexandru Telea and Janine Bennett
Date: 2017-06-12
Year: 2017
ISBN: 978-3-03868-034-5
ISSN: 1727-348X
DOI: 10.2312/pgv.20171091 (http://dx.doi.org/10.2312/pgv.20171091)
Handle: https://diglib.eg.org:443/handle/10.2312/pgv20171091
Pages: 31-39
CCS Concepts: Human-centered computing -> Scientific visualization; Computing methodologies -> Machine learning

Abstract: In this work, we pose the question of whether, by considering qualitative information such as a sample target image as input, one can produce a rendered image of scientific data that is similar to the target. The algorithm resulting from our research allows one to ask whether features like those in the target image exist in a given dataset. In that way, our method is one of imagery query or reverse engineering, as opposed to manual parameter tweaking of the full visualization pipeline. For target images, we can use real-world photographs of physical phenomena. Our method leverages deep neural networks and evolutionary optimization. Using a trained similarity function that measures the difference between renderings of a phenomenon and real-world photographs, our method optimizes rendering parameters. We demonstrate the efficacy of our method using a superstorm simulation dataset and images found online. We also discuss a parallel implementation of our method, which was run on NCSA's Blue Waters.
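
The abstract describes optimizing rendering parameters with an evolutionary search guided by a learned similarity measure between renderings and a target photograph. The following is a minimal illustrative sketch of that general idea only; the render(), similarity_to_target(), and evolve() functions are hypothetical stand-ins and do not reproduce the paper's actual renderer, trained network, or parallel Blue Waters implementation.

```python
import numpy as np

def render(params):
    # Hypothetical stand-in for a volume renderer driven by a parameter vector.
    rng = np.random.default_rng(0)
    base = rng.random((16, 16))
    return np.clip(base * params[0] + params[1], 0.0, 1.0)

def similarity_to_target(image, target):
    # Placeholder for the paper's trained deep-network similarity function
    # (here just mean squared error; lower means more similar).
    return float(np.mean((image - target) ** 2))

def evolve(target, dim=2, pop=16, generations=50, sigma=0.1):
    """Simple (1+lambda) evolution strategy over rendering parameters."""
    rng = np.random.default_rng(42)
    best = rng.random(dim)
    best_score = similarity_to_target(render(best), target)
    for _ in range(generations):
        # Mutate the current best parameters and keep any improvement.
        candidates = best + sigma * rng.standard_normal((pop, dim))
        for cand in candidates:
            score = similarity_to_target(render(cand), target)
            if score < best_score:
                best, best_score = cand, score
    return best, best_score

if __name__ == "__main__":
    target_photo = np.full((16, 16), 0.5)  # stand-in for a real-world photograph
    params, score = evolve(target_photo)
    print("best parameters:", params, "score:", score)
```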