EuroRVVV17


Barcelona, Spain, 12–13 June 2017
Perceptual Experiments and Insights
  • A Crowdsourced Approach to Colormap Assessment
    Terece L. Turton, Colin Ware, Francesca Samsel, and David H. Rogers
  • Evaluating the Perceptual Uniformity of Color Sequences for Feature Discrimination
    Colin Ware, Terece L. Turton, Francesca Samsel, Roxana Bujack, and David H. Rogers
  • Where'd it go? How Geographic and Force-directed Layouts Affect Network Task Performance
    Scott A. Hale, Graham McNeill, and Jonathan Bright

Evaluation Guidelines
  • Guidelines and Recommendations for the Evaluation of New Visualization Techniques by Means of Experimental Studies
    Maria Luz, Kai Lawonn, and Christian Hansen
  • From a User Study to a Valid Claim: How to Test Your Hypothesis and Avoid Common Pitfalls
    Niels H. L. C. de Hoon, Elmar Eisemann, and Anna Vilanova

BibTeX (EuroRVVV17)
@inproceedings{10.2312:eurorv3.20171106,
  booktitle = {EuroVis Workshop on Reproducibility, Verification, and Validation in Visualization (EuroRV3)},
  editor    = {Kai Lawonn and Noeska Smit and Douglas Cunningham},
  title     = {{A Crowdsourced Approach to Colormap Assessment}},
  author    = {Turton, Terece L. and Ware, Colin and Samsel, Francesca and Rogers, David H.},
  year      = {2017},
  publisher = {The Eurographics Association},
  ISBN      = {978-3-03868-041-3},
  DOI       = {10.2312/eurorv3.20171106}
}

@inproceedings{10.2312:eurorv3.20171107,
  booktitle = {EuroVis Workshop on Reproducibility, Verification, and Validation in Visualization (EuroRV3)},
  editor    = {Kai Lawonn and Noeska Smit and Douglas Cunningham},
  title     = {{Evaluating the Perceptual Uniformity of Color Sequences for Feature Discrimination}},
  author    = {Ware, Colin and Turton, Terece L. and Samsel, Francesca and Bujack, Roxana and Rogers, David H.},
  year      = {2017},
  publisher = {The Eurographics Association},
  ISBN      = {978-3-03868-041-3},
  DOI       = {10.2312/eurorv3.20171107}
}

@inproceedings{10.2312:eurorv3.20171109,
  booktitle = {EuroVis Workshop on Reproducibility, Verification, and Validation in Visualization (EuroRV3)},
  editor    = {Kai Lawonn and Noeska Smit and Douglas Cunningham},
  title     = {{Guidelines and Recommendations for the Evaluation of New Visualization Techniques by Means of Experimental Studies}},
  author    = {Luz, Maria and Lawonn, Kai and Hansen, Christian},
  year      = {2017},
  publisher = {The Eurographics Association},
  ISBN      = {978-3-03868-041-3},
  DOI       = {10.2312/eurorv3.20171109}
}

@inproceedings{10.2312:eurorv3.20171108,
  booktitle = {EuroVis Workshop on Reproducibility, Verification, and Validation in Visualization (EuroRV3)},
  editor    = {Kai Lawonn and Noeska Smit and Douglas Cunningham},
  title     = {{Where'd it go? How Geographic and Force-directed Layouts Affect Network Task Performance}},
  author    = {Hale, Scott A. and McNeill, Graham and Bright, Jonathan},
  year      = {2017},
  publisher = {The Eurographics Association},
  ISBN      = {978-3-03868-041-3},
  DOI       = {10.2312/eurorv3.20171108}
}

@inproceedings{10.2312:eurorv3.20171110,
  booktitle = {EuroVis Workshop on Reproducibility, Verification, and Validation in Visualization (EuroRV3)},
  editor    = {Kai Lawonn and Noeska Smit and Douglas Cunningham},
  title     = {{From a User Study to a Valid Claim: How to Test Your Hypothesis and Avoid Common Pitfalls}},
  author    = {Hoon, Niels H. L. C. de and Eisemann, Elmar and Vilanova, Anna},
  year      = {2017},
  publisher = {The Eurographics Association},
  ISBN      = {978-3-03868-041-3},
  DOI       = {10.2312/eurorv3.20171110}
}


Recent Submissions

  • EuroRV3 2017: Frontmatter
    (The Eurographics Association, 2017) Lawonn, Kai; Smit, Noeska; Cunningham, Douglas
  • A Crowdsourced Approach to Colormap Assessment
    (The Eurographics Association, 2017) Turton, Terece L.; Ware, Colin; Samsel, Francesca; Rogers, David H.; edited by Kai Lawonn, Noeska Smit, and Douglas Cunningham
    Despite continual research and discussion on the perceptual effects of color in scientific visualization, psychophysical testing is often limited. In-person lab studies can be expensive and time-consuming, while results can be difficult to extrapolate from meticulously controlled laboratory conditions to the real world of the visualization user. We draw on lessons learned from the use of crowdsourced participant pools in the behavioral sciences and information visualization to apply a crowdsourced approach to a classic psychophysical experiment assessing the ability of a colormap to impart metric information. We use an online presentation analogous to the color key task from Ware's 1988 paper, Color Sequences for Univariate Maps, testing colormaps similar to those in the original paper along with contemporary colormap standards and new alternatives in the scientific visualization domain. We explore the issue of potential contamination from color vision deficient (CVD) participants and establish that perceptual color research can appropriately leverage a crowdsourced participant pool without significant CVD concerns. The updated version of the Ware color key task also provides a method to assess and compare colormaps.
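
    As a loose illustration of how a color key task of the kind described in this abstract can be scored, the Python sketch below compares the scalar value a participant reports for a displayed color against the value the colormap actually encoded. The trial data and the use of mean absolute error are illustrative assumptions, not the authors' protocol.

    # Illustrative sketch (not the authors' protocol): scoring a color-key
    # task in which each participant reports the scalar value they believe
    # a displayed colormap color encodes. All numbers are placeholders.
    import numpy as np

    # (true value encoded by the displayed color, value the participant reported)
    trials = [(0.10, 0.15), (0.35, 0.30), (0.50, 0.55), (0.80, 0.70), (0.95, 0.90)]

    true, reported = np.array(trials).T
    errors = np.abs(reported - true)
    # Lower mean absolute error suggests the colormap imparts metric
    # information more faithfully; comparing this score across colormaps
    # is one simple way to rank them.
    print(f"mean absolute error = {errors.mean():.3f}")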
  • Evaluating the Perceptual Uniformity of Color Sequences for Feature Discrimination
    (The Eurographics Association, 2017) Ware, Colin; Turton, Terece L.; Samsel, Francesca; Bujack, Roxana; Rogers, David H.; edited by Kai Lawonn, Noeska Smit, and Douglas Cunningham
    Probably the most common method for visualizing univariate data maps is pseudocoloring, and one of the most commonly cited requirements of a good colormap is that it be perceptually uniform, meaning that differences between adjacent colors in the sequence should be equally distinct. The practical value of uniformity is that features in the data are equally distinctive no matter where they lie in the colormap, but there are reasons for thinking that uniformity in terms of feature detection may not be achieved by current methods, which are based on the use of uniform color spaces. In this paper, we provide a new method for directly evaluating colormaps in terms of their capacity for feature resolution. We apply the method in a study using Amazon Mechanical Turk to evaluate seven colormaps. Among other findings, the results show that two new double-ended sequences have the highest discriminative power and good uniformity. The technique can be applied to the design of colormaps for uniformity and to the evaluation of colormaps through feature discrimination curves for differently sized features.
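
    For context on the uniformity criterion this abstract questions, here is a minimal Python sketch of the conventional uniform-color-space check (not the paper's feature-discrimination method): it samples a colormap, converts the samples to CIELAB, and reports how constant the adjacent-pair color differences are. The choice of matplotlib's 'viridis' is an illustrative assumption.

    # Check colormap uniformity via CIELAB differences between adjacent samples.
    import numpy as np
    from matplotlib import colormaps

    def srgb_to_lab(rgb):
        """Convert sRGB values in [0, 1] to CIELAB (D65 reference white)."""
        rgb = np.asarray(rgb, dtype=float)
        # Undo the sRGB gamma encoding.
        lin = np.where(rgb <= 0.04045, rgb / 12.92, ((rgb + 0.055) / 1.055) ** 2.4)
        # Linear RGB -> XYZ (sRGB/D65 matrix), normalized by the D65 white point.
        m = np.array([[0.4124564, 0.3575761, 0.1804375],
                      [0.2126729, 0.7151522, 0.0721750],
                      [0.0193339, 0.1191920, 0.9503041]])
        xyz = lin @ m.T / np.array([0.95047, 1.0, 1.08883])
        f = np.where(xyz > (6 / 29) ** 3, np.cbrt(xyz), xyz / (3 * (6 / 29) ** 2) + 4 / 29)
        return np.stack([116 * f[..., 1] - 16,           # L*
                         500 * (f[..., 0] - f[..., 1]),  # a*
                         200 * (f[..., 1] - f[..., 2])], # b*
                        axis=-1)

    samples = colormaps["viridis"](np.linspace(0, 1, 256))[:, :3]  # drop alpha
    de = np.linalg.norm(np.diff(srgb_to_lab(samples), axis=0), axis=1)  # CIE76 delta-E
    print(f"delta-E per step: mean={de.mean():.3f}, std={de.std():.3f}, "
          f"max/min={de.max() / de.min():.2f}")  # near-constant steps => uniform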
  • Guidelines and Recommendations for the Evaluation of New Visualization Techniques by Means of Experimental Studies
    (The Eurographics Association, 2017) Luz, Maria; Lawonn, Kai; Hansen, Christian; edited by Kai Lawonn, Noeska Smit, and Douglas Cunningham
    This paper addresses important issues in the evaluation of new visualization techniques. It describes the principles of quantitative research in general and presents the idea of experimental studies, which provide the basis for testing the hypothesis that a newly developed visualization solution is better than an older one. Moreover, the paper provides guidelines for the successful planning of experimental studies in terms of independent and dependent variables, participants, tasks, data collection, and statistical evaluation of the collected data. It describes how the results should be interpreted and reported in publications. Finally, the paper points out useful literature and thus contributes to a better understanding of how to evaluate new visualization techniques.
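
    As a minimal sketch of the kind of statistical evaluation such guidelines cover, the following Python snippet compares two hypothetical between-subjects conditions (independent variable: old vs. new technique) on task completion time (dependent variable) with Welch's t-test. All numbers are fabricated placeholders.

    import numpy as np
    from scipy import stats

    # Dependent variable: task completion time in seconds (fabricated data).
    old_technique = np.array([12.1, 14.3, 11.8, 15.2, 13.7, 12.9, 14.8, 13.1])
    new_technique = np.array([10.4, 11.2, 9.8, 12.1, 10.9, 11.5, 10.1, 11.8])

    # Welch's t-test: compares the group means without assuming equal variances.
    t, p = stats.ttest_ind(new_technique, old_technique, equal_var=False)

    # Reporting an effect size alongside the p-value helps readers judge
    # practical significance, not just statistical significance.
    pooled_sd = np.sqrt((old_technique.var(ddof=1) + new_technique.var(ddof=1)) / 2)
    d = (old_technique.mean() - new_technique.mean()) / pooled_sd
    print(f"t = {t:.2f}, p = {p:.4f}, Cohen's d = {d:.2f}")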
  • Where'd it go? How Geographic and Force-directed Layouts Affect Network Task Performance
    (The Eurographics Association, 2017) Hale, Scott A.; McNeill, Graham; Bright, Jonathan; edited by Kai Lawonn, Noeska Smit, and Douglas Cunningham
    When visualizing geospatial network data, it is possible to position nodes according to their geographic locations or to position them using standard network layout algorithms that ignore geographic location. Such data is increasingly common in interactive displays of Internet-connected sensor data, but network layouts that ignore geographic location data are rarely employed. We conducted a user experiment to compare the effects of geographic and force-directed network layouts on three common network tasks: locating a node, determining the path length between two nodes, and comparing the degree of two nodes. We found that the geographic layout was superior for locating a node but inferior for determining the path length between two nodes. The two layouts performed similarly when participants compared the degree of two nodes. We also tested a relaxed or pseudo-geographic layout created with multidimensional scaling and found that it performed as well as or better than the pure geographic layout on all tasks but remained inferior to the force-directed layout for the path-length task. We suggest that interactive displays of geospatial network data allow viewers to switch between geographic and force-directed layouts, although further research is needed to understand the extent to which viewers are able to choose the most appropriate layout for a given task.
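
    A minimal Python sketch of the three layout families compared above, using networkx and scikit-learn on an invented toy graph. The 50/50 blend of geographic and graph-theoretic distances fed to MDS is one plausible way to build a relaxed layout, not necessarily the paper's construction.

    import networkx as nx
    import numpy as np
    from sklearn.manifold import MDS

    # Toy geospatial graph: invented city coordinates and links.
    coords = {"A": (0.0, 0.0), "B": (1.0, 0.2), "C": (2.0, 1.5), "D": (0.5, 2.0)}
    G = nx.Graph([("A", "B"), ("B", "C"), ("C", "D"), ("A", "D")])
    nodes = list(G.nodes)

    # 1) Geographic layout: nodes pinned to their real-world positions.
    geo_layout = {n: coords[n] for n in nodes}

    # 2) Force-directed layout: ignores geography entirely.
    force_layout = nx.spring_layout(G, seed=42)

    # 3) Relaxed / pseudo-geographic layout: MDS over a blend of geographic
    #    and graph-theoretic distances (the 50/50 mix is an arbitrary choice).
    pts = np.array([coords[n] for n in nodes])
    geo_d = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
    graph_d = np.asarray(nx.floyd_warshall_numpy(G, nodelist=nodes), dtype=float)
    blend = 0.5 * geo_d / geo_d.max() + 0.5 * graph_d / graph_d.max()
    mds = MDS(n_components=2, dissimilarity="precomputed", random_state=42)
    pseudo_layout = dict(zip(nodes, map(tuple, mds.fit_transform(blend))))

    print(geo_layout, force_layout, pseudo_layout, sep="\n")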
  • From a User Study to a Valid Claim: How to Test Your Hypothesis and Avoid Common Pitfalls
    (The Eurographics Association, 2017) Hoon, Niels H. L. C. de; Eisemann, Elmar; Vilanova, Anna; edited by Kai Lawonn, Noeska Smit, and Douglas Cunningham
    The evaluation of visualization methods or designs often relies on user studies. Apart from the difficulties involved in the design of the study itself, the existing mechanisms for drawing sound conclusions are often unclear. In this work, we review and summarize some of the common statistical techniques that can be used to validate a claim in the scenarios commonly present in visualization user studies, i.e., hypothesis testing. Usually, the number of participants is small and the mean and variance of the underlying distribution are not known, so we focus on techniques that are adequate within these limitations. Our aim in this paper is to clarify the goals and limitations of hypothesis testing from a user-study perspective for the visualization community. We provide an overview of the most common mistakes made when testing a hypothesis that can lead to erroneous claims, and we present strategies to avoid them.
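
    One concrete pitfall of the kind this paper warns about is running several tests without correcting for multiple comparisons, which inflates the chance of a false positive. The Python sketch below applies the standard Holm-Bonferroni procedure to hypothetical Welch t-tests; all data are fabricated placeholders.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    baseline = rng.normal(10.0, 2.0, size=12)
    # Three technique variants, each tested against the same baseline.
    variants = {name: rng.normal(10.0, 2.0, size=12) for name in ("A", "B", "C")}

    alpha = 0.05
    pvals = {name: stats.ttest_ind(x, baseline, equal_var=False).pvalue
             for name, x in variants.items()}

    # Holm-Bonferroni: visit p-values in ascending order, testing the k-th
    # smallest against alpha / (m - k); stop at the first non-rejection.
    m = len(pvals)
    for k, (name, p) in enumerate(sorted(pvals.items(), key=lambda kv: kv[1])):
        threshold = alpha / (m - k)
        if p < threshold:
            print(f"{name}: p = {p:.3f} < {threshold:.3f} -> reject H0")
        else:
            print(f"{name}: p = {p:.3f} >= {threshold:.3f} -> stop, no rejection")
            break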