Title: From a User Study to a Valid Claim: How to Test Your Hypothesis and Avoid Common Pitfalls
Authors: de Hoon, Niels H. L. C.; Eisemann, Elmar; Vilanova, Anna
Editors: Kai Lawonn, Noeska Smit, and Douglas Cunningham
Date: 2017-06-12
Year: 2017
ISBN: 978-3-03868-041-3
DOI: https://doi.org/10.2312/eurorv3.20171110
Handle: https://diglib.eg.org:443/handle/10.2312/eurorv320171110
Pages: 25-28
Classification: G.3 [Mathematics of Computing]: Probability and Statistics; Experimental Design

Abstract: The evaluation of visualization methods or designs often relies on user studies. Apart from the difficulties involved in designing the study itself, the mechanisms for drawing sound conclusions from it are often unclear. In this work, we review and summarize common statistical techniques for validating a claim in the scenarios typically encountered in visualization user studies, i.e., hypothesis testing. Usually, the number of participants is small and the mean and variance of the distribution are unknown; we therefore focus on techniques that are adequate under these limitations. Our aim in this paper is to clarify the goals and limitations of hypothesis testing from a user-study perspective, which can be of interest to the visualization community. We provide an overview of the most common mistakes made when testing a hypothesis that can lead to erroneous claims, and we present strategies to avoid them.
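The setting the abstract describes — few participants, unknown population mean and variance — is the classic domain of t-based tests. As a minimal sketch (not taken from the paper), the following computes Welch's t-statistic for two small independent samples using only the Python standard library; the data and design names are purely illustrative:

```python
# Sketch of Welch's t-statistic for two small samples with unknown variance.
# The sample data below are hypothetical, not results from the paper.
import math
import statistics

def welch_t(sample_a, sample_b):
    """Welch's t-statistic: (mean_a - mean_b) / sqrt(s_a^2/n_a + s_b^2/n_b)."""
    n_a, n_b = len(sample_a), len(sample_b)
    var_a = statistics.variance(sample_a)  # sample variance (n - 1 denominator)
    var_b = statistics.variance(sample_b)
    standard_error = math.sqrt(var_a / n_a + var_b / n_b)
    return (statistics.mean(sample_a) - statistics.mean(sample_b)) / standard_error

# Hypothetical task-completion times (seconds) for two visualization designs.
design_a = [12.1, 11.4, 13.0, 12.7, 11.9, 12.3]
design_b = [13.5, 14.1, 12.9, 13.8, 14.4, 13.2]

t = welch_t(design_a, design_b)
print(f"t = {t:.3f}")
```

To turn the statistic into a decision, |t| would be compared against the critical value of the t-distribution (with Welch–Satterthwaite degrees of freedom) at a significance level fixed before the data are seen — choosing the threshold after looking at the results is exactly the kind of pitfall the paper warns against.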