Contained in:
Book Chapter

Multipoint vs slider: a protocol for experiments

  • Venera Tomaselli
  • Giulio Giacomo Cantone

Since the broad diffusion of computer-assisted survey tools (i.e. web surveys), a lively debate about innovative measurement scales has arisen among social scientists and practitioners. The implications are relevant for applied statistics and evaluation research, since traditional scales collect ordinal observations, whereas data from sliders can be interpreted as continuous. The literature, however, reports excessive completion times for slider tasks in web surveys. This experimental protocol is aimed at testing hypotheses on the accuracy in prediction and dispersion of estimates from anonymous participants who are recruited online and randomly assigned to tasks of recognition of shades of colour. The treatment variable is the scale format: a traditional 0-10 multipoint scale vs a 0-100 slider. Each shade has a unique parametrisation (true value), and participants have to guess the true value through the scale. These tasks are designed to recreate situations of uncertainty among participants while minimizing the subjective component of a perceptual assessment and maximizing information about scale-driven differences and biases. We propose to test for statistical differences across the treatment variable in: (i) mean absolute error from the true value; (ii) time of completion of the task. To correct biases due to variance in the number of tasks completed per participant, data about participants can be collected both through pre-task acceptance of web cookies and through post-task explicit questions.
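The comparison of mean absolute error across the two scale formats can be sketched as follows. This is a minimal illustration, not the study's analysis code: the true value, sample size, and simulated response distributions are all hypothetical assumptions, and the multipoint responses are rescaled to 0-100 so the two formats share a common metric.

```python
import random
import statistics

random.seed(42)

# Hypothetical true shade value on a 0-100 parametrisation (illustrative only)
TRUE_VALUE = 62.0
N = 200  # assumed number of completed tasks per condition

# Simulated multipoint responses: integers on 0-10, rescaled to 0-100
multipoint = [random.randint(0, 10) * 10.0 for _ in range(N)]
# Simulated slider responses: continuous on 0-100, centred near the true value
slider = [min(100.0, max(0.0, random.gauss(TRUE_VALUE, 12.0))) for _ in range(N)]

def mean_absolute_error(responses, true_value):
    """Mean absolute error of the guesses from the parametrised true value."""
    return statistics.fmean(abs(r - true_value) for r in responses)

mae_multipoint = mean_absolute_error(multipoint, TRUE_VALUE)
mae_slider = mean_absolute_error(slider, TRUE_VALUE)

# Welch's t statistic on the per-response absolute errors (no p-value here;
# in practice one would use e.g. scipy.stats.ttest_ind with equal_var=False)
err_m = [abs(r - TRUE_VALUE) for r in multipoint]
err_s = [abs(r - TRUE_VALUE) for r in slider]
welch_t = (statistics.fmean(err_m) - statistics.fmean(err_s)) / (
    (statistics.variance(err_m) / N + statistics.variance(err_s) / N) ** 0.5
)

print(f"MAE multipoint: {mae_multipoint:.2f}")
print(f"MAE slider: {mae_slider:.2f}")
print(f"Welch t: {welch_t:.2f}")
```

The same structure applies to the second outcome, time of completion: replace the response lists with per-task timings and compare the two conditions with the same test.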

  • Keywords: slider scales, colour recognition, web-survey design

Venera Tomaselli

University of Catania, Italy - ORCID: 0000-0002-2287-7343

Giulio Giacomo Cantone

University of Catania, Italy - ORCID: 0000-0001-7149-5213

Available formats: PDF, XML

Chapter Information

  • Chapter Title: Multipoint vs slider: a protocol for experiments
  • Authors: Venera Tomaselli, Giulio Giacomo Cantone
  • Language: English
  • DOI: 10.36253/978-88-5518-304-8.19
  • Peer Reviewed
  • Publication Year: 2021
  • Copyright Information: © 2021 Author(s)
  • Content License: CC BY 4.0
  • Metadata License: CC0 1.0

Bibliographic Information

  • Book Title: ASA 2021 Statistics and Information Systems for Policy Evaluation
  • Book Subtitle: Book of short papers of the opening conference
  • Editors: Bruno Bertaccini, Luigi Fabbris, Alessandra Petrucci
  • Peer Reviewed
  • Publication Year: 2021
  • Copyright Information: © 2021 Author(s)
  • Content License: CC BY 4.0
  • Metadata License: CC0 1.0
  • Publisher Name: Firenze University Press
  • DOI: 10.36253/978-88-5518-304-8
  • eISBN (pdf): 978-88-5518-304-8
  • eISBN (xml): 978-88-5518-305-5
  • Series Title: Proceedings e report
  • Series ISSN: 2704-601X
  • Series E-ISSN: 2704-5846

  • Fulltext downloads: 235
  • Views: 290
