Exploring the effects of non-monetary reimbursement for participants in HCI research
DOI: https://doi.org/10.15346/hc.v4i1.1

Keywords: Participation, Citizen Science, Quantified Self, Online Experiments, Reimbursement

Abstract
When running experiments within the field of Human-Computer Interaction (HCI), it is common practice to ask participants to come to a specified lab location and to reimburse them monetarily for their time and travel costs. This, however, is not the only means by which to encourage participation in scientific study. Citizen science projects, which encourage the public to become involved in scientific research, have had great success in getting people to act as sensors to collect data, or to volunteer their idle computer or brain power to classify large data sets, across a broad range of fields including biology, cosmology, and physical and environmental science. This is often done without the expectation of payment. Additionally, data collection need not be done on behalf of an external researcher: the Quantified Self (QS) movement allows people to reflect on data they have collected about themselves. This, too, is a form of non-reimbursed data collection. Here we investigate whether citizen HCI scientists and those interested in personal data produce results as reliable as those of participants in more traditional lab-based studies. Through six studies, we explore how participation rates and data quality are affected by recruiting participants without monetary reimbursement: either by providing participants with data about themselves as a reward (a QS approach), or by simply requesting help with no extrinsic reward (as in citizen science projects). We show that people are indeed willing to take part in online HCI research in the absence of extrinsic monetary reward, and that data generated by participants who take part for selfless reasons, rather than for monetary reward, can be of as high quality as data gathered in the lab, and may in addition be of higher quality than data generated by participants given monetary reimbursement online.
This suggests that large HCI experiments could be run online in the future without incurring equally large reimbursement costs, while also opening up the possibility of running experiments in environments outside the lab.
License
Authors who publish with this journal agree to the following terms:
- Authors retain copyright and grant the journal right of first publication, with the work simultaneously licensed under a Creative Commons Attribution License that allows others to share the work with an acknowledgement of the work's authorship and initial publication in this journal.
- Authors are able to enter into separate, additional contractual arrangements for the non-exclusive distribution of the journal's published version of the work (e.g., post it to an institutional repository or publish it in a book), with an acknowledgement of its initial publication in this journal.
- Authors are permitted and encouraged to post their work online (e.g., in institutional repositories or on their website) prior to and during the submission process, as it can lead to productive exchanges, as well as earlier and greater citation of published work (see The Effect of Open Access).