POnline: An Online Pupil Annotation Tool Employing Crowd-sourcing and Engagement Mechanisms

Authors

David Gil de Gómez Pérez, Roman Bednarik

DOI:

https://doi.org/10.15346/hc.v6i1.9

Keywords:

Gaze Tracking, Eye Tracking, Pupil Tracking, Annotation, Crowd-sourcing, Collaboration, Wisdom of the Crowd

Abstract

Pupil center and pupil contour are two of the most important features of the eye image used in video-based eye tracking. Well-annotated databases are needed to benchmark existing and new pupil detection and gaze estimation algorithms. Unfortunately, creating such a data set is costly and requires considerable effort, including manual work by the annotators. In addition, the reliability of manual annotations is hard to establish when the number of annotators is small. To facilitate progress in gaze-tracking algorithm research, we created an online pupil annotation tool that engages many users through gamification and harnesses the power of the crowd to create reliable annotations (Artstein and Poesio, 2005). We describe the tool and the mechanisms employed, and report results on the annotation of a publicly available data set. Finally, we demonstrate an example use of the new high-quality annotations in a comparison of two state-of-the-art pupil-center detection algorithms.
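
To illustrate the wisdom-of-the-crowd mechanism the abstract refers to, below is a minimal Python sketch of one plausible aggregation rule: taking the coordinate-wise median of several annotators' pupil-center clicks. The click values and the choice of median are illustrative assumptions, not the tool's actual pipeline.

    import numpy as np

    # Hypothetical (x, y) pupil-center clicks from five crowd annotators
    # for a single eye image; the values are made up for illustration.
    clicks = np.array([
        [312.0, 240.5],
        [310.5, 242.0],
        [355.0, 198.0],  # a careless or adversarial click (outlier)
        [311.8, 241.2],
        [313.1, 239.9],
    ])

    # The coordinate-wise median is robust to a minority of bad clicks;
    # the mean, by contrast, is pulled noticeably toward the outlier.
    center_median = np.median(clicks, axis=0)  # -> [312.0, 240.5]
    center_mean = clicks.mean(axis=0)          # -> [320.48, 232.32]

    print("median estimate:", center_median)
    print("mean estimate:  ", center_mean)

Under this toy rule, adding more honest annotators tightens the estimate around the true center, which is the intuition behind the observation of Artstein and Poesio (2005) that bias decreases with the number of annotators.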

Author Biography

David Gil de Gómez Pérez, University of Eastern Finland

University of Eastern Finland, School of Computing. Junior Researcher (Nuorempi Tutkija).

References

Alexa Internet Inc. (2016). reddit.com Site Overview. http://www.alexa.com/siteinfo/reddit.com. Accessed: 2016-11-30.

Allahbakhsh, M, Benatallah, B, Ignjatovic, A, Motahari-Nezhad, H. R, Bertino, E, and Dustdar, S. (2013). Quality control in crowdsourcing systems: Issues and directions. IEEE Internet Computing 17, 2 (2013), 76–81.

Artstein, R and Poesio, M. (2005). Bias decreases in proportion to the number of annotators. In Proceedings of FG-MoL 2005: The 10th Conference on Formal Grammar and the 9th Meeting on Mathematics of Language. 139.

de Greef, T, Lafeber, H, van Oostendorp, H, and Lindenberg, J. (2009). Eye movement as indicators of mental workload to trigger adaptive automation. Foundations of augmented cognition. Neuroergonomics and operational neuroscience (2009), 219–228.

Duchowski, A. T. (2007). Eye tracking methodology: Theory and practice. Springer.

Fuhl, W, Kübler, T, Sippel, K, Rosenstiel, W, and Kasneci, E. (2015). Excuse: Robust pupil detection in real-world scenarios. In International Conference on Computer Analysis of Images and Patterns. Springer, 39–51.

Fuhl, W, Santini, T. C, Kübler, T, and Kasneci, E. (2016). Else: Ellipse selection for robust pupil detection in real-world environments. In Proceedings of the Ninth Biennial ACM Symposium on Eye Tracking Research & Applications. ACM, 123–130.

Garau, M, Slater, M, Vinayagamoorthy, V, Brogni, A, Steed, A, and Sasse, M. A. (2003). The impact of avatar realism and eye gaze control on perceived quality of communication in a shared immersive virtual environment. In Proceedings of the SIGCHI conference on Human factors in computing systems. ACM, 529–536.

Granka, L. A, Joachims, T, and Gay, G. (2004). Eye-tracking analysis of user behavior in WWW search. In Proceedings of the 27th annual international ACM SIGIR conference on Research and development in information retrieval. ACM, 478–479.

Greenberg, C. S, Bansé, D, Doddington, G. R, Garcia-Romero, D, Godfrey, J. J, Kinnunen, T, Martin, A. F, McCree, A, Przybocki, M, and Reynolds, D. A. (2014). The NIST 2014 speaker recognition i-vector machine learning challenge. In Odyssey: The Speaker and Language Recognition Workshop.

Hansen, D. W and Ji, Q. (2010). In the eye of the beholder: A survey of models for eyes and gaze. Pattern Analysis and Machine Intelligence, IEEE Transactions on 32, 3 (2010), 478–500.

Hosseini, M, Araabi, B, and Soltanian-Zadeh, H. (2010). Pigment Melanin: Pattern for Iris Recognition. Instrumentation and Measurement, IEEE Transactions on 59, 4 (April 2010), 792–804. DOI:http://dx.doi.org/10.1109/TIM.2009.2037996

Kloetzer, L, Schneider, D, Jennett, C, Iacovides, I, Eveleigh, A, Cox, A, and Gold, M. (2014). Learning by volunteer computing, thinking and gaming: What and how are volunteers learning by participating in Virtual Citizen Science? Changing Configurations of Adult Education in Transitional Times (2014), 73.

Koh, J, Kim, Y.-G, Butler, B, and Bock, G.-W. (2007). Encouraging Participation in Virtual Communities. Commun. ACM 50, 2 (Feb. 2007), 68–73. DOI:http://dx.doi.org/10.1145/1216016.1216023

Kraut, R. E and Resnick, P. (2011). Encouraging contribution to online communities. Building successful online communities: Evidence- based social design (2011), 21–76.

Law, B, Atkins, M. S, Kirkpatrick, A. E, and Lomax, A. J. (2004). Eye gaze patterns differentiate novice and experts in a virtual laparoscopic surgery training environment. In Proceedings of the 2004 symposium on Eye tracking research & applications. ACM, 41–48.

Mansouryar, M, Steil, J, Sugano, Y, and Bulling, A. (2016). 3D gaze estimation from 2D pupil positions on monocular head-mounted eye trackers. In Proceedings of the Ninth Biennial ACM Symposium on Eye Tracking Research & Applications. ACM, 197–200.

Nakatsu, R. T, Grossman, E. B, and Iacovou, C. L. (2014). A taxonomy of crowdsourcing based on task complexity. Journal of Information Science 40, 6 (2014), 823–834. DOI:http://dx.doi.org/10.1177/0165551514550140

Przybocki, M, Peterson, K, Bronsart, S, and Sanders, G. (2009). The NIST 2008 Metrics for machine translation challenge-overview, methodology, metrics, and results. Machine Translation 23, 2-3 (2009), 71–103.

Raddick, M. J, Bracey, G, Gay, P. L, Lintott, C. J, Murray, P, Schawinski, K, Szalay, A. S, and Vandenberg, J. (2010). Galaxy Zoo: Exploring the Motivations of Citizen Science Volunteers. Astronomy Education Review 9, 1 (2010), 010103. DOI:http://dx.doi.org/10.3847/AER2009036

Russell, B. C, Torralba, A, Murphy, K. P, and Freeman, W. T. (2008). LabelMe: A Database and Web-Based Tool for Image Annotation. International Journal of Computer Vision 77, 1 (2008), 157–173. DOI:http://dx.doi.org/10.1007/s11263-007-0090-8

Swirski, L, Bulling, A, and Dodgson, N. (2012). Robust real-time pupil tracking in highly off-axis images. In Proceedings of the Symposium on Eye Tracking Research and Applications. ACM, 173–176.

Tonsen, M, Zhang, X, Sugano, Y, and Bulling, A. (2016). Labelled pupils in the wild: a dataset for studying pupil detection in unconstrained environments. In Proceedings of the Ninth Biennial ACM Symposium on Eye Tracking Research & Applications. ACM, 139–142.

Von Ahn, L and Dabbish, L. (2004). Labeling images with a computer game. In Proceedings of the SIGCHI conference on Human factors in computing systems. ACM, 319–326.

Wood, E, Baltrušaitis, T, Morency, L.-P, Robinson, P, and Bulling, A. (2016). Learning an appearance-based gaze estimator from one million synthesised images. In Proceedings of the Ninth Biennial ACM Symposium on Eye Tracking Research & Applications. ACM, 131–138.

Published

2019-12-10

How to Cite

Gil de Gómez Pérez, D., & Bednarik, R. (2019). POnline: An Online Pupil Annotation Tool Employing Crowd-sourcing and Engagement Mechanisms. Human Computation, 6(1), 176-191. https://doi.org/10.15346/hc.v6i1.9

Issue

Vol. 6 No. 1 (2019)

Section

Research