Application of Machine Learning for Automating Behavioral Tracking of Captive Bornean Orangutans (<i>Pongo pygmaeus</i>)
Frej Gammelgård,
Jonas Nielsen,
Emilia J. Nielsen,
Malthe G. Hansen,
Aage K. Olsen Alstrup,
Juan O. Perea-García,
Trine H. Jensen,
Cino Pertoldi
Affiliations
Frej Gammelgård
Department of Chemistry and Bioscience, Aalborg University, Frederik Bajers Vej 7H, 9220 Aalborg, Denmark
Jonas Nielsen
Department of Chemistry and Bioscience, Aalborg University, Frederik Bajers Vej 7H, 9220 Aalborg, Denmark
Emilia J. Nielsen
Department of Chemistry and Bioscience, Aalborg University, Frederik Bajers Vej 7H, 9220 Aalborg, Denmark
Malthe G. Hansen
Department of Chemistry and Bioscience, Aalborg University, Frederik Bajers Vej 7H, 9220 Aalborg, Denmark
Aage K. Olsen Alstrup
Department of Nuclear Medicine & PET, Aarhus University Hospital and Department of Clinical Medicine, Aarhus University, Palle Juul Jensens Boulevard 99, 8000 Aarhus, Denmark
Juan O. Perea-García
Faculty of Social and Behavioural Sciences, Leiden University, 2333 Leiden, The Netherlands
Trine H. Jensen
Department of Chemistry and Bioscience, Aalborg University, Frederik Bajers Vej 7H, 9220 Aalborg, Denmark
Cino Pertoldi
Department of Chemistry and Bioscience, Aalborg University, Frederik Bajers Vej 7H, 9220 Aalborg, Denmark
Abstract
This article applies object detection to CCTV video material to investigate the potential of machine learning for automating behavioral tracking. The study is based on video recordings of two captive Bornean orangutans and their behavior. From a 2 min training video containing the selected behaviors, 334 images were extracted and labeled using RectLabel. The labeled training material was then used to build an object detection model in Create ML. Object detection showed potential for automating tracking, particularly of locomotion, while filtering out false positives. Potential improvements to this tool are addressed, and future implementations should take them into consideration; these include using sufficiently diverse training material and limiting training iterations to avoid overfitting the model.
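As a rough illustration of the sampling density implied by the abstract (assuming, for the sake of the example, that frames were sampled uniformly across the video), extracting 334 labeled images from a 2 min training video corresponds to just under three frames per second:

```python
# Hypothetical back-of-the-envelope calculation; the uniform-sampling
# assumption and variable names are illustrative, not from the paper.
video_duration_s = 120   # 2 min training video (stated in the abstract)
n_frames = 334           # labeled images extracted (stated in the abstract)

interval_s = video_duration_s / n_frames  # seconds between sampled frames
fps = n_frames / video_duration_s         # effective sampling rate

print(f"~{interval_s:.2f} s between frames (~{fps:.1f} frames/s)")
```

A sampling rate on this order is relevant when judging how well the training set covers fast behaviors such as locomotion versus slower, sustained postures.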