Citizen Science: Theory and Practice (Jul 2019)

Using iNaturalist in a Coverboard Protocol to Measure Data Quality: Suggestions for Project Design

  • Julie Wittmann,
  • Derek Girman,
  • Daniel Crocker

DOI
https://doi.org/10.5334/cstp.131
Journal volume & issue
Vol. 4, no. 1

Abstract


We evaluated the accuracy of data records generated by citizen scientists using iNaturalist within a coverboard sampling scheme, a common method for detecting herpetofauna, involving 17 species of amphibians and reptiles. We trained and observed 10 volunteers working over an eight-month period at a study site in Sonoma County, California, USA. Volunteers successfully uploaded a total of 1,169 observations to the iNaturalist database, including a new locality for the Black Salamander. Volunteers were generally successful in collecting and uploading data for verification but had more difficulty detecting small, camouflaged salamanders and photographing quick-moving lizards. Errors associated with uploading smartphone data declined with volunteer experience. We evaluated all observations and determined that 82% could be verified to the species level from the photograph. Forty-five percent of the observations reached iNaturalist's “research grade” status through its crowdsourcing tools. We independently evaluated the accuracy of these research-grade observations and found them to be 100% accurate to the species level. Several factors (herpetofauna group, species, and photograph quality) influenced whether observations were elevated to research grade. Volunteer screening and training protocols emphasizing smartphones with adequate battery and data-storage capacity eliminated some issues. Our results suggest that a variety of factors can help scientists and resource managers improve volunteer experiences and data quality in citizen science biodiversity programs.

Keywords