PLoS ONE (Jan 2020)
Transforming assessment of speech in children with cleft palate via online crowdsourcing.
Abstract
OBJECTIVE: Speech intelligibility is fundamental to social interaction and is a critical surgical outcome in patients with cleft palate. Online crowdsourcing is a burgeoning technology with the potential to mitigate the burden of limited access to speech-language pathologists (SLPs). This pilot study investigates the concordance of online crowdsourced evaluations of hypernasality with SLP ratings in children with cleft palate.

METHODS: Six audio phrases from each child with cleft palate were assessed via online crowdsourcing on Amazon Mechanical Turk (MTurk) and compared to the SLP's gold-standard hypernasality score on the Pittsburgh Weighted Speech Score (PWSS). Phrases were presented to MTurk crowdsourced lay-raters, who assessed hypernasality on a Likert scale analogous to the PWSS. The survey included clickable reference audio samples for different levels of hypernasality.

RESULTS: A total of 1,088 unique online crowdsourced speech ratings were collected on 16 sentences from 3 children with cleft palate aged 4-8 years, with audio recorded an average of 6.5 years after cleft palate surgery. Patient 1's crowd mean was 2.62 (SLP rating 2-3); Patient 2's was 2.66 (SLP rating 3); and Patient 3's was 1.76 (SLP rating 2). When rounded for consistency with the PWSS scale, all crowd means matched the SLP ratings. Accuracy relative to the SLP gold-standard scores varied by sentence.

CONCLUSION: Online crowdsourced ratings of hypernasal speech in children with cleft palate were concordant with SLP ratings, predicting SLP scores in all 3 patients. This novel technology has potential for translation into clinical speech assessment and may serve as a valuable screening tool, enabling non-experts to identify children who require further assessment and intervention by a qualified SLP.
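The concordance check reported above (rounding each crowd-mean rating to the nearest integer on the PWSS-analogous scale and comparing it to the SLP rating or rating range) can be sketched as follows. This is an illustrative reconstruction, not the study's analysis code; the patient values are the ones reported in the abstract, and the function name is hypothetical.

```python
# Illustrative sketch of the abstract's rounding comparison: crowd-mean
# hypernasality ratings are rounded to the nearest integer on the
# PWSS-analogous Likert scale and checked against the SLP rating(s).
# Values below are taken from the abstract; everything else is assumed.

patients = [
    # (label, crowd-mean rating, set of acceptable SLP integer ratings)
    ("Patient 1", 2.62, {2, 3}),  # SLP rated 2-3
    ("Patient 2", 2.66, {3}),
    ("Patient 3", 1.76, {2}),
]

def matches_slp(crowd_mean, slp_ratings):
    """Round the crowd mean and test agreement with the SLP rating(s)."""
    return round(crowd_mean) in slp_ratings

for label, crowd_mean, slp_ratings in patients:
    rounded = round(crowd_mean)
    print(f"{label}: crowd mean {crowd_mean} -> {rounded}, "
          f"matches SLP: {matches_slp(crowd_mean, slp_ratings)}")
```

Under this comparison, all three rounded crowd means fall within the SLP ratings, which is the concordance result the abstract reports.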