Increase the breadth and depth of clinical assessment quality reviews in clinical trials. We provide fast, reliable quality indicators for 100% of clinical assessments administered, including reviews of both administration and scoring.
Some of the most costly errors in clinical trials come from poorly administered and scored clinical interviews. Winterlight’s speech and language analysis platform quickly identifies potential quality issues, allowing for faster site and rater remediation.
Our QA pipeline reviews and flags poorly administered or scored clinical interviews across a variety of cognitive tests and clinician-reported outcomes.
We also detect scoring errors through automated analysis of tasks such as word recall, serial seven subtraction, and category fluency, as well as more complex items like word finding difficulty.
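For many of these tasks, the scoring check reduces to simple arithmetic over the participant's transcribed answers. A minimal sketch for serial seven subtraction, assuming the numeric responses have already been extracted from the transcript (the function names and flagging logic here are illustrative, not Winterlight's API):

```python
# Minimal sketch: re-score a serial sevens item from transcribed responses
# and flag a discrepancy with the rater's recorded score. Assumes the numeric
# answers were already extracted from the ASR transcript.

def score_serial_sevens(responses, start=100, n_steps=5):
    """Score each response as correct if it is exactly 7 less than the
    previous response (standard convention: errors are not compounded)."""
    score = 0
    previous = start
    for answer in responses[:n_steps]:
        if answer == previous - 7:
            score += 1
        previous = answer  # later answers are judged relative to this one
    return score

def flag_scoring_discrepancy(responses, rater_score):
    """Return a QA flag when the automated score disagrees with the rater's."""
    auto_score = score_serial_sevens(responses)
    return {"auto_score": auto_score,
            "rater_score": rater_score,
            "needs_review": auto_score != rater_score}

# Example: participant says 93, 86, 80, 73, 66 -> one slip at 80, but 73 is
# correct relative to 80, so the automated score is 4 and the item is flagged.
print(flag_scoring_discrepancy([93, 86, 80, 73, 66], rater_score=5))
```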
We can run quality reviews on 100% of the clinical assessments conducted in a trial, covering a wide range of standard cognitive and clinical assessments.
We automatically extract scores and quality indicators from recordings, including:
Automated analysis of rater and participant speech, covering pacing of the rater, latency to respond, amount of speech, rater identity across visits, and more (see the sketch below)
Review of clinical content to identify administration issues, such as deviations from clinical standards, clinician interruptions, or skipped segments
Automated clinical scores based on machine learning models, including word recall scores, word finding difficulty, and spoken language ability
We can flag whole assessments that require expert review.
We can point reviewers to the specific subsection of the assessment requiring review.
We provide reviewers with assessment transcripts, making reviews faster and more thorough.
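The speech indicators listed above are straightforward to compute once a recording has been diarized into speaker turns. A minimal sketch, assuming segments arrive as (speaker, start, end) tuples; the segment format, latency threshold, and function names are illustrative assumptions, not a description of Winterlight's pipeline:

```python
# Minimal sketch: compute simple quality indicators from diarized segments.
# Each segment is (speaker, start_sec, end_sec); the format and threshold
# are illustrative assumptions.

def speech_indicators(segments, latency_threshold=5.0):
    talk_time = {"rater": 0.0, "participant": 0.0}
    latencies = []
    prev_speaker, prev_end = None, None
    for speaker, start, end in sorted(segments, key=lambda s: s[1]):
        talk_time[speaker] += end - start
        # Latency to respond: gap between the end of a rater turn and the
        # start of the next participant turn.
        if prev_speaker == "rater" and speaker == "participant":
            latencies.append(max(0.0, start - prev_end))
        prev_speaker, prev_end = speaker, end
    mean_latency = sum(latencies) / len(latencies) if latencies else None
    return {
        "rater_talk_sec": round(talk_time["rater"], 1),
        "participant_talk_sec": round(talk_time["participant"], 1),
        "mean_response_latency_sec": mean_latency,
        "flag_slow_responses": bool(mean_latency and mean_latency > latency_threshold),
    }

segments = [("rater", 0.0, 4.2), ("participant", 6.0, 15.5),
            ("rater", 16.0, 18.0), ("participant", 19.2, 30.0)]
print(speech_indicators(segments))
```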
Natural language processing tools can be used to automate and standardize the scoring of clinical assessments. Many cognitive assessments used as endpoints in Alzheimer’s disease (AD) trials require manual scoring and review, which can be costly and time-consuming. Developments in natural language processing can be leveraged to build automated, objective tools that generate text transcripts and produce scores for cognitive assessments. As a proof of concept, we tested an automated method to score the word recall portion of the ADAS-Cog, a standard endpoint in AD research. In this study, we found that preconfigured automated systems approached human accuracy, although they still tended to underestimate scores due to transcription errors. Future work to refine the use of ASR for evaluating clinical endpoints includes optimizing ASR accuracy by filtering noise before processing samples, further customizing language models to suit the datasets at hand, and exploring the use of ASR in other elements of cognitive assessments, to provide more efficient and scalable scoring methods.
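To make the approach concrete, here is a minimal sketch of ASR-based word recall scoring, assuming the ASR transcript is already available as a string; fuzzy matching absorbs minor transcription errors of the kind that led to underestimated scores in the study. The word list, similarity cutoff, and error-score convention shown are illustrative placeholders, not the exact method or materials used in the study:

```python
# Illustrative sketch of ASR-based word recall scoring: count how many
# target words appear in the transcript, using fuzzy matching to tolerate
# minor ASR errors. The word list and cutoff are placeholders, not the
# actual study materials.
import difflib

TARGET_WORDS = ["butter", "arm", "shore", "letter", "queen",
                "cabin", "pole", "ticket", "grass", "engine"]

def count_recalled(transcript, targets=TARGET_WORDS, cutoff=0.8):
    tokens = transcript.lower().split()
    recalled = set()
    for word in targets:
        # get_close_matches returns transcript tokens similar to the target,
        # so a mis-transcription like "leter" can still count as a recall.
        if difflib.get_close_matches(word, tokens, n=1, cutoff=cutoff):
            recalled.add(word)
    return len(recalled)

# Word recall is commonly reported as an error score: the mean number of
# words NOT recalled across three trials.
trials = ["butter arm shore leter queen",
          "queen cabin grass butter",
          "engine ticket butter arm shore"]
error_score = sum(len(TARGET_WORDS) - count_recalled(t) for t in trials) / len(trials)
print(error_score)
```

Raising the similarity cutoff trades missed recalls for fewer false matches; the noise filtering and language-model customization described above attack the same underestimation problem from the ASR side.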
Winterlight Labs
100 King Street West
1 First Canadian Place
Suite 6200, P.O. Box 50
Toronto ON M5X 1B8