Browse ATS 2021 Abstracts


ATS 2021 will feature presentations of original research from accepted abstracts. Mini Symposia and Thematic Poster Sessions are abstract-based sessions.

Abstracts presented at ATS 2021 will be published in the Online Abstract Issue of the American Journal of Respiratory and Critical Care Medicine, Volume 203, May 3, 2021.


Artificial Intelligence Assists in Quality Assessment of Spirometry in Clinical Trials

Session Title
A4606 - Artificial Intelligence Assists in Quality Assessment of Spirometry in Clinical Trials
Author Block: E. Topole1, S. Biondaro1, I. Montagna1, S. Corre2, K. Ray3, N. Das4, M. Topalovic3; 1Chiesi Farmaceutici S.p.A, Parma, Italy, 2Chiesi SAS, Bois Colombes, France, 3ArtiQ, Leuven, Belgium, 4Laboratory of respiratory medicine and thoracic surgery, KU Leuven, Leuven, Belgium.
Rationale: Most clinical trials involving pharmacological interventions for respiratory disorders use FEV1 or FVC as an endpoint. Therefore, the quality of spirometry manoeuvres is of utmost importance for the reliability of these indices. While the ATS/ERS manoeuvre acceptability criteria include quantifiable recommendations, they also require experienced technicians to subjectively evaluate whether spirometry is free from artefacts and acceptable, leading to high inter-technician variability. To ensure data quality and consistency in clinical trials, experienced over-readers (OR) review all spirometry sessions. In this retrospective analysis, we explored the value of artificial intelligence (AI)-based quality control (QC) software, compared to the OR, for ensuring spirometry quality in clinical trials.
Methods: In this non-interventional retrospective analysis, a random selection of 1000 unique spirometry sessions (N=4085 curves) was made from the Chiesi COPD clinical trials database. An OR label indicated whether a manoeuvre was acceptable to provide a reliable measurement of endpoints. The standard version of the AI software (ArtiQ.QC) was based on a deep-learning model (Das, ERJ 2020) that determined the acceptability of a manoeuvre using the ATS/ERS 2005 guidelines in the clinical practice setting. Primarily, the performance of the QC software against the OR labels was assessed. Subsequently, the software was re-calibrated on 80% of the dataset and tested on the remaining 20% ('test set', N=818 curves). Additionally, two respiratory physicians manually reviewed a subset of curves from the test set, comprising all curves where the ArtiQ.QC and OR labels differed (N=109, 13% of the test set) and a sample of curves where the ArtiQ.QC and OR labels matched (N=26).
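The recalibration described above relies on a random 80/20 train/test split of the curves. A minimal sketch of such a split is shown below; the function name, seed, and the choice to split at the curve level are assumptions for illustration, not details from the study.

```python
import random

def split_dataset(curves, train_frac=0.8, seed=42):
    """Shuffle curves and split them into a recalibration (train) set
    and a held-out test set. `seed` fixes the shuffle for reproducibility."""
    rng = random.Random(seed)
    shuffled = curves[:]
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * train_frac)
    return shuffled[:cut], shuffled[cut:]

# Placeholder IDs standing in for the 4085 curves in the dataset.
curves = list(range(4085))
train, test = split_dataset(curves)
print(len(train), len(test))
```

In practice a split like this is often stratified (e.g. keeping all curves from one spirometry session on the same side of the split) to avoid leakage between related manoeuvres; the abstract does not specify which unit was used.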
Results: The agreement between the standard version of ArtiQ.QC and the OR labels was 85%, with high sensitivity (94%) and positive predictive value (PPV=89%). This agreement improved slightly after recalibration (87%, measured in the test set), with 96% sensitivity and 89% PPV. In the manual review, when the ArtiQ.QC and OR labels matched (N=26), the respiratory physicians agreed in 88% of cases (N=23). When the ArtiQ.QC and OR labels differed, the physicians agreed with ArtiQ.QC in 60% of cases (N=65). Using the labels from the physician review to revise the OR labelling, the accuracy of the recalibrated QC software in the test set was 94%, with 100% sensitivity and 94% PPV.
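The comparison above reduces to standard binary-classification metrics over per-curve acceptability labels. A minimal sketch of how agreement, sensitivity, and PPV could be computed is given below; the function name and sample labels are hypothetical, not data from the study.

```python
def qc_metrics(software_labels, reference_labels):
    """Compare software QC decisions against over-reader reference labels.
    Labels: True = manoeuvre acceptable, False = not acceptable."""
    pairs = list(zip(software_labels, reference_labels))
    tp = sum(s and r for s, r in pairs)                  # both say acceptable
    tn = sum(not s and not r for s, r in pairs)          # both say not acceptable
    fp = sum(s and not r for s, r in pairs)              # software-only acceptable
    fn = sum(not s and r for s, r in pairs)              # reference-only acceptable
    n = tp + tn + fp + fn
    return {
        "agreement": (tp + tn) / n,     # fraction of curves where labels match
        "sensitivity": tp / (tp + fn),  # acceptable curves correctly accepted
        "ppv": tp / (tp + fp),          # accepted curves that truly are acceptable
    }

# Toy example with six curves (illustrative only).
software = [True, True, True, False, False, True]
reference = [True, True, False, False, True, True]
print(qc_metrics(software, reference))
```

Treating "acceptable" as the positive class, as here, matches the reported pairing of sensitivity with PPV rather than with specificity.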
Conclusion: The AI software provides quality control comparable to manual OR review in clinical trials. By delivering immediate and consistent results, AI may ensure consistent, high-quality spirometry evaluation in a timely manner, with beneficial impacts on clinical trial conduct and outcomes.