An interactive framework for the detection of ictal and interictal activities: Cross-species and stand-alone implementation
Guillermo M Besné, Alejandro Horrillo-Maysonnial, María Jesús Nicolás, Ferran Capell-Pascual, Elena Urrestarazu, Julio Artieda, Miguel Valencia
Background and objective: Despite advances in signal analysis and artificial intelligence, visual inspection remains the gold standard for event detection in electroencephalographic recordings. This process demands substantial time from clinical experts, both for annotating recordings and for training new experts in the same task. In epilepsy, the need for automatic tools is especially prominent, as both seizures and interictal events can occur throughout hours- or days-long recordings. Although other solutions have been proposed, most are not integrated into clinical and basic-science environments because of their complexity and the specialization they require.
Here we present a pipeline developed through coordinated efforts among life-science researchers, clinicians, and data scientists: an interactive and iterative workflow for training machine-learning tools to automatically detect electroencephalographic events in a variety of scenarios.
Methods: The approach consists of a series of sequential steps covering data loading and configuration, event annotation, model training/re-training, and event detection. With slight modifications, the combination of these blocks can cope with a variety of scenarios.
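The iterative structure described above (annotate, train, detect, then let the expert correct the detections and re-train) can be sketched as follows. This is a hypothetical illustration, not the framework's actual API: the feature set, the `RandomForestClassifier` choice, and the `review_fn` callback are all assumptions made for the example.

```python
# Hypothetical sketch of the iterative annotate -> train -> detect -> re-train
# loop described in the Methods. Names and model choices are illustrative.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def extract_features(windows):
    """Toy per-window features: mean, standard deviation, peak-to-peak."""
    return np.column_stack([windows.mean(axis=1),
                            windows.std(axis=1),
                            np.ptp(windows, axis=1)])

def iterative_detection(windows, labels, n_rounds=3, review_fn=None):
    """Train a detector, propose events, optionally have an expert
    correct the proposals, and re-train on the corrected labels."""
    X = extract_features(windows)
    model = RandomForestClassifier(n_estimators=100, random_state=0)
    for _ in range(n_rounds):
        model.fit(X, labels)
        proposed = model.predict(X)
        if review_fn is None:            # no expert in the loop: stop early
            return model, proposed
        labels = review_fn(proposed)     # expert corrects the proposals
    return model, model.predict(X)

# Example on synthetic 1-second windows sampled at 256 Hz
rng = np.random.default_rng(0)
windows = rng.normal(size=(200, 256))
windows[:50] *= 5                        # "events" with larger amplitude
labels = np.array([1] * 50 + [0] * 150)
model, detected = iterative_detection(windows, labels)
```

In the real workflow the `review_fn` step would be the interactive annotation interface, which is what makes the pipeline usable by experts without programming skills.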
To illustrate the flexibility and robustness of the approach, three datasets from clinical (patients with Dravet syndrome) and basic research environments (a mouse model of the same disease) were evaluated. From them, and in response to researchers' daily needs, four real-world examples of interictal event detection and seizure classification tasks were selected and processed.
Results: The approach greatly aided event annotation and model development. With slight adjustments to the analysis protocol, it produced custom machine-learning solutions for each scenario that were easily accessible to users without programming skills. Final annotator similarity metrics exceeded 80% in all use cases, reaching 92.3% for interictal event detection in human recordings.
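The abstract does not specify how annotator similarity is computed; a common choice for comparing a detector's output against expert annotations is an overlap-based, event-level F1 score, sketched below under that assumption (the function and interval representation are illustrative).

```python
# Illustrative event-level F1 agreement between a reference annotation and a
# set of proposed detections. Events are (start, end) intervals in seconds;
# a proposal counts as a hit if it overlaps any reference event.
def events_overlap(a, b):
    """True if intervals a = (start, end) and b = (start, end) overlap."""
    return a[0] < b[1] and b[0] < a[1]

def event_f1(reference, proposed):
    """Overlap-based F1 between two event lists."""
    tp = sum(any(events_overlap(r, p) for r in reference) for p in proposed)
    fn = sum(not any(events_overlap(r, p) for p in proposed) for r in reference)
    fp = len(proposed) - tp
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = (len(reference) - fn) / len(reference) if reference else 0.0
    denom = precision + recall
    return 2 * precision * recall / denom if denom else 0.0

ref = [(1.0, 2.0), (5.0, 6.0), (9.0, 10.0)]   # expert annotations
prop = [(1.1, 1.9), (5.5, 6.5), (12.0, 13.0)] # detector output
print(round(event_f1(ref, prop), 3))  # 2/3 precision, 2/3 recall -> 0.667
```

With two of three proposals matching reference events, both precision and recall are 2/3, giving an F1 of about 0.667; the reported similarity values above 80% would correspond to substantially closer agreement.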
Conclusions: The presented framework adapts easily to multiple real-world scenarios, and its interactive, easy-to-use approach makes it manageable for clinical and basic researchers without programming skills. At the same time, it is designed so that data scientists can optimize it for specific scenarios, improving knowledge transfer between these fields.