HealthNews

AI system spots Parkinson’s signs in voice, walking and drawings


By merging voice instability, gait asymmetry, and tremor-driven handwriting changes into a single explainable AI framework, researchers show how digital biomarkers can move Parkinson’s detection closer to reliable real-world screening.

Study: Explainable multimodal feature fusion networks for Parkinson’s disease prediction.

Recent advances in computing, especially the use of artificial intelligence, hold promise for increased accuracy and efficiency of medical diagnosis. A recent study published in the journal Frontiers in Digital Health presents a deep learning approach that uses multiple modalities of input data to improve the detection of Parkinson’s disease.

Digital biomarkers aim to catch Parkinson’s disease early

Parkinson’s disease (PD) is a progressive neurodegenerative disorder. It manifests as motor impairments, including tremor, rigidity, gait abnormalities, handwriting difficulties, and slowed movement. It also presents with impaired cognition, problems with speech, and sleep issues. PD diagnosis is primarily clinical, based on neurological examination. The subjective nature of this process may increase the risk of misdiagnosis or of missed diagnosis, especially in early disease.

Artificial intelligence (AI) can help overcome these limitations by analyzing handwriting, gait, and speech for telltale signs of early dysfunction. These objectively measured digital biomarkers can help detect PD at early stages. AI-driven speech analysis has achieved up to 99% accuracy in controlled datasets. Similarly, gait-based analytics can discriminate between PD patients and healthy controls with up to 97% accuracy. Handwriting analysis has also achieved nearly 98% accuracy.

Despite this, each modality has significant problems when applied in clinical contexts. For instance, speech analysis may be confounded by differences in accent, language, or background noise. Similar quality issues affect gait- and handwriting-based detection: gait analysis relies heavily on the proper use of high-quality sensors, whereas handwriting analysis is often based on experiments performed in controlled rather than real-world conditions. As a result, these unimodal systems generalize poorly and cannot be easily scaled.


AI models are also often poorly interpretable; they offer predictions but do not explain the reasoning behind how and why decisions are made. This has led to the introduction of explainability mechanisms, exemplified in this case by SHapley Additive exPlanations (SHAP), Gradient-weighted Class Activation Mapping (Grad-CAM), and Integrated Gradients (IG). When integrated into PD models, these allow clinicians to understand which attributes influenced the decision-making process. The relatively limited use of such mechanisms has slowed clinical acceptance of AI-based detection systems.
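The attribution idea behind SHAP can be illustrated without any library: a Shapley value credits each feature with its average marginal contribution across all possible feature coalitions. The sketch below computes exact Shapley values for a hypothetical two-feature "risk score"; the feature names and scores are invented for illustration, not taken from the study.

```python
from itertools import combinations
from math import factorial

def shapley_values(features, value_fn):
    """Exact Shapley values by enumerating all feature coalitions.

    value_fn maps a frozenset of present features to a model score.
    """
    n = len(features)
    phi = {}
    for f in features:
        others = [g for g in features if g != f]
        total = 0.0
        for r in range(len(others) + 1):
            for coalition in combinations(others, r):
                s = frozenset(coalition)
                # Standard Shapley weight: |S|! (n - |S| - 1)! / n!
                weight = factorial(len(s)) * factorial(n - len(s) - 1) / factorial(n)
                total += weight * (value_fn(s | {f}) - value_fn(s))
        phi[f] = total
    return phi

# Toy "risk score": tremor adds 0.3, vocal instability adds 0.2,
# and the two interact for an extra 0.1 (all numbers invented).
def risk(present):
    score = 0.0
    if "tremor" in present:
        score += 0.3
    if "voice" in present:
        score += 0.2
    if {"tremor", "voice"} <= present:
        score += 0.1
    return score

phi = shapley_values(["tremor", "voice"], risk)
# The attributions sum exactly to the full-model score of 0.6
```

A key property visible here is additivity: the per-feature attributions always sum to the model's output, which is what lets clinicians see how much each modality contributed to a prediction.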

The current study sought to overcome these obstacles by using a multimodal deep learning framework that incorporates three modalities: gait, handwriting, and speech. This approach integrates complementary findings from multiple modalities, representative of the wide range of PD clinical features, into a single prediction. If one modality is unreliable or noisy, the other two may help strengthen overall classification performance.

Even so, explainability has lagged behind in multimodal frameworks, limiting their adoption in clinical practice. In view of this gap, the researchers present a static early-feature fusion system. The model combines modality-specific features via feature concatenation, followed by XGBoost classification, thereby optimizing overall prediction performance. In addition, the model includes SHAP, Grad-CAM, and Integrated Gradients to ensure interpretability.

Inside the trimodal early fusion architecture

In this model, deep neural networks were used to process individual modalities via dedicated feature-extraction pipelines. For speech, log-Mel spectrograms were analyzed using EfficientNet-B0; for gait, temporal convolutional networks and autoencoders were used to extract vertical ground reaction force features; and for handwriting, spiral drawings were processed using ResNet-50. This was followed by static feature concatenation and classification with an XGBoost model. Explainable AI techniques were employed to make the model interpretable at both modality and feature levels.
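The static early-fusion step itself is simple to picture: each backbone reduces its modality to a fixed-length feature vector, and fusion is plain concatenation before the classifier. A minimal numpy sketch, with embedding sizes invented for illustration rather than taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-modality embeddings for one subject; the sizes
# below are illustrative stand-ins, not the study's actual dimensions.
speech_feat = rng.random(128)        # e.g. pooled EfficientNet-B0 output
gait_feat = rng.random(64)           # e.g. TCN/autoencoder bottleneck
handwriting_feat = rng.random(128)   # e.g. pooled ResNet-50 output

# Static early fusion: concatenate once into a single vector that a
# downstream classifier (XGBoost in the study) would consume.
fused = np.concatenate([speech_feat, gait_feat, handwriting_feat])
# fused.shape == (320,)
```

Because the concatenation is static, every modality contributes a fixed slice of the fused vector; this is what makes per-modality SHAP attributions straightforward, but it is also why the model cannot re-weight modalities when one input is noisy.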


For speech analysis, log-Mel spectrogram representations were used to capture vocal instability, pitch variation, and spectral features associated with PD. Using multiple vocal parameters improved prediction performance. Similarly, wearable sensor–derived gait signals, specifically vertical ground reaction force data from a public PhysioNet dataset, were analyzed to capture stride irregularities, asymmetry, and temporal instability.
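A log-Mel spectrogram can be computed from scratch in a few lines. This numpy-only sketch (the window, hop, and mel-band counts are arbitrary choices, not the study's settings) shows the pipeline from raw waveform to the image-like input a CNN such as EfficientNet-B0 would consume:

```python
import numpy as np

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def log_mel_spectrogram(signal, sr=16000, n_fft=512, hop=160, n_mels=40):
    """Minimal log-Mel spectrogram (numpy only, illustrative settings)."""
    # Short-time Fourier transform with a Hann window
    window = np.hanning(n_fft)
    frames = []
    for start in range(0, len(signal) - n_fft + 1, hop):
        frame = signal[start:start + n_fft] * window
        frames.append(np.abs(np.fft.rfft(frame)) ** 2)
    power = np.array(frames).T  # shape: (n_fft//2 + 1, n_frames)

    # Triangular mel filterbank, evenly spaced on the mel scale
    mel_pts = np.linspace(hz_to_mel(0), hz_to_mel(sr / 2), n_mels + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_pts) / sr).astype(int)
    fbank = np.zeros((n_mels, n_fft // 2 + 1))
    for i in range(1, n_mels + 1):
        l, c, r = bins[i - 1], bins[i], bins[i + 1]
        for k in range(l, c):
            fbank[i - 1, k] = (k - l) / max(c - l, 1)
        for k in range(c, r):
            fbank[i - 1, k] = (r - k) / max(r - c, 1)

    return np.log(fbank @ power + 1e-10)

# One second of a 220 Hz tone as a stand-in for a voice recording
t = np.arange(16000) / 16000.0
spec = log_mel_spectrogram(np.sin(2 * np.pi * 220 * t))
```

The resulting 2-D array (mel bands by time frames) is treated exactly like an image, which is why an image classifier can be repurposed for voice analysis.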

For handwriting analysis, digitized spiral drawings were used to detect tremor-induced deviations, curvature changes, and micrographia. Grad-CAM visualizations highlighted regions of the spiral most influential in classification decisions.
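A toy version of the tremor-deviation idea: fit an ideal Archimedean spiral to the drawn coordinates and measure how far the trace wobbles around it. All amplitudes and frequencies below are invented for illustration, not derived from the study's data.

```python
import numpy as np

# Synthetic Archimedean spiral r = a * theta, with a small
# high-frequency wobble standing in for tremor.
theta = np.linspace(0, 6 * np.pi, 600)
a = 1.0
tremor = 0.05 * np.sin(40 * theta)   # tremor-like oscillation
r = a * theta + tremor
x, y = r * np.cos(theta), r * np.sin(theta)

# Fit the ideal spiral's growth rate by least squares, then measure
# how far the observed radius deviates from the fitted spiral.
r_obs = np.hypot(x, y)
a_hat = np.sum(r_obs * theta) / np.sum(theta ** 2)
deviation = r_obs - a_hat * theta
rms_deviation = float(np.sqrt(np.mean(deviation ** 2)))
```

A healthy drawing would yield a near-zero RMS deviation, while tremor shows up as a residual oscillation around the fitted spiral; features like this are what a CNN learns to pick up from the raster image.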

Importantly, unlike several studies cited in the literature review, this framework did not incorporate cerebrospinal fluid biomarkers, neuroimaging, olfactory testing, sleep data, facial movement analysis, or finger-tapping assessments. The proposed system relied exclusively on speech, gait, and handwriting datasets.

Benchmark datasets validate multimodal performance

The system was evaluated using publicly available benchmark datasets: a spiral handwriting dataset (3,264 samples), the MDVR-KCL speech dataset (approximately 73 subjects), and the GAITPDB gait dataset (approximately 168 subjects). Fivefold stratified cross-validation was employed to ensure robust evaluation.
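Stratified cross-validation simply preserves the class ratio in every fold. A pure-Python stand-in for the idea (the sample counts are invented, and real pipelines would typically use scikit-learn's StratifiedKFold):

```python
import random
from collections import defaultdict

def stratified_kfold(labels, k=5, seed=0):
    """Split sample indices into k folds, each keeping the overall
    class ratio (a minimal stand-in for sklearn's StratifiedKFold)."""
    rng = random.Random(seed)
    by_class = defaultdict(list)
    for idx, y in enumerate(labels):
        by_class[y].append(idx)
    folds = [[] for _ in range(k)]
    for idxs in by_class.values():
        rng.shuffle(idxs)
        for i, idx in enumerate(idxs):
            folds[i % k].append(idx)  # round-robin within each class
    return folds

# Toy labels: 60 PD (1) and 40 control (0) samples
labels = [1] * 60 + [0] * 40
folds = stratified_kfold(labels, k=5)
# Every fold ends up with 20 samples: 12 PD and 8 controls
```

Stratification matters here because PD datasets are often imbalanced; without it, a fold could contain mostly one class and distort the accuracy estimate.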

The trimodal fusion model achieved an accuracy of 92%, outperforming unimodal handwriting (91%), gait (90%), and speech (74%) models. It achieved a macro F1-score of 0.89, an area under the ROC curve (AUC) of 0.95, and an average precision of 0.96, with balanced sensitivity and specificity of approximately 90% and 89%, respectively.

In simpler terms, the combined model correctly classified roughly nine out of ten cases while maintaining a good balance between identifying people with Parkinson’s disease and avoiding false alarms.
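Sensitivity and specificity fall directly out of the confusion counts. A minimal sketch with made-up predictions (not the study's data), using the convention 1 = PD and 0 = healthy control:

```python
def binary_metrics(y_true, y_pred):
    """Sensitivity (recall on PD cases), specificity (recall on
    controls), and accuracy from paired binary label lists."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    sensitivity = tp / (tp + fn)   # fraction of PD cases caught
    specificity = tn / (tn + fp)   # fraction of controls cleared
    accuracy = (tp + tn) / len(y_true)
    return sensitivity, specificity, accuracy

# Toy example: 10 PD cases (9 caught) and 10 controls (9 cleared)
y_true = [1] * 10 + [0] * 10
y_pred = [1] * 9 + [0] + [0] * 9 + [1]
sens, spec, acc = binary_metrics(y_true, y_pred)
# sens = 0.9, spec = 0.9, acc = 0.9
```

The balance the study reports (about 90% sensitivity versus 89% specificity) is the clinically important part: a screening tool that trades one heavily against the other either misses patients or floods clinics with false alarms.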

Bootstrapped confidence intervals further supported the statistical robustness of these results. External validation experiments demonstrated similar classification patterns, although with slight performance variation attributable to dataset differences.
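The percentile bootstrap behind such confidence intervals can be sketched in pure Python: resample the per-sample outcomes with replacement many times, recompute the metric each time, and read off the percentiles. The numbers below are illustrative, not the paper's.

```python
import random

def bootstrap_ci(outcomes, n_boot=2000, alpha=0.05, seed=42):
    """Percentile bootstrap CI for accuracy, given a list of
    per-sample correctness indicators (1 = correct, 0 = wrong)."""
    rng = random.Random(seed)
    n = len(outcomes)
    stats = []
    for _ in range(n_boot):
        # Resample the same number of outcomes with replacement
        resample = [outcomes[rng.randrange(n)] for _ in range(n)]
        stats.append(sum(resample) / n)
    stats.sort()
    lo = stats[int((alpha / 2) * n_boot)]
    hi = stats[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi

# Toy run: 92 correct predictions out of 100
outcomes = [1] * 92 + [0] * 8
lo, hi = bootstrap_ci(outcomes)
# The interval brackets the observed 0.92 accuracy
```

Reporting an interval rather than a point estimate is what lets readers judge whether the gap between, say, 92% fused and 91% handwriting-only accuracy is meaningful at these sample sizes.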


The model performed better than unimodal systems and provided an interpretable AI-assisted framework. However, the fusion mechanism involved static concatenation rather than adaptive or reliability-based dynamic weighting, and the study did not experimentally simulate missing-modality scenarios. The authors also emphasize that while multimodal fusion improved robustness, performance was evaluated retrospectively on benchmark datasets rather than in prospective clinical trials.

Explainable AI strengthens Parkinson’s screening potential

The study presents an AI-based diagnostic system built on multimodal feature fusion that demonstrates solid performance and interpretability on benchmark datasets.

However, the authors acknowledge important limitations. The framework has not yet undergone prospective clinical validation, was evaluated only for binary classification (PD versus healthy controls), and did not include clinician-guided assessment of the explainability of its outputs. Additionally, modality-specific generalizability challenges remain, particularly for speech and gait data collected under different real-world conditions.

Future studies should involve neurologists and longitudinal analyses to establish the clinical validity of this framework, build trust, and ensure regulatory readiness. Lighter, deployment-oriented versions of the model, along with more adaptive multimodal fusion strategies, may further enhance real-world applicability.




Digit
