HealthNews

AI insights into kids’ eating and obesity risk



By teaching artificial intelligence to spot every bite a child takes, scientists are revealing hidden eating patterns that could transform how we prevent obesity from the dinner table outward.

Study: ByteTrack: a deep learning approach for bite count and bite rate detection using meal videos in children. Image credit: Andrii Spy_k/Shutterstock.com

Eating behaviors shed light on the risk for overconsumption and obesity. A new study published in the journal Frontiers in Nutrition presents a deep learning system to analyze bite behavior among children, using videos that record children’s meals.

Introduction

Meal microstructure describes the various behaviors that occur during a bout of eating: bites, chews, bite rate, and bite size. Analyzing meal microstructure helps to identify individual eating patterns and their variations across a spectrum of food types and uncover the mechanisms that underlie eating disorders and obesity.

Children who develop obesity are more likely to take larger bites and eat faster, both of which increase the amount of food consumed. Preventive interventions could be tailored using observed meal microstructure, providing a novel means of curbing this epidemic.

The gold standard for analyzing bites and meal microstructure is manual observational coding, in which trained coders watch video recordings of children’s eating behaviors and annotate them with timestamps. Though highly reliable and accurate, this method is labor-intensive, time-consuming, and costly.
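To make the link between coded timestamps and the metrics described above concrete, the sketch below shows how bite count and bite rate could be derived from a list of manually annotated bite times. The timestamp values and the bites-per-minute convention are illustrative assumptions, not details taken from the study.

```python
# Minimal sketch: deriving bite count and bite rate from coded bite timestamps.
# The timestamps and the bites-per-minute convention are illustrative assumptions.

def bite_metrics(bite_times_s: list[float], meal_duration_s: float) -> dict:
    """Compute bite count and bite rate (bites per minute) for one meal."""
    bite_count = len(bite_times_s)
    bite_rate = bite_count / (meal_duration_s / 60.0) if meal_duration_s > 0 else 0.0
    return {"bite_count": bite_count, "bite_rate_per_min": round(bite_rate, 2)}

# Example: hypothetical annotations for a 6-minute (360 s) meal.
annotated_bites = [12.4, 31.0, 55.2, 80.9, 130.5, 190.0, 250.3, 310.7]
print(bite_metrics(annotated_bites, 360.0))
# {'bite_count': 8, 'bite_rate_per_min': 1.33}
```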

Automated bite detection systems could be far more efficient and scalable than manual coding. However, most existing systems rely on adult data from acoustic sensors and accelerometers and use preset motion thresholds, so they may misinterpret drinking or gesturing, for instance, as bites.

Different ways of eating (with spoons, chopsticks, or by hand) also complicate detection, and the wide variability of eating behavior itself makes it difficult to automate detection across different settings.


This has led to video-based automated platforms for bite detection. These platforms may use location-based criteria (hand-to-face distance, mouth opening) or optical flow methods that track movement across successive frames. However, they cannot reliably distinguish eating from the other movements that are especially common in children.
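As a rough illustration of such location-based criteria, the sketch below flags a candidate "bite" whenever a tracked hand point comes within a threshold distance of the mouth. The landmark coordinates and threshold are hypothetical; the point is that any gesture bringing the hand near the face would trigger the same rule, which is exactly the weakness noted above.

```python
# Minimal sketch of a location-based heuristic: hand-to-mouth distance thresholding.
# Landmark coordinates and the threshold are hypothetical; a real system would get
# them from a pose/landmark detector run on each video frame.
import math

def is_candidate_bite(hand_xy: tuple[float, float],
                      mouth_xy: tuple[float, float],
                      threshold_px: float = 40.0) -> bool:
    """Flag a frame as a possible bite when the hand is close to the mouth."""
    return math.dist(hand_xy, mouth_xy) < threshold_px

# A face-touching gesture triggers the same rule as a genuine bite,
# which is why purely geometric criteria over-detect in children.
print(is_candidate_bite((210.0, 305.0), (220.0, 300.0)))  # True (bite or gesture?)
print(is_candidate_bite((400.0, 500.0), (220.0, 300.0)))  # False (hand far from mouth)
```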

This has prompted interest in deep learning methods using convolutional neural networks (CNNs), which have mostly been trained and tested on tightly controlled video recordings of eating, often by adults. Real-world videos are rarely so clean; poor lighting and varied eating movements are the norm. Deep learning could help overcome the interpretation difficulties such artifacts cause.

About the study

ByteTrack is a deep learning system that estimates bite count and bite rate from video-recorded child meals. It was trained on 242 videos (1,440 minutes) recorded from 94 children aged 7-9 years, each of whom completed four meal sessions one week apart. A 52-video subset was used to train the face detection component, and the videos were augmented to introduce real-world-like variations in recording conditions.
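The paper does not list its exact augmentation pipeline, but a sketch along the following lines, using torchvision transforms to mimic low light, blur, and rotation, conveys the general idea of introducing real-world-like variation into training frames. The specific transforms and parameter values are assumptions.

```python
# Illustrative augmentation pipeline for training frames (assumed transforms and
# parameters; the study does not specify its exact augmentation settings).
from torchvision import transforms

train_augment = transforms.Compose([
    transforms.ToTensor(),
    transforms.ColorJitter(brightness=0.4, contrast=0.3),      # simulate low light / exposure shifts
    transforms.GaussianBlur(kernel_size=5, sigma=(0.1, 2.0)),  # simulate motion blur / defocus
    transforms.RandomRotation(degrees=15),                     # simulate camera tilt
    transforms.RandomHorizontalFlip(p=0.5),                    # left- vs. right-handed eating
    transforms.Resize((224, 224)),
])

# Usage: apply per frame (a PIL.Image from the video) before feeding the frame
# sequence to the model; the output is a 3x224x224 tensor.
```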

For the recordings, the children ate four meals, one week apart, consisting of the same foods served in different amounts. The system works in two stages. The first stage performs face detection, locking onto the face of the target child while ignoring other people and objects.

Two detectors were combined for this purpose: one focused on rapid face recognition and the other on recognition in challenging situations, such as when the face is partly blocked. Together they aim to achieve efficient and accurate face detection.
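The study does not name the two detectors, so the sketch below treats them as interchangeable callables: the fast detector is tried first, the occlusion-robust one serves as a fallback, and the target child is kept by choosing the detection closest to the previous frame's face. All function names here are placeholders, not the pipeline's actual components.

```python
# Sketch of a two-detector face-detection stage with target-child tracking.
# `fast_detect` and `robust_detect` are placeholders for the (unnamed) detectors;
# each is assumed to return a list of (x, y, w, h) boxes for one frame.
from typing import Callable, Optional

Box = tuple[float, float, float, float]

def detect_target_face(frame,
                       fast_detect: Callable[[object], list[Box]],
                       robust_detect: Callable[[object], list[Box]],
                       prev_box: Optional[Box]) -> Optional[Box]:
    """Return the target child's face box, preferring the fast detector."""
    boxes = fast_detect(frame)
    if not boxes:                      # fast detector missed (e.g., partial occlusion)
        boxes = robust_detect(frame)   # fall back to the slower, occlusion-robust detector
    if not boxes:
        return prev_box                # keep the last known location if both fail

    if prev_box is None:
        return max(boxes, key=lambda b: b[2] * b[3])  # assume the largest face is the target child

    # Otherwise pick the detection closest to the previous face position.
    px, py = prev_box[0] + prev_box[2] / 2, prev_box[1] + prev_box[3] / 2
    return min(boxes, key=lambda b: (b[0] + b[2] / 2 - px) ** 2 + (b[1] + b[3] / 2 - py) ** 2)
```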


The second stage uses this cleaned output to distinguish bite activity from other movements. For this purpose, an EfficientNet convolutional neural network (CNN) was combined with a long short-term memory (LSTM) recurrent network. The model accounted for blur, low light, changes in orientation, rotation, camera shake, and hands or utensils blocking the view of the mouth. Its results were compared against manual observational coding.
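The paper describes this stage as an EfficientNet feature extractor feeding an LSTM; a minimal PyTorch sketch of that kind of architecture could look like the following. The EfficientNet variant, hidden size, and clip length are assumptions rather than the study's actual configuration.

```python
# Minimal sketch of a CNN+LSTM bite classifier (assumed EfficientNet-B0 backbone,
# hidden size, and clip length; the study's exact configuration is not specified).
import torch
import torch.nn as nn
from torchvision.models import efficientnet_b0

class BiteClassifier(nn.Module):
    def __init__(self, hidden_size: int = 256):
        super().__init__()
        backbone = efficientnet_b0(weights=None)
        backbone.classifier = nn.Identity()          # keep the 1280-dim per-frame features
        self.backbone = backbone
        self.lstm = nn.LSTM(input_size=1280, hidden_size=hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, 2)        # bite vs. non-bite segment

    def forward(self, clips: torch.Tensor) -> torch.Tensor:
        # clips: (batch, frames, 3, 224, 224)
        b, t, c, h, w = clips.shape
        feats = self.backbone(clips.reshape(b * t, c, h, w)).reshape(b, t, -1)
        seq_out, _ = self.lstm(feats)                # temporal modelling across frames
        return self.head(seq_out[:, -1])             # classify using the last time step

# Example: a batch of 2 clips, 16 frames each.
logits = BiteClassifier()(torch.randn(2, 16, 3, 224, 224))
print(logits.shape)  # torch.Size([2, 2])
```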

Study findings

In testing, the first-stage face detection showed high recall and precision, both above 98%, indicating that the system balanced speed with tolerance of the variable visual appearances seen during eating.

The second stage showed moderate bite-detection performance, averaging 79% precision, 68% recall, and an F1 score of about 71%. Bites were generally overcounted, especially during the early part of the meal, while longer eating sessions and the later part of the meal tended to be undercounted.
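For readers less familiar with these metrics, the short sketch below shows how precision, recall, and F1 could be computed for one video by matching detected bites against manually coded bites. The counts are made up for illustration; the study reports averages across videos, so these numbers will not reproduce the paper's exact figures.

```python
# Sketch of per-video precision / recall / F1 from matched bite events.
# tp = detections that match a coded bite, fp = detections with no match,
# fn = coded bites that were never detected. Counts below are illustrative only.

def detection_scores(tp: int, fp: int, fn: int) -> tuple[float, float, float]:
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

p, r, f1 = detection_scores(tp=34, fp=9, fn=16)
print(f"precision={p:.2f} recall={r:.2f} f1={f1:.2f}")  # precision=0.79 recall=0.68 f1=0.73
```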

Rapid biting early in the meal likely inflated the detected bite count. Later in the meal, children begin to lose interest in the food, which can produce more extraneous movements, including ones that block the mouth, reducing bite detection.

ByteTrack had an intraclass correlation coefficient (ICC) of 0.66 with the gold-standard manual coding, though videos in which the child moved too much or in which hands or utensils blocked the mouth were less reliable. Even so, ByteTrack reflects real-world situations more closely than tightly controlled recordings: in around 80% of the recorded meals, other people were present while the child ate, to simulate natural mealtime environments.
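Agreement between automated and manual bite counts was summarized with an intraclass correlation coefficient; a sketch of how such an ICC could be computed with the pingouin package is shown below. The long-format layout, ICC variant, and bite counts are assumptions for illustration, not the study's analysis code.

```python
# Sketch: ICC between automated and manual bite counts per video.
# The data values and choice of ICC variant are illustrative assumptions.
import pandas as pd
import pingouin as pg

counts = pd.DataFrame({
    "video":  ["v1", "v1", "v2", "v2", "v3", "v3", "v4", "v4"],
    "method": ["manual", "auto"] * 4,
    "bites":  [42, 48, 35, 33, 51, 60, 28, 27],
})

icc = pg.intraclass_corr(data=counts, targets="video", raters="method", ratings="bites")
print(icc[["Type", "ICC"]])  # inspect e.g. ICC2 (two-way random, absolute agreement)
```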

It is also less intrusive than wearable sensors mounted on eyeglasses or bite-counter watches, which must be switched on and off and can disrupt the natural flow of eating. ByteTrack must still be started and stopped manually and is not yet optimized for real-time bite detection, but it remains less intrusive and closer to naturalistic observation than wearable systems.


Smartphone cameras could provide natural recordings in the future and, combined with platforms like ByteTrack, could be used at scale, provided data privacy can be ensured. Such applications would save enormous time and effort, and they eliminate sources of human error like fatigue, inexperience, and misinterpretation by applying the same criteria to every video. Further refinement is needed, however, before such platforms are ready for real-time use.

Conclusions

This pilot study demonstrates the feasibility of a scalable, automated tool for bite detection in children’s meals.

ByteTrack is the first automated system specifically developed to analyze pediatric eating behavior, and its moderate success is encouraging.

The limitations of this method were apparent, and newer techniques need to be devised to increase reliability in the presence of occlusions or high levels of movement. Future work is required to make the platform more robust across different populations and recording conditions.



