Papers - THI THI ZIN
-
Hybrid Embedded Feature Matching for Robust Dairy Cow Identification Using 3D Point Cloud Reviewed International journal
Pyae Phyo Kyaw, Pyke Tin, M. Aikawa, I. Kobayashi, Thi Thi Zin
Institute of Electrical and Electronics Engineers Inc., Conference Proceedings (ICCE-Taiwan 2026) 2026.7
Authorship:Last author, Corresponding author Language:English Publishing type:Research paper (international conference proceedings)
-
Placement-Free Multi-Camera Monitoring Using Skeletal and Spatial Information Reviewed International coauthorship International journal
Remon Nakashima, Thi Thi Zin, Wan-Jung Chang, Shinji Watanabe
Institute of Electrical and Electronics Engineers Inc., Conference Proceedings (ICCE-Taiwan 2026) 2026.7
Authorship:Corresponding author Language:English Publishing type:Research paper (international conference proceedings)
-
A Video-Based Framework for Non-Contact Neonatal Movement Analysis in Clinical Environments Reviewed International journal
Hiroki Matsumoto, Remon Nakashima, Thi Thi Zin, Yuki Kodama
Institute of Electrical and Electronics Engineers Inc., Conference Proceedings (ICCE-Taiwan 2026) 2026.7
Authorship:Corresponding author Language:English Publishing type:Research paper (international conference proceedings)
-
Non-contact Monitoring of Dystocia in Dairy Cows Using Keypoint Detection and Semantic Segmentation Reviewed International journal
T. Murayama, Thi Thi Zin, I. Kobayashi, M. Aikawa
The 2026 IEEE International Conference on Consumer Technology – Pacific (ICCT-Pacific 2026) 2026.3
Authorship:Corresponding author Language:English Publishing type:Research paper (international conference proceedings) Publisher:IEEE
In the dairy industry, labor shortages and the economic losses caused by calving accidents are significant issues. To address these problems, we propose a non-contact monitoring system using 360-degree cameras and deep learning techniques. This study focuses on constructing an automated workflow that detects cows, estimates their poses (standing or lying), and tracks individuals without attaching sensors to the animals. We employed YOLO11 for cow detection and keypoint extraction, and compared three approaches for pose estimation: a Multilayer Perceptron (MLP), a Gated Recurrent Unit (GRU), and semantic segmentation (DeepLabv3+). The experimental results showed that YOLO11 achieved high detection accuracy (mAP@0.50: 99.47%) for bounding boxes. For pose estimation, the semantic segmentation approach with a ResNet101 backbone achieved the highest accuracy of 85.1%, outperforming the keypoint-based methods. These results demonstrate the potential of the proposed system for basic behavioral monitoring in calving barns.
-
A Study on Supporting Neurocognitive Disorder Assessment for Deaf Individuals Using a Sign Language Recognition System Reviewed International journal
N. Shibahara, Thi Thi Zin, S. Ito, N. Takahashi, N. Takemoto
The 2026 IEEE International Conference on Consumer Technology – Pacific (ICCT-Pacific 2026) 2026.3
Authorship:Corresponding author Language:English Publishing type:Research paper (international conference proceedings) Publisher:IEEE
The Mini Mental State Examination (MMSE) is widely used for screening Neurocognitive Disorder (NCD); however, ensuring diagnostic accuracy for Deaf individuals remains a challenge due to factors such as the potential subjectivity and translation errors introduced by sign language interpreters. To address this issue, this study proposes an automated MMSE scoring system employing Japanese Sign Language (JSL) recognition based on skeletal keypoints. The proposed method utilizes MediaPipe Pose and Hands to extract feature points from examination videos and employs a Long Short-Term Memory (LSTM) model to classify sign language responses. Evaluation results using 5-fold cross-validation on a dataset of Deaf individuals demonstrated a high average classification accuracy of 92.75%. Furthermore, the system successfully performed automated scoring compliant with the MMSE protocol. These results indicate that the proposed system can enable objective cognitive assessment without interpreter intervention, thereby contributing to more accurate diagnoses for Deaf individuals.
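The skeletal pre-processing stage described in this abstract can be sketched as follows. This is a minimal illustration only, assuming MediaPipe's published landmark layouts (33 pose points, 21 per hand); the function names and the shoulder-based normalization scheme are assumptions for the sketch, not the authors' exact pipeline, and the downstream LSTM is omitted.

```python
import numpy as np

POSE_POINTS, HAND_POINTS = 33, 21
L_SHOULDER, R_SHOULDER = 11, 12  # MediaPipe Pose landmark indices

def frame_features(pose, left_hand, right_hand):
    """Normalize one frame of (x, y) landmarks into a flat feature vector.

    pose:       (33, 2) array of pose landmarks
    left_hand:  (21, 2) array (zeros if the hand is not detected)
    right_hand: (21, 2) array
    """
    pts = np.vstack([pose, left_hand, right_hand]).astype(float)
    center = (pose[L_SHOULDER] + pose[R_SHOULDER]) / 2.0   # torso anchor
    scale = np.linalg.norm(pose[L_SHOULDER] - pose[R_SHOULDER]) or 1.0
    return ((pts - center) / scale).ravel()                # (75*2,) vector

def sequence_tensor(frames):
    """Stack per-frame vectors into the (T, 150) sequence a classifier consumes."""
    return np.stack([frame_features(*f) for f in frames])
```

Normalizing by the shoulder midpoint and shoulder width makes the features invariant to the signer's position and distance from the camera, which is the usual motivation for this kind of pre-processing before sequence classification.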
-
Computer vision in precision livestock farming: AI-driven technologies and applications for sustainable animal production Invited Reviewed
Thi Thi Zin, Pyke Tin
Animal Bioscience 39 ( 4 ) 260165 2026.3
Authorship:Lead author, Corresponding author Language:English Publishing type:Research paper (scientific journal) Publisher:Asian Australasian Association of Animal Production Societies
The growing global demand for animal-derived food products is placing unprecedented pressure on livestock production systems to improve efficiency while also assuring animal welfare, environmental sustainability and economic viability. Precision livestock farming (PLF) has emerged as a transformative paradigm that integrates advanced sensing technologies, computer vision, Internet of Things infrastructures and artificial intelligence (AI) to enable continuous, automated and individualized animal monitoring. This paper explores the evolution of livestock management from conventional observation-based practices to sophisticated, data-driven architectures. It also synthesizes recent advancements in PLF, emphasizing its system architecture, key applications in cattle production, cross-sector expansion and emerging challenges. The core architecture of PLF is structured into three functional layers: (i) data acquisition through multi-modal sensors, with a primary emphasis in this review on visual and environmental monitoring systems; (ii) data analytics employing machine learning and deep learning techniques to establish behavioral and physiological baselines; and (iii) decision-support mechanisms that translate analytics into actionable farm management interventions. Major applications, including individual animal identification, body condition score estimation, lameness detection, calving time prediction and AI-powered health monitoring, are critically discussed. The extension of PLF principles to aquaculture and other livestock sectors is also discussed. By shifting from herd-level to individual-animal management, PLF provides a scalable, noninvasive approach for early disease detection, optimized resource utilization, improved welfare standards and long-term economic sustainability. The current limitations, including high capital investment, data interoperability challenges and model generalizability constraints, are analyzed, and future research directions emphasizing explainable AI and welfare-oriented system design are proposed. Overall, PLF represents a systemic transformation of animal agriculture, allowing for data-driven, sustainable and welfare-centered production systems.
DOI: 10.5713/ab.260165
-
Cattle lameness detection using depth image and deep learning Reviewed International journal
San Chain Tun, Pyke Tin, M. Aikawa, I. Kobayashi, Thi Thi Zin
Scientific reports 16 ( 1 ) 2026.3
Authorship:Last author, Corresponding author Language:English Publishing type:Research paper (scientific journal)
Lameness in cattle is a significant welfare and economic concern. To address this, we developed an end-to-end deep learning framework for 24/7 lameness monitoring using top-down depth images of cattle. The framework integrates three key stages: instance segmentation for detection, a custom multi-object tracking algorithm for identity preservation, and a spatio-temporal model for classification. We compared multiple instance segmentation models (Mask R-CNN, YOLOv8m-seg, YOLOv11m-seg) and evaluated three proposed tracking algorithm versions (PTAV1, PTAV2, and PTAV3). For classification, we tested multiple configurations integrating various pre-processing conditions (no filter, Gaussian, median), seven EfficientNet backbones (B1-B7), two temporal sequence lengths (5 and 7 frames), and a Long Short-Term Memory (LSTM) network to assign a lameness score from 1 (healthy) to 4 (lame) based on expert ground truth. In the detection model comparison, the YOLOv11m-seg model emerged as the top performer, achieving a BBox AP@50 of 99.38% and a Mask AP@50 of 99.26% at 75.49 FPS. Our proposed tracking algorithm, PTAV3, which leverages location and direction prediction, achieved an exceptional overall accuracy of 99.94% (95% CI: 99.7–100%). For classification, the best model, an EfficientNet-B7 + LSTM architecture, yielded an accuracy of 95.95% (95% CI: 94.8–97.1%) and an F1-score of 96.06% (95% CI: 94.8–97.1%) on unseen test data, using a 5-frame sequence with no pre-processing filter. This integrated system provides a robust, automated, and objective solution for lameness scoring, showcasing the potential for real-time animal welfare monitoring in agricultural settings.
-
Signal-based feature analysis of behavioral trajectories for predicting calving time and classifying assistance needs Reviewed International journal
Wai Hnin Eaindrar Mg, Pyke Tin, M. Aikawa, K. Honkawa, Y. Horii, Thi Thi Zin
Computers and Electronics in Agriculture 243 2026.3
Authorship:Last author, Corresponding author Language:English Publishing type:Research paper (scientific journal) Publisher:Elsevier B.V.
Accurately predicting calving time and recognizing when a cow needs help during delivery are essential for effective livestock management. These factors directly influence animal welfare, how labor is distributed on the farm, and overall productivity. Without close monitoring, calving complications can lead to serious health issues or even death for the cattle. Moreover, delayed assistance during difficult births (dystocia) can significantly harm both the cow and the calf. These problems remain challenging due to the subtle and highly variable nature of cattle behavior, especially within large-scale farming environments where continuous manual monitoring is impractical. This research proposes a fully vision-based, non-invasive system that relies solely on cattle trajectory data derived from images to address these challenges. To analyze signal-based behavioral trajectories associated with calving, we applied three signal-based image processing techniques aimed at predicting calving time and identifying individuals likely to require human assistance during parturition. Our system allows for continuous, automated monitoring using four surveillance cameras, eliminating the need for wearable sensors or invasive equipment. We employed three analytical approaches, namely amplitude analysis, frequency analysis, and power spectral density (PSD) analysis, to interpret cattle movement patterns from camera-derived trajectory data. For predicting calving time, our system achieved 100 % accuracy across all methods. Specifically, the amplitude analysis predicted calving within 9 h, the frequency analysis provided predictions within 5 h, and the PSD analysis predicted calving within 6 h. Moreover, in classifying cattle requiring human assistance during parturition, our system achieved accuracies of 60 %, 60 %, and 65 % for the amplitude, frequency, and PSD analyses, respectively.
Unlike conventional methods that rely on wearable sensors, manual observation, or AI models requiring extensive training, our prediction system operates without any model training phase, instead directly analyzing motion patterns from trajectory data to generate predictions. This makes our prediction simpler, more interpretable, and highly scalable, offering a practical and robust solution for improving livestock monitoring and timely intervention in modern farming environments. This work paves the way for further development of automated, non-invasive livestock monitoring technologies.
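The frequency/PSD side of the analysis described above can be illustrated with a simple periodogram: given a cow's 1-D trajectory displacement signal at a known sampling rate, estimate its spectrum and read off the dominant frequency. The sampling rate, windowing choice, and the synthetic signal are assumptions for this sketch, not the paper's data or parameters.

```python
import numpy as np

def periodogram_psd(signal, fs):
    """Return (freqs, psd) for a real-valued signal via an FFT periodogram."""
    x = np.asarray(signal, dtype=float)
    x = x - x.mean()                       # remove the DC offset
    n = len(x)
    spec = np.fft.rfft(x * np.hanning(n))  # Hann window reduces leakage
    psd = (np.abs(spec) ** 2) / (fs * n)
    return np.fft.rfftfreq(n, d=1.0 / fs), psd

def dominant_frequency(signal, fs):
    """Frequency of the strongest non-DC spectral component."""
    freqs, psd = periodogram_psd(signal, fs)
    return freqs[np.argmax(psd[1:]) + 1]   # skip the zero-frequency bin
```

Because this kind of analysis needs no trained model, it matches the abstract's point about a training-free, interpretable pipeline: rising restlessness before calving shows up directly as shifted dominant frequencies and increased band power.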
DOI: 10.1016/j.compag.2025.111301
Other Link: https://www.sciencedirect.com/science/article/pii/S0168169925014073
-
Deep Learning-Driven Intrusion Prediction System Using Ground-Plane Homography in Smart City Dynamic Zones Reviewed International journal
Cho Nilar Phyo, Thi Thi Zin, Pyke Tin
IET Smart Cities 8 ( 1 ) e70024 2026.1
Authorship:Corresponding author Language:English Publishing type:Research paper (scientific journal) Publisher:IET Smart Cities
In smart city environments, public safety increasingly depends on intelligent surveillance systems capable of adapting to dynamic and context-dependent access restrictions. Traditional systems often rely on static, predefined boundaries that fail to respond to rapidly changing environments such as construction sites, public gatherings or emergency situations. This paper introduces a novel deep learning-driven framework using ground-plane homography for real-time proactive intrusion prediction within these dynamically restricted zones (DRZs). Our method first employs deep learning to accurately detect and localise physical restriction markers (e.g., traffic cones). We then utilise ground-plane homography estimation to map these markers into a two-dimensional ground-plane perspective, precisely defining the spatial boundaries of the DRZ in real time. Beyond the reactive detection of the marker-delimited region, intrusion prediction is achieved through sophisticated human trajectory analysis and future path extrapolation. By forecasting a person's path and identifying projected future presence within the dynamic ground-plane zone, the system enables proactive alerts and adaptive security responses before an actual violation occurs. To the best of our knowledge, this is the first system capable of predicting intrusions into areas dynamically demarcated by visual restriction markers. The experimental results on real-world surveillance datasets demonstrate the system's effectiveness in identifying the presence of humans in DRZs, validating its potential for deployment in smart cities and critical infrastructure.
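The ground-plane mapping step can be sketched as follows: image-plane detections are projected through a 3x3 homography H (assumed already estimated, e.g. from calibration correspondences) into ground coordinates, where the zone polygon and a projected footpoint can be compared directly. The homography, polygon, and function names here are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def to_ground(H, pts):
    """Map (N, 2) image points through homography H to ground-plane points."""
    pts = np.asarray(pts, dtype=float)
    homog = np.hstack([pts, np.ones((len(pts), 1))])       # to homogeneous
    mapped = homog @ H.T
    return mapped[:, :2] / mapped[:, 2:3]                  # back to Cartesian

def in_zone(point, polygon):
    """Ray-casting point-in-polygon test on the ground plane."""
    x, y = point
    inside = False
    for (x1, y1), (x2, y2) in zip(polygon, polygon[1:] + polygon[:1]):
        if (y1 > y) != (y2 > y) and x < x1 + (y - y1) * (x2 - x1) / (y2 - y1):
            inside = not inside
    return inside
```

In use, the detected cone positions would define the polygon, and extrapolated future footpoints would be passed through `to_ground` and `in_zone` to raise an alert before the boundary is actually crossed.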
DOI: 10.1049/smc2.70024
-
GSGOA: Grouped and Scaled Gannet Optimization Algorithm Reviewed International coauthorship
Zhi Li, Shu-Chuan Chu, Thi Thi Zin, Junzo Watada, Jeng-Shyang Pan
Smart Innovation, Systems and Technologies (SIST) 466 1 - 12 2026.1
Language:English Publishing type:Research paper (international conference proceedings) Publisher:Smart Innovation Systems and Technologies
Swarm intelligence algorithms exhibit remarkable potential in addressing complex optimization problems. Nevertheless, numerous existing approaches, including the Gannet Optimization Algorithm (GOA), encounter difficulties like premature convergence and restricted exploitation capacity during later iterative phases. This study presents a refined variant termed GSGOA, integrating a global best-guided mechanism, a Gaussian-based adaptive grouping strategy, and a Laplace-distributed scaling factor. These enhancements target strengthening the equilibrium between exploration and exploitation, alongside boosting convergence stability. The proposed algorithm undergoes assessment using the CEC 2017 benchmark suite, and experimental findings reveal that GSGOA consistently surpasses classical algorithms such as GOA, SCA, BOA, and WOA in solution precision and robustness.
-
3D Camera-Based Estimation of Cattle Body Weight Reviewed International journal
S. Araki, K. Shiiya, Thi Thi Zin, I. Kobayashi
IEEE Conference Proceedings: 2025 IEEE 14th Global Conference on Consumer Electronics (GCCE) 1196 - 1197 2025.12
Language:English Publishing type:Research paper (international conference proceedings) Publisher:IEEE
Traditional methods for measuring cattle weight require special equipment and often involve physical contact with the animals, increasing the risk of accidents. As the workload for dairy farmers grows due to a decreasing workforce, there is a strong need for safer and more efficient solutions. In this study, we propose a contactless method to estimate cattle weight using a depth camera. This study differentiates itself from other studies by placing the camera above the cow, making the setup more versatile. We extracted depth images and calculated key body measurements: height, body length, and belly width. Based on these values, we created a regression formula to estimate weight. Our results show that it is possible to roughly estimate cattle weight using only the values obtained from depth images. This method reduces the risk of injuries during measurement and offers a more efficient way to manage cattle health and nutrition.
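The regression step can be sketched with ordinary least squares: fit weight as a linear function of the three depth-derived body measurements. The linear form and all coefficients below are illustrative assumptions for the sketch; the paper's actual regression formula is not reproduced.

```python
import numpy as np

def fit_weight_model(measurements, weights):
    """Fit weight ~ a*height + b*length + c*width + intercept by least squares.

    measurements: (N, 3) array of [height, body length, belly width]
    weights:      (N,) array of measured weights
    """
    X = np.hstack([measurements, np.ones((len(measurements), 1))])  # + intercept
    coeffs, *_ = np.linalg.lstsq(X, np.asarray(weights, float), rcond=None)
    return coeffs                                # [a, b, c, intercept]

def predict_weight(coeffs, measurement):
    """Apply the fitted linear model to one [height, length, width] triple."""
    h, length, w = measurement
    return coeffs[0] * h + coeffs[1] * length + coeffs[2] * w + coeffs[3]
```

A polynomial or volume-proxy term (e.g. length x width) is a common refinement of such models, but the plain linear form is enough to show the idea.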
-
A Conceptual Framework for Neonatal Motor Activity Monitoring Using Digital Twin Technology and Computer Vision: A Preliminary Study Reviewed International journal
R. Nakashima, H. Matsumoto, Thi Thi Zin, Y. Kodama
IEEE Conference Proceedings: 2025 IEEE 14th Global Conference on Consumer Electronics (GCCE) 1320 - 1321 2025.12
Authorship:Corresponding author Language:English Publishing type:Research paper (international conference proceedings) Publisher:IEEE
Continuous non-contact monitoring of neonatal motor activity in the neonatal intensive care unit (NICU) is crucial for early detection of neurological disorders and for guiding timely clinical interventions. We introduce an infrared-driven skeleton-estimation prototype designed for real-time operation that generates a live virtual "digital twin" of the infant’s posture to support clinician assessment. A deep-learning pose model was fine-tuned on a bespoke infrared key-point dataset, and three motion-quantification filters were evaluated: raw differencing (Method A), center-aligned suppression (Method B), and a newly proposed skeleton template-matching filter (Method C). Tests on a life-sized neonatal mannequin confirmed centimetric joint-localization accuracy, reliable detection of 50-pixel hand displacements, and reduction of simulated camera-shake artifacts to within five pixels. Building on these results, a follow-up evaluation on pre-term neonates showed that Method C suppressed static key-point noise by 78 % while preserving physiological motion. This combined mannequin and in-vivo evidence demonstrates the clinical feasibility of our infrared digital-twin framework and establishes a foundation for automated assessment of pre-term motor development.
-
Enhanced Multi-Person Tracking Method Based on ByteTrack Architecture Reviewed International journal
Cho Nilar Phyo, Thi Thi Zin, Pyke Tin
IEEE Conference Proceedings: 2025 IEEE 14th Global Conference on Consumer Electronics (GCCE) 1034 - 1035 2025.12
Language:English Publishing type:Research paper (international conference proceedings) Publisher:IEEE
This paper presents an enhanced tracking method built upon the ByteTrack architecture for efficient and robust multi-person tracking in challenging environments. The proposed system enhances ByteTrack with improved track management strategies, in particular addressing ByteTrack's issue of continually increasing track IDs. Experimental results on a multi-person tracking dataset demonstrate superior performance compared to the baseline approach. These findings suggest that the proposed enhancements make ByteTrack more suitable for crowded or dynamic scenes where precise person tracking is essential.
-
Deep Sequential Gait Feature Learning for Long-Term Person Re-Identification in Real-World Environments Reviewed International journal
Cho Nilar Phyo, Thi Thi Zin, Pyke Tin
IEEE Conference Proceedings: 2025 9th International Conference on Information Technology (InCIT) 838 - 844 2025.12
Language:English Publishing type:Research paper (international conference proceedings) Publisher:IEEE
This paper presents a novel gait-based framework for long-term person re-identification in real-world environments. Unlike appearance-based methods, which are often sensitive to illumination, clothing changes, and occlusion, our approach leverages gait dynamics captured via dense optical flow and deep feature learning. We integrate ResNet101 for spatial feature extraction and an LSTM network for temporal sequence modeling, enabling robust representation of human walking patterns across extended time periods. The experimental results on gait datasets demonstrate that the proposed system achieves strong recognition performance in terms of accuracy, mean Average Precision (mAP), and recall, as well as stability under challenging real-world conditions, highlighting its potential for surveillance and security applications.
-
An End-to-End Computer Vision Pipeline for Cow Ear Tag Number Recognition Using YOLOv11 and a Hybrid Efficientnet-NRTR Model Reviewed International journal
San Chain Tun, Pyke Tin, M. Aikawa, I. Kobayashi, Thi Thi Zin
IEEE Conference Proceedings: 2025 9th International Conference on Information Technology (InCIT) 860 - 866 2025.12
Authorship:Last author, Corresponding author Language:English Publishing type:Research paper (international conference proceedings) Publisher:IEEE
This study introduces a robust, real-time system for automatically identifying individual cows by reading the four-digit numbers on their ear tags. This system is a key part of precision livestock farming and operates through a multi-stage pipeline designed for accuracy and practical use in a real farm environment. The system's pipeline first uses a YOLOv11 model to detect and segment cow heads and ear tags from video feeds. A custom tracking algorithm then ensures that each cow's identity is maintained even with temporary occlusions or missed detections. This tracker uses a combination of techniques, including Intersection over Union (IoU) and frame-holding logic, to provide persistent identity assignments. Finally, an NRTR-based (Non-Recurrence Sequence-to-Sequence Model for Scene Text Recognition) OCR model with EfficientNet backbones reads the numbers from the cropped ear tag images. This model is trained to recognize digits from 0 to 9 and uses an "x" for any unreadable characters, ensuring reliable number sequence recognition. The system's performance was evaluated on a real-world dataset from a dairy farm. It achieved a high detection and tracking accuracy of 96.18%. The OCR component also demonstrated strong results, with the most advanced EfficientNet backbone (B7) achieving an impressive 95.38% accuracy. These findings confirm the system's high performance and reliability, offering a scalable and viable solution for automated cow monitoring in operational farm settings.
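The IoU-with-frame-holding idea described in this abstract can be sketched as follows: detections are matched greedily to existing tracks by IoU, and unmatched tracks are held for a few frames before being dropped, so a brief occlusion does not spawn a new identity. The thresholds and the greedy strategy are illustrative assumptions, not the paper's exact tracker.

```python
def iou(a, b):
    """Intersection over Union of two boxes in (x1, y1, x2, y2) form."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

class HoldTracker:
    def __init__(self, iou_thresh=0.3, max_hold=5):
        self.iou_thresh, self.max_hold = iou_thresh, max_hold
        self.tracks = {}          # id -> (box, frames_since_seen)
        self.next_id = 0

    def update(self, boxes):
        """Match this frame's boxes to tracks; return {track_id: box}."""
        assigned = {}
        free = dict(self.tracks)
        for box in boxes:
            best = max(free, key=lambda t: iou(free[t][0], box), default=None)
            if best is not None and iou(free[best][0], box) >= self.iou_thresh:
                assigned[best] = (box, 0)
                del free[best]
            else:
                assigned[self.next_id] = (box, 0)
                self.next_id += 1
        for tid, (box, age) in free.items():   # hold unmatched tracks briefly
            if age + 1 <= self.max_hold:
                assigned[tid] = (box, age + 1)
        self.tracks = assigned
        return {tid: box for tid, (box, age) in assigned.items() if age == 0}
```

The held track keeps its last box, so when the cow reappears within `max_hold` frames it re-matches by IoU and retains its original ID, which is what makes the downstream per-identity OCR aggregation reliable.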
-
AI-powered visual E-monitoring system for cattle health and wealth Reviewed International journal
Aung Si Thu Moe, Pyke Tin, M. Aikawa, I. Kobayashi, Thi Thi Zin
Smart Agricultural Technology 12 2025.12
Authorship:Last author, Corresponding author Language:English Publishing type:Research paper (scientific journal) Publisher:Elsevier B.V.
The livestock industry is experiencing a major transformation through the integration of artificial intelligence (AI) and advanced visual e-monitoring technologies. This study presents an AI-powered cattle health monitoring system that combines real-time computer vision, edge computing, and mobile applications to enhance animal welfare and farm productivity. The system employs a multi-camera setup, comprising RGB, RGB-D, and ToF depth cameras, strategically deployed across four functional zones of a cattle barn: the milking parlor, return lane, feeding area, and resting space. Through integrated deep learning algorithms, the platform performs key health-related tasks, including ear-tag, body-based, and face-based cattle identification, body condition scoring (BCS), lameness detection, feeding time estimation, and real-time localization. A farm-side desktop application processes live video streams from 22 cameras using multiprocessing, maintaining an average latency of 0.62 s per frame per camera. Captured data are stored in a structured MySQL database and accessed via a RESTful API by a user-side mobile application developed using Flutter and Clean Architecture. Experimental evaluation under continuous 24-hour operation demonstrated the system's stability and effectiveness in delivering actionable insights. Cattle identification achieved high accuracies: ear-tag 94.00 %, face-based 93.66 %, body-based 92.80 %, and body-color point cloud 99.55 %. The BCS prediction and lameness detection modules achieved average accuracies of 86.21 % and 88.88 %, respectively. Feedback from veterinarians and farm personnel during pilot testing confirmed its usability and practical relevance. While current limitations include computational demands and the need for improved model robustness, the proposed system establishes a scalable, non-invasive framework for intelligent livestock monitoring. 
It aligns with broader Green and Digital Transformation (GX and DX) initiatives toward sustainable smart farming practices.
DOI: 10.1016/j.atech.2025.101300
Other Link: https://www.sciencedirect.com/science/article/pii/S2772375525005313
-
Generating Accurate Activity Patterns for Cattle Farm Management Using MCMC Simulation of Multiple-Sensor Data System Reviewed International journal
Y. Hashimoto, Thi Thi Zin, Pyke Tin, I. Kobayashi, H. Hama
Sensors (Basel, Switzerland) 25 ( 21 ) 2025.11
Authorship:Corresponding author Language:English Publishing type:Research paper (scientific journal) Publisher:Multidisciplinary Digital Publishing Institute (MDPI)
This paper presents a novel Markov Chain Monte Carlo (MCMC) simulation model for analyzing multi-sensor data to enhance cattle farm management. As Precision Livestock Farming (PLF) systems become more widespread, leveraging data from technologies like 3D acceleration, pneumatic, and proximity sensors is crucial for deriving actionable insights into animal behavior. Our research addresses this need by demonstrating how MCMC can be used to accurately model and predict complex cattle activity patterns. We investigate the direct impact of these insights on optimizing key farm management areas, including feed allocation, early disease detection, and labor scheduling. Using a combination of controlled monthly experiments and the analysis of uncontrolled, real-world data, we validate our proposed approach. The results confirm that our MCMC simulation effectively processes diverse sensor inputs to generate reliable and detailed behavioral patterns. We find that this data-driven methodology provides significant advantages for developing informed management strategies, leading to improvements in the overall efficiency, productivity, and profitability of cattle operations. This work underscores the potential of using advanced statistical models like MCMC to transform multi-sensor data into tangible improvements for modern agriculture.
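The core simulation idea can be illustrated with a small Markov chain over a few activity states, sampled by Monte Carlo to generate behavioral patterns whose long-run state frequencies can be compared against sensor-derived baselines. The states and transition probabilities below are illustrative assumptions, not values fitted to the paper's sensor data.

```python
import numpy as np

STATES = ["lying", "standing", "feeding"]
P = np.array([
    [0.90, 0.08, 0.02],   # from lying
    [0.15, 0.70, 0.15],   # from standing
    [0.05, 0.25, 0.70],   # from feeding
])

def simulate(n_steps, start=0, seed=0):
    """Sample a state trajectory of length n_steps from the chain."""
    rng = np.random.default_rng(seed)
    states = [start]
    for _ in range(n_steps - 1):
        states.append(rng.choice(3, p=P[states[-1]]))
    return states

def empirical_distribution(states):
    """Fraction of time spent in each state over the sampled trajectory."""
    counts = np.bincount(states, minlength=3)
    return counts / counts.sum()
```

A deviation between an individual cow's observed time budget and the simulated baseline distribution is the kind of signal the paper uses to flag candidates for early disease detection or feed adjustment.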
DOI: 10.3390/s25216781
-
A study on action recognition for the elderly using depth camera Reviewed International journal
R. Nakashima, Thi Thi Zin, H. Tamura, S. Watanabe, E. Chosa
Institute of Electrical and Electronics Engineers Inc., Conference Proceedings (ICCE-Taiwan 2025) 119 - 120 2025.10
Authorship:Corresponding author Language:English Publishing type:Research paper (international conference proceedings) Publisher:ICCE Taiwan 2025 12th IEEE International Conference on Consumer Electronics Taiwan Generative AI in Innovative Consumer Technology Proceedings
In this study, a depth camera-based system is proposed to achieve non-contact, privacy-preserving action recognition using human skeleton recognition. Specifically, human regions are first extracted using bounding box (BB) detection, followed by action recognition based on keypoint-based pose estimation. The estimated keypoints capture detailed joint positions, and their structural relationships are modeled with a Graph Convolutional Network (GCN). Furthermore, a Transformer is employed to capture the temporal features of the skeletal data. This keypoint-centric pipeline differentiates our approach from conventional, silhouette-level methods and significantly enhances the granularity of action recognition.
DOI: 10.1109/ICCE-Taiwan66881.2025.11208107
Other Link: https://ieeexplore.ieee.org/document/11208107
-
Optimizing Network Message Regulations Using AI-Enhanced Dynamic Programming Methods Reviewed International journal
Thi Thi Zin, Tunn Cho Lwin, H. Hama, Pyke Tin
Institute of Electrical and Electronics Engineers Inc., Conference Proceedings (ICCE-Taiwan 2025) 121 - 122 2025.10
Authorship:Lead author, Corresponding author Language:English Publishing type:Research paper (international conference proceedings) Publisher:ICCE Taiwan 2025 12th IEEE International Conference on Consumer Electronics Taiwan Generative AI in Innovative Consumer Technology Proceedings
Network message transmission efficiency faces increasing challenges in multi-server systems due to complex traffic patterns and resource allocation demands. This paper presents an AI-enhanced dynamic programming approach for optimizing message flow regulations. By formulating the problem as a Markov Decision Process (MDP) and integrating reinforcement learning techniques, we develop an adaptive framework for network message regulation. Experimental results show our approach achieves a 25% reduction in queue length and a 30% improvement in resource utilization compared to conventional methods.
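A toy version of the MDP formulation can be sketched as a bounded message queue where the controller admits or rejects each arrival, trading holding cost against rejection cost, with value iteration computing the regulation policy. All rates, costs, and the queue bound are illustrative assumptions, and the paper's reinforcement-learning integration is not reproduced here.

```python
import numpy as np

CAP = 5                      # maximum queue length
ARRIVE, SERVE = 0.6, 0.5     # per-step arrival / service probabilities
HOLD, REJECT, GAMMA = 1.0, 5.0, 0.95

def step_distribution(q, admit):
    """Next-queue-length distribution {q': prob} for one slotted time step."""
    dist = {}
    for arrived, pa in ((1, ARRIVE), (0, 1 - ARRIVE)):
        for served, ps in ((1, SERVE), (0, 1 - SERVE)):
            nq = q + (1 if arrived and admit and q < CAP else 0)
            nq -= 1 if served and nq > 0 else 0
            dist[nq] = dist.get(nq, 0.0) + pa * ps
    return dist

def action_cost(q, admit, V):
    """Expected one-step cost plus discounted cost-to-go for an action."""
    rejected = ARRIVE * REJECT * (0 if admit and q < CAP else 1)
    future = sum(p * V[nq] for nq, p in step_distribution(q, admit).items())
    return HOLD * q + rejected + GAMMA * future

def value_iteration(sweeps=400):
    """Return the converged value function and the greedy admit/reject policy."""
    V = np.zeros(CAP + 1)
    for _ in range(sweeps):
        V = np.array([min(action_cost(q, a, V) for a in (0, 1))
                      for q in range(CAP + 1)])
    policy = [int(action_cost(q, 1, V) <= action_cost(q, 0, V))
              for q in range(CAP + 1)]
    return V, policy
```

With these numbers the policy admits while the queue is short, since an admitted message's expected holding cost over its sojourn is well below the rejection penalty; a learned (reinforcement-learning) agent would approximate the same trade-off without enumerating the transition model.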
DOI: 10.1109/ICCE-Taiwan66881.2025.11208087
Other Link: https://ieeexplore.ieee.org/document/11208087
-
Machine Learning-Based Classification of Umbilical Cord Blood Gas Using Fetal Heart Rate Variability Reviewed International journal
Tunn Cho Lwin, Thi Thi Zin, Pyke Tin, E. Kino, T. Ikenoue
Institute of Electrical and Electronics Engineers Inc., Conference Proceedings (ICCE-Taiwan 2025) 117 - 118 2025.10
Authorship:Corresponding author Language:English Publishing type:Research paper (international conference proceedings) Publisher:ICCE Taiwan 2025 12th IEEE International Conference on Consumer Electronics Taiwan Generative AI in Innovative Consumer Technology Proceedings
Fetal heart rate variability (FHRV) is a key indicator of fetal well-being and has potential in predicting umbilical cord blood gas, an essential biomarker for fetal health assessment. Machine learning techniques can enhance fetal pH classification using FHRV features. This study aims to develop a machine learning-based classification model for fetal pH levels, leveraging FHRV data to support early risk detection during childbirth. To achieve this, we classify fetal pH into two categories using Mahalanobis Distance, Support Vector Machine (SVM), and k-Nearest Neighbors (kNN) based on statistical FHRV features. Model performance was evaluated using standard evaluation metrics for both training and testing datasets. Among the classifiers, kNN demonstrated the most balanced performance between sensitivity and specificity, while SVM showed limited generalizability due to poor sensitivity for abnormal pH cases.
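The kNN branch of the comparison can be sketched on statistical FHRV features. The feature set (mean, standard deviation, and an RMSSD-style short-term variability measure of beat-to-beat intervals), the value of k, and the synthetic data are illustrative assumptions, not the study's exact features or patient data.

```python
import numpy as np

def fhrv_features(intervals):
    """Summarize a beat-to-beat interval series into simple statistics."""
    x = np.asarray(intervals, dtype=float)
    rmssd = np.sqrt(np.mean(np.diff(x) ** 2))   # short-term variability
    return np.array([x.mean(), x.std(), rmssd])

def knn_predict(train_X, train_y, sample, k=3):
    """Majority vote among the k nearest training samples (Euclidean)."""
    d = np.linalg.norm(train_X - sample, axis=1)
    votes = np.asarray(train_y)[np.argsort(d)[:k]]
    return int(np.bincount(votes).argmax())
```

Reduced variability is the classic marker distinguishing compromised from normal recordings, so even these crude summary statistics separate the two synthetic classes; the study's reported sensitivity/specificity balance for kNN reflects the same geometry on real FHRV features.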
DOI: 10.1109/ICCE-Taiwan66881.2025.11208140
Other Link: https://ieeexplore.ieee.org/document/11208140