Papers - THI THI ZIN
-
Non-contact Monitoring of Dystocia in Dairy Cows Using Keypoint Detection and Semantic Segmentation Reviewed International journal
T. Murayama, Thi Thi Zin, I. Kobayashi, M. Aikawa
The 2026 IEEE International Conference on Consumer Technology – Pacific (ICCT-Pacific 2026) 2026.3
Authorship:Corresponding author Language:English Publishing type:Research paper (international conference proceedings) Publisher:IEEE
In the dairy industry, labor shortages and the economic losses caused by calving accidents are significant issues. To address these problems, we propose a non-contact monitoring system using 360-degree cameras and deep learning techniques. This study focuses on constructing an automated workflow that detects cows, estimates their poses (standing or lying), and tracks individuals without attaching sensors to the animals. We employed YOLO11 for cow detection and keypoint extraction, and compared three models for pose estimation: Multilayer Perceptron (MLP), Gated Recurrent Unit (GRU), and Semantic Segmentation (Deeplabv3+). The experimental results showed that YOLO11 achieved a high detection accuracy (mAP@0.50: 99.47%) for bounding boxes. For pose estimation, the semantic segmentation approach with a ResNet101 backbone achieved the highest accuracy of 85.1%, outperforming keypoint-based methods. These results demonstrate the potential of the proposed system for basic behavioral monitoring in calving barns.
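The paper's standing/lying decision is made by learned models (MLP, GRU, or segmentation), but the geometric intuition behind keypoint-based posture estimation can be sketched with a simple rule of thumb; the 0.45 aspect-ratio threshold below is a hypothetical illustration, not a value from the paper:

```python
def posture_from_keypoints(keypoints):
    """Classify a cow as 'standing' or 'lying' from 2-D keypoints.

    keypoints: list of (x, y) joint positions in image coordinates.
    Heuristic: a lying cow's keypoints span a wider, flatter region,
    so we threshold the height/width ratio of the keypoint bounding box.
    """
    xs = [p[0] for p in keypoints]
    ys = [p[1] for p in keypoints]
    width = max(xs) - min(xs)
    height = max(ys) - min(ys)
    if width == 0:
        return "standing"
    return "lying" if height / width < 0.45 else "standing"
```

In practice such a rule serves only as a baseline against which the learned classifiers are compared.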
-
A Study on Supporting Neurocognitive Disorder Assessment for Deaf Individuals Using a Sign Language Recognition System Reviewed International journal
N. Shibahara, Thi Thi Zin, S. Ito, N. Takahashi, N. Takemoto
The 2026 IEEE International Conference on Consumer Technology – Pacific (ICCT-Pacific 2026) 2026.3
Authorship:Corresponding author Language:English Publishing type:Research paper (international conference proceedings) Publisher:IEEE
The Mini Mental State Examination (MMSE) is widely used for screening Neurocognitive Disorder (NCD); however, ensuring diagnostic accuracy for Deaf individuals remains a challenge due to factors such as the potential subjectivity and translation errors introduced by sign language interpreters. To address this issue, this study proposes an automated MMSE scoring system employing Japanese Sign Language (JSL) recognition based on skeletal keypoints. The proposed method utilizes MediaPipe Pose and Hands to extract feature points from examination videos and employs a Long Short-Term Memory (LSTM) model to classify sign language responses. Evaluation results using 5-fold cross-validation on a dataset of Deaf individuals demonstrated a high average classification accuracy of 92.75%. Furthermore, the system successfully performed automated scoring compliant with the MMSE protocol. These results indicate that the proposed system can enable objective cognitive assessment without interpreter intervention, thereby contributing to more accurate diagnoses for Deaf individuals.
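As a minimal sketch of the kind of preprocessing a skeletal-keypoint pipeline typically needs before an LSTM, the following normalizes MediaPipe-style (x, y) keypoints per frame; the reference-joint index and max-coordinate scaling are illustrative assumptions, not the paper's exact procedure:

```python
def normalize_sequence(frames, ref_idx=0):
    """Normalize a sequence of skeletal keypoints for sequence models.

    frames: list of frames; each frame is a list of (x, y) keypoints.
    Each frame is translated so the reference joint sits at the origin,
    then scaled by the largest absolute coordinate so signing size and
    camera distance cancel out before classification.
    """
    out = []
    for frame in frames:
        rx, ry = frame[ref_idx]
        shifted = [(x - rx, y - ry) for x, y in frame]
        scale = max(max(abs(x), abs(y)) for x, y in shifted) or 1.0
        out.append([(x / scale, y / scale) for x, y in shifted])
    return out
```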
-
Signal-based feature analysis of behavioral trajectories for predicting calving time and classifying assistance needs Reviewed International journal
Wai Hnin Eaindrar Mg, Pyke Tin, M. Aikawa, K. Honkawa, Y. Horii, Thi Thi Zin
Computers and Electronics in Agriculture 243 2026.3
Authorship:Last author, Corresponding author Language:English Publishing type:Research paper (scientific journal) Publisher:Elsevier B.V.
Accurately predicting calving time and recognizing when a cow needs help during delivery are essential for effective livestock management. These factors directly influence animal welfare, how labor is distributed on the farm, and overall productivity. Without close monitoring, calving complications can lead to serious health issues or even death for the cattle. Moreover, delayed assistance during difficult births (dystocia) can significantly harm both the cow and the calf. These problems remain challenging due to the subtle and highly variable nature of cattle behavior, especially within large-scale farming environments where continuous manual monitoring is impractical. This research proposes a fully vision-based, non-invasive system that relies solely on cattle trajectory data derived from images to address these challenges. To analyze signal-based behavioral trajectories associated with calving, we applied three signal-based image processing techniques aimed at predicting calving time and identifying individuals likely to require human assistance during parturition. Our system allows for continuous, automated monitoring using four surveillance cameras, eliminating the need for wearable sensors or invasive equipment. We employed three analytical approaches, namely amplitude analysis, frequency analysis, and power spectral density (PSD) analysis, to interpret cattle movement patterns from camera-derived trajectory data. For predicting calving time, our system achieved 100 % accuracy across all methods. Specifically, the amplitude analysis predicted calving within 9 h, the frequency analysis provided predictions within 5 h, and the PSD analysis predicted calving within 6 h. Moreover, in classifying cattle requiring human assistance during parturition, our system achieved accuracies of 60 %, 60 %, and 65 % for the amplitude, frequency, and PSD analyses, respectively.
Unlike conventional methods that rely on wearable sensors, manual observation, or AI models requiring extensive training, our prediction system operates without any model training phase, instead directly analyzing motion patterns from trajectory data to generate predictions. This makes our prediction simpler, more interpretable, and highly scalable, offering a practical and robust solution for improving livestock monitoring and timely intervention in modern farming environments. This work paves the way for further development of automated, non-invasive livestock monitoring technologies.
DOI: 10.1016/j.compag.2025.111301
Other Link: https://www.sciencedirect.com/science/article/pii/S0168169925014073
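The PSD analysis mentioned in this entry can be illustrated with a stdlib-only periodogram; this naive O(N²) DFT sketch is for intuition only and is not the authors' implementation:

```python
import math

def power_spectral_density(signal):
    """Naive periodogram: power of each DFT bin of a 1-D trajectory signal.

    Returns |X_k|^2 / N for k = 0 .. N-1, computed with a direct
    O(N^2) DFT (fine for short behavioral windows).
    """
    n = len(signal)
    psd = []
    for k in range(n):
        re = sum(s * math.cos(-2 * math.pi * k * t / n) for t, s in enumerate(signal))
        im = sum(s * math.sin(-2 * math.pi * k * t / n) for t, s in enumerate(signal))
        psd.append((re * re + im * im) / n)
    return psd

def dominant_frequency_bin(signal):
    """Index of the strongest non-DC component (first spectrum half)."""
    psd = power_spectral_density(signal)
    half = psd[1:len(psd) // 2 + 1]
    return 1 + max(range(len(half)), key=half.__getitem__)
```

A pure oscillation at bin k of an N-sample window peaks at exactly that bin, which is how a dominant movement rhythm would surface in trajectory data.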
-
Deep Learning-Driven Intrusion Prediction System Using Ground-Plane Homography in Smart City Dynamic Zones Reviewed International journal
Cho Nilar Phyo, Thi Thi Zin, Pyke Tin
IET Smart Cities 8 ( 1 ) 2026.1
Authorship:Corresponding author Language:English Publishing type:Research paper (scientific journal) Publisher:IET Smart Cities
In smart city environments, public safety increasingly depends on intelligent surveillance systems capable of adapting to dynamic and context-dependent access restrictions. Traditional systems often rely on static, predefined boundaries that fail to respond to rapidly changing environments such as construction sites, public gatherings or emergency situations. This paper introduces a novel deep learning-driven framework using ground-plane homography for real-time proactive intrusion prediction within these dynamically restricted zones (DRZs). Our method first employs deep learning to accurately detect and localise physical restriction markers (e.g., traffic cones). We then utilise ground-plane homography estimation to accurately map these markers into a two-dimensional ground-plane perspective, precisely defining the spatial boundaries of the DRZ in real-time. Once the restriction-marker region has been detected, intrusion prediction is achieved through sophisticated human trajectory analysis and future path extrapolation. By forecasting a person's path and identifying projected future presence within the dynamic ground-plane zone, the system enables proactive alerts and adaptive security responses before an actual violation occurs. To the best of our knowledge, this is the first system capable of predicting intrusions into areas dynamically demarcated by visual restriction markers. The experimental results on real-world surveillance datasets demonstrate the system's effectiveness in identifying the presence of humans in DRZs, validating its potential for deployment in smart cities and critical infrastructure.
DOI: 10.1049/smc2.70024
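The core idea of forecasting a person's path and testing it against a ground-plane zone can be sketched as linear extrapolation plus a point-in-polygon test; the two-point velocity model and five-step horizon below are simplifying assumptions, not the paper's trajectory model:

```python
def extrapolate(track, steps):
    """Linearly extrapolate future positions from the last two points."""
    (x0, y0), (x1, y1) = track[-2], track[-1]
    vx, vy = x1 - x0, y1 - y0
    return [(x1 + vx * s, y1 + vy * s) for s in range(1, steps + 1)]

def inside_polygon(pt, poly):
    """Ray-casting point-in-polygon test on the ground plane."""
    x, y = pt
    inside = False
    for i in range(len(poly)):
        (x1, y1), (x2, y2) = poly[i], poly[(i + 1) % len(poly)]
        if (y1 > y) != (y2 > y) and x < x1 + (y - y1) * (x2 - x1) / (y2 - y1):
            inside = not inside
    return inside

def predicts_intrusion(track, zone, horizon=5):
    """Alert if any extrapolated position falls inside the dynamic zone."""
    return any(inside_polygon(p, zone) for p in extrapolate(track, horizon))
```

Here the zone polygon stands in for the marker-derived DRZ boundary after homography mapping.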
-
GSGOA: Grouped and Scaled Gannet Optimization Algorithm Reviewed International coauthorship
Zhi Li, Shu-Chuan Chu, Thi Thi Zin, Junzo Watada, Jeng-Shyang Pan
Smart Innovation Systems and Technologies 466 SIST 1 - 12 2026.1
Language:English Publishing type:Research paper (international conference proceedings) Publisher:Smart Innovation Systems and Technologies
Swarm intelligence algorithms exhibit remarkable potential in addressing complex optimization problems. Nevertheless, numerous existing approaches, including the Gannet Optimization Algorithm (GOA), encounter difficulties such as premature convergence and restricted exploitation capacity during later iterative phases. This study presents a refined variant termed GSGOA, integrating a global best-guided mechanism, a Gaussian-based adaptive grouping strategy, and a Laplace-distributed scaling factor. These enhancements aim to strengthen the balance between exploration and exploitation and to improve convergence stability. The proposed algorithm is assessed on the CEC 2017 benchmark suite, and experimental findings reveal that GSGOA consistently surpasses classical algorithms such as GOA, SCA, BOA, and WOA in solution precision and robustness.
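A Laplace-distributed scaling factor can be drawn with the standard inverse-CDF transform; this is a generic sampling sketch, not code from the GSGOA authors:

```python
import math
import random

def laplace_sample(rng, mu=0.0, b=1.0):
    """Draw a Laplace(mu, b) variate by inverse-CDF transform.

    With u ~ Uniform(-0.5, 0.5):  x = mu - b * sign(u) * ln(1 - 2|u|).
    Heavy tails relative to a Gaussian give occasional large scaling
    factors, which is what such operators exploit to escape local optima.
    """
    u = rng.random() - 0.5
    return mu - b * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))
```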
-
3D Camera-Based Estimation of Cattle Body Weight Reviewed International journal
S. Araki, K. Shiiya, Thi Thi Zin, I. Kobayashi
IEEE Conference Proceedings: 2025 IEEE 14th Global Conference on Consumer Electronics (GCCE) 1196 - 1197 2025.12
Language:English Publishing type:Research paper (international conference proceedings) Publisher:IEEE
Traditional methods for measuring cattle weight require special equipment and often involve physical contact with the animals, increasing the risk of accidents. As the workload for dairy farmers grows due to a decreasing workforce, there is a strong need for safer and more efficient solutions. In this study, we propose a contactless method to estimate cattle weight using a depth camera. This study differentiates itself from other studies by placing the camera above the cow, making it more versatile. We extracted depth images and calculated key body measurements: height, body length, and belly width. Based on these values, we created a regression formula to estimate weight. Our results show that it is possible to estimate cattle weight roughly using only the values obtained from depth images. This method reduces the risk of injuries during measurement and offers a more efficient way to manage cattle health and nutrition.
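A regression formula of the kind described, weight from height, body length, and belly width, can be fitted by ordinary least squares; the stdlib-only sketch below solves the normal equations with Gauss-Jordan elimination, and the test uses synthetic coefficients rather than the paper's fitted values:

```python
def fit_linear(X, y):
    """Ordinary least squares via the normal equations (A^T A) b = A^T y.

    X: list of feature rows (e.g. height, body length, belly width);
    an intercept column is prepended. Solved with Gauss-Jordan
    elimination, which is fine for a handful of features.
    """
    A = [[1.0] + list(row) for row in X]
    n = len(A[0])
    M = [[sum(A[r][i] * A[r][j] for r in range(len(A))) for j in range(n)]
         + [sum(A[r][i] * y[r] for r in range(len(A)))] for i in range(n)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(n):
            if r != col and M[col][col]:
                f = M[r][col] / M[col][col]
                M[r] = [a - f * b for a, b in zip(M[r], M[col])]
    return [M[i][n] / M[i][i] for i in range(n)]

def predict(coef, row):
    """Apply the fitted formula to a new measurement row."""
    return coef[0] + sum(c * v for c, v in zip(coef[1:], row))
```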
-
A Conceptual Framework for Neonatal Motor Activity Monitoring Using Digital Twin Technology and Computer Vision: A Preliminary Study Reviewed International journal
R. Nakashima, H. Matsumoto, Thi Thi Zin, Y. Kodama
IEEE Conference Proceedings: 2025 IEEE 14th Global Conference on Consumer Electronics (GCCE) 1320 - 1321 2025.12
Authorship:Corresponding author Language:English Publishing type:Research paper (international conference proceedings) Publisher:IEEE
Continuous non-contact monitoring of neonatal motor activity in the neonatal intensive care unit (NICU) is crucial for early detection of neurological disorders and for guiding timely clinical interventions. We introduce an infrared-driven skeleton-estimation prototype designed for real-time operation that generates a live virtual "digital twin" of the infant’s posture to support clinician assessment. A deep-learning pose model was fine-tuned on a bespoke infrared key-point dataset, and three motion-quantification filters were evaluated: raw differencing (Method A), center-aligned suppression (Method B), and a newly proposed skeleton template-matching filter (Method C). Tests on a life-sized neonatal mannequin confirmed centimetric joint-localization accuracy, reliable detection of 50-pixel hand displacements, and reduction of simulated camera-shake artifacts to within five pixels. Building on these results, a follow-up evaluation on pre-term neonates showed that Method C suppressed static key-point noise by 78 % while preserving physiological motion. This combined mannequin and in-vivo evidence demonstrates the clinical feasibility of our infrared digital-twin framework and establishes a foundation for automated assessment of pre-term motor development.
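The raw-differencing idea behind Method A, with a noise floor to suppress static key-point jitter, can be sketched as follows; the 2-pixel threshold is a hypothetical stand-in for the filters evaluated in the paper:

```python
def motion_energy(frames, noise_floor=2.0):
    """Quantify motion from per-frame skeletal keypoints.

    frames: list of frames, each a list of (x, y) joints.
    Raw differencing sums joint displacement between consecutive frames;
    displacements below `noise_floor` pixels are treated as jitter on a
    static joint and suppressed, so only physiological motion accumulates.
    """
    energy = 0.0
    for prev, cur in zip(frames, frames[1:]):
        for (x0, y0), (x1, y1) in zip(prev, cur):
            d = ((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5
            if d >= noise_floor:
                energy += d
    return energy
```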
-
Enhanced Multi-Person Tracking Method Based on ByteTrack Architecture Reviewed International journal
Cho Nilar Phyo, Thi Thi Zin, Pyke Tin
IEEE Conference Proceedings: 2025 IEEE 14th Global Conference on Consumer Electronics (GCCE) 1034 - 1035 2025.12
Language:English Publishing type:Research paper (international conference proceedings) Publisher:IEEE
This paper presents an enhanced tracking method built upon the ByteTrack architecture for efficient and robust multi-person tracking in challenging environments. The proposed system enhances ByteTrack with improved track management strategies, in particular addressing ByteTrack's tendency to keep creating new track IDs. Experimental results on a multi-person tracking dataset demonstrate superior performance compared to the baseline approach. These findings suggest that the proposed enhancements make ByteTrack more suitable for crowded or dynamic scenes where precise person tracking is essential.
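A simplified view of IoU-based track management with frame-holding, the kind of strategy used to curb runaway ID creation, might look like this; the thresholds are illustrative, and real ByteTrack additionally performs two-stage association by detection score:

```python
def iou(a, b):
    """IoU of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def associate(tracks, detections, thresh=0.3, max_missed=5):
    """Greedy IoU matching; unmatched tracks are held for a few frames
    instead of being dropped, which limits needless new-ID creation.

    tracks: dict id -> {'box': box, 'missed': int}; mutated in place.
    Returns the set of track ids matched to a detection this frame.
    """
    matched, used = set(), set()
    for tid, tr in sorted(tracks.items()):
        best, best_j = thresh, None
        for j, det in enumerate(detections):
            if j not in used and iou(tr['box'], det) >= best:
                best, best_j = iou(tr['box'], det), j
        if best_j is not None:
            tr['box'], tr['missed'] = detections[best_j], 0
            used.add(best_j)
            matched.add(tid)
        else:
            tr['missed'] += 1
    for tid in [t for t, tr in tracks.items() if tr['missed'] > max_missed]:
        del tracks[tid]
    return matched
```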
-
An End-to-End Computer Vision Pipeline for Cow Ear Tag Number Recognition Using YOLOv11 and a Hybrid Efficientnet-NRTR Model Reviewed International journal
San Chain Tun, Pyke Tin, M. Aikawa, I. Kobayashi, Thi Thi Zin
IEEE Conference Proceedings: 2025 9th International Conference on Information Technology (InCIT) 860 - 866 2025.12
Authorship:Last author, Corresponding author Language:English Publishing type:Research paper (international conference proceedings) Publisher:IEEE
This study introduces a robust, real-time system for automatically identifying individual cows by reading the four-digit numbers on their ear tags. This system is a key part of precision livestock farming and operates through a multi-stage pipeline designed for accuracy and practical use in a real farm environment. The system's pipeline first uses a YOLOv11 model to detect and segment cow heads and ear tags from video feeds. A custom tracking algorithm then ensures that each cow's identity is maintained even with temporary occlusions or missed detections. This tracker uses a combination of techniques, including Intersection over Union (IoU) and frame-holding logic, to provide persistent identity assignments. Finally, an NRTR (No-Recurrence Sequence-to-Sequence Model for Scene Text Recognition)-based OCR model with EfficientNet backbones reads the numbers from the cropped ear tag images. This model is trained to recognize digits from 0 to 9 and uses an “x” for any unreadable characters, ensuring reliable number sequence recognition. The system's performance was evaluated on a real-world dataset from a dairy farm. It achieved a high detection and tracking accuracy of 96.18 %. The OCR component also demonstrated strong results, with the most advanced EfficientNet backbone (B7) achieving an impressive 95.38% accuracy. These findings confirm the system's high performance and reliability, offering a scalable and viable solution for automated cow monitoring in operational farm settings.
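The "x-for-unreadable" convention makes it easy to fuse multiple per-frame OCR reads of the same tracked tag; the majority-vote sketch below is a plausible fusion step, not necessarily the authors' exact logic:

```python
from collections import Counter

def consensus_tag(reads, unknown="x"):
    """Fuse per-frame OCR reads of a 4-digit ear tag into one number.

    reads: list of 4-character strings where `unknown` marks an
    unreadable digit. Each position is decided by majority vote over
    the readable characters; if no frame could read a position, the
    `unknown` marker is kept.
    """
    result = []
    for pos in range(4):
        votes = Counter(r[pos] for r in reads if r[pos] != unknown)
        result.append(votes.most_common(1)[0][0] if votes else unknown)
    return "".join(result)
```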
-
Deep Sequential Gait Feature Learning for Long-Term Person Re-Identification in Real-World Environments Reviewed International journal
Cho Nilar Phyo, Thi Thi Zin, Pyke Tin
IEEE Conference Proceedings: 2025 9th International Conference on Information Technology (InCIT) 838 - 844 2025.12
Language:English Publishing type:Research paper (international conference proceedings) Publisher:IEEE
This paper presents a novel gait-based framework for long-term person re-identification in real-world environments. Unlike appearance-based methods, which are often sensitive to illumination, clothing changes, and occlusion, our approach leverages gait dynamics captured via dense optical flow and deep feature learning. We integrate ResNet101 for spatial feature extraction and an LSTM network for temporal sequence modeling, enabling robust representation of human walking patterns across extended time periods. The experimental results on gait datasets demonstrate that the proposed system achieves strong recognition performance in terms of accuracy, mean Average Precision (mAP), and recall, together with stability under challenging real-world conditions, highlighting its potential for surveillance and security applications.
-
AI-powered visual E-monitoring system for cattle health and wealth Reviewed International journal
Aung Si Thu Moe, Pyke Tin, M. Aikawa, I. Kobayashi, Thi Thi Zin
Smart Agricultural Technology 12 2025.12
Authorship:Last author, Corresponding author Language:English Publishing type:Research paper (scientific journal) Publisher:Elsevier B.V.
The livestock industry is experiencing a major transformation through the integration of artificial intelligence (AI) and advanced visual e-monitoring technologies. This study presents an AI-powered cattle health monitoring system that combines real-time computer vision, edge computing, and mobile applications to enhance animal welfare and farm productivity. The system employs a multi-camera setup, comprising RGB, RGB-D, and ToF depth cameras, strategically deployed across four functional zones of a cattle barn: the milking parlor, return lane, feeding area, and resting space. Through integrated deep learning algorithms, the platform performs key health-related tasks, including ear-tag, body-based, and face-based cattle identification, body condition scoring (BCS), lameness detection, feeding time estimation, and real-time localization. A farm-side desktop application processes live video streams from 22 cameras using multiprocessing, maintaining an average latency of 0.62 s per frame per camera. Captured data are stored in a structured MySQL database and accessed via a RESTful API by a user-side mobile application developed using Flutter and Clean Architecture. Experimental evaluation under continuous 24-hour operation demonstrated the system's stability and effectiveness in delivering actionable insights. Cattle identification achieved high accuracies: ear-tag 94.00 %, face-based 93.66 %, body-based 92.80 %, and body-color point cloud 99.55 %. The BCS prediction and lameness detection modules achieved average accuracies of 86.21 % and 88.88 %, respectively. Feedback from veterinarians and farm personnel during pilot testing confirmed its usability and practical relevance. While current limitations include computational demands and the need for improved model robustness, the proposed system establishes a scalable, non-invasive framework for intelligent livestock monitoring. 
It aligns with broader Green and Digital Transformation (GX and DX) initiatives toward sustainable smart farming practices.
DOI: 10.1016/j.atech.2025.101300
Other Link: https://www.sciencedirect.com/science/article/pii/S2772375525005313
-
Generating Accurate Activity Patterns for Cattle Farm Management Using MCMC Simulation of Multiple-Sensor Data System Reviewed International journal
Y. Hashimoto, Thi Thi Zin, Pyke Tin, I. Kobayashi, H. Hama
Sensors (Basel, Switzerland) 25 ( 21 ) 2025.11
Authorship:Corresponding author Language:English Publishing type:Research paper (scientific journal) Publisher:Multidisciplinary Digital Publishing Institute (MDPI)
This paper presents a novel Markov Chain Monte Carlo (MCMC) simulation model for analyzing multi-sensor data to enhance cattle farm management. As Precision Livestock Farming (PLF) systems become more widespread, leveraging data from technologies like 3D acceleration, pneumatic, and proximity sensors is crucial for deriving actionable insights into animal behavior. Our research addresses this need by demonstrating how MCMC can be used to accurately model and predict complex cattle activity patterns. We investigate the direct impact of these insights on optimizing key farm management areas, including feed allocation, early disease detection, and labor scheduling. Using a combination of controlled monthly experiments and the analysis of uncontrolled, real-world data, we validate our proposed approach. The results confirm that our MCMC simulation effectively processes diverse sensor inputs to generate reliable and detailed behavioral patterns. We find that this data-driven methodology provides significant advantages for developing informed management strategies, leading to improvements in the overall efficiency, productivity, and profitability of cattle operations. This work underscores the potential of using advanced statistical models like MCMC to transform multi-sensor data into tangible improvements for modern agriculture.
DOI: 10.3390/s25216781
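The behavioral patterns such a model generates can be illustrated with a plain Monte Carlo simulation of a Markov chain over activity states; the two-state transition matrix below is a toy example, not farm data:

```python
import random

def simulate_chain(P, states, start, steps, rng):
    """Simulate a discrete-time Markov chain of cattle activity states.

    P: dict state -> dict next_state -> probability (rows sum to 1).
    Returns the visited state sequence; long-run state frequencies
    approximate the stationary distribution used for pattern analysis.
    """
    seq = [start]
    for _ in range(steps):
        cur = seq[-1]
        u, acc = rng.random(), 0.0
        for s in states:
            acc += P[cur].get(s, 0.0)
            if u <= acc:
                seq.append(s)
                break
    return seq
```

For the toy matrix below, balance gives a stationary resting fraction of 5/7 ≈ 0.714, which the simulated frequencies approach.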
-
A study on action recognition for the elderly using depth camera Reviewed International journal
R. Nakashima, Thi Thi Zin, H. Tamura, S. Watanabe, E. Chosa
Institute of Electrical and Electronics Engineers Inc., Conference Proceedings (ICCE-Taiwan 2025) 119 - 120 2025.10
Authorship:Corresponding author Language:English Publishing type:Research paper (international conference proceedings) Publisher:ICCE Taiwan 2025 12th IEEE International Conference on Consumer Electronics Taiwan Generative AI in Innovative Consumer Technology Proceedings
In this study, a depth camera-based system is proposed to achieve non-contact, privacy-preserving action recognition using human skeleton recognition. Specifically, human regions are first extracted using bounding box (BB) detection, followed by action recognition based on keypoint-based pose estimation. The estimated keypoints capture detailed joint positions, and their structural relationships are modeled with a Graph Convolutional Network (GCN). Furthermore, a Transformer is employed to capture the temporal features of the skeletal data. This keypoint-centric pipeline differentiates our approach from conventional, silhouette-level methods and significantly enhances the granularity of action recognition.
DOI: 10.1109/ICCE-Taiwan66881.2025.11208107
Other Link: https://ieeexplore.ieee.org/document/11208107
-
Research on Feature Extraction for Prediction of Dystocia in Cows Using Image Processing Technology Reviewed International journal
T. Murayama, Thi Thi Zin, I. Kobayashi, M. Aikawa
Institute of Electrical and Electronics Engineers Inc., Conference Proceedings (ICCE-Taiwan 2025) 123 - 124 2025.10
Authorship:Corresponding author Language:English Publishing type:Research paper (international conference proceedings) Publisher:ICCE Taiwan 2025 12th IEEE International Conference on Consumer Electronics Taiwan Generative AI in Innovative Consumer Technology Proceedings
In dairy farming, an aging operator population and a shortage of successors have led to a decline in the number of farms rearing milking cows, while the number of milking cows per farm is increasing. Under these circumstances, effective calving management has become critical. Calving fatalities cause significant economic losses to dairy operations. Therefore, there is a strong demand for technology that can detect early signs of calving and reduce accidents. In this study, a ceiling-mounted fisheye camera was used to record cow behavior before calving. Quantitative features, including posture variation, tail-raising behavior, and movement distance, were extracted to develop predictive indicators for calving onset and dystocia detection. In a study involving four cows over approximately 20 hours of observation, the segmentation model achieved a mAP@50 of 99.3%, keypoint detection reached 93.6%, and posture classification attained accuracies of 71.7% using a multilayer perceptron (MLP) and 68.9% using a gated recurrent unit (GRU). These results enabled the detection of abnormal calving events up to four hours in advance.
DOI: 10.1109/ICCE-Taiwan66881.2025.11208029
Other Link: https://ieeexplore.ieee.org/document/11208029
-
Optimizing Network Message Regulations Using AI-Enhanced Dynamic Programming Methods Reviewed International journal
Thi Thi Zin, Tunn Cho Lwin, H. Hama, Pyke Tin
Institute of Electrical and Electronics Engineers Inc., Conference Proceedings (ICCE-Taiwan 2025) 121 - 122 2025.10
Authorship:Lead author, Corresponding author Language:English Publishing type:Research paper (international conference proceedings) Publisher:ICCE Taiwan 2025 12th IEEE International Conference on Consumer Electronics Taiwan Generative AI in Innovative Consumer Technology Proceedings
Network message transmission efficiency faces increasing challenges in multi-server systems due to complex traffic patterns and resource allocation demands. This paper presents an AI-enhanced dynamic programming approach for optimizing message flow regulations. By formulating the problem as a Markov Decision Process (MDP) and integrating reinforcement learning techniques, we develop an adaptive framework for network message regulation. Experimental results show our approach achieves 25% reduction in queue length and 30% improvement in resource utilization compared to conventional methods.
DOI: 10.1109/ICCE-Taiwan66881.2025.11208087
Other Link: https://ieeexplore.ieee.org/document/11208087
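The MDP formulation can be illustrated with textbook value iteration on a toy message-queue model; the states, actions, rewards, and transitions below are invented for illustration and are far simpler than the paper's reinforcement-learning setup:

```python
def value_iteration(states, actions, transition, reward, gamma=0.9, eps=1e-8):
    """Solve a small finite MDP (e.g. admit/hold decisions on a message
    queue) by value iteration; returns (value, policy) dicts.

    transition(s, a) -> list of (prob, next_state); reward(s, a) -> float.
    """
    V = {s: 0.0 for s in states}
    while True:
        newV = {
            s: max(reward(s, a)
                   + gamma * sum(p * V[ns] for p, ns in transition(s, a))
                   for a in actions)
            for s in states
        }
        delta = max(abs(newV[s] - V[s]) for s in states)
        V = newV
        if delta < eps:
            break
    policy = {
        s: max(actions, key=lambda a: reward(s, a)
               + gamma * sum(p * V[ns] for p, ns in transition(s, a)))
        for s in states
    }
    return V, policy
```

In the toy model used below, admitting earns revenue that shrinks as the queue grows, so the optimal policy admits at an empty queue and holds at a full one.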
-
Machine Learning-Based Classification of Umbilical Cord Blood Gas Using Fetal Heart Rate Variability Reviewed International journal
Tunn Cho Lwin, Thi Thi Zin, Pyke Tin, E. Kino, T. Ikenoue
Institute of Electrical and Electronics Engineers Inc., Conference Proceedings (ICCE-Taiwan 2025) 117 - 118 2025.10
Authorship:Corresponding author Language:English Publishing type:Research paper (international conference proceedings) Publisher:ICCE Taiwan 2025 12th IEEE International Conference on Consumer Electronics Taiwan Generative AI in Innovative Consumer Technology Proceedings
Fetal heart rate variability (FHRV) is a key indicator of fetal well-being and has potential in predicting umbilical cord blood gas, an essential biomarker for fetal health assessment. Machine learning techniques can enhance fetal pH classification using FHRV features. This study aims to develop a machine learning-based classification model for fetal pH levels, leveraging FHRV data to support early risk detection during childbirth. To achieve this, we classify fetal pH into two categories using Mahalanobis Distance, Support Vector Machine (SVM), and k-Nearest Neighbors (kNN) based on statistical FHRV features. Model performance was evaluated using standard evaluation metrics for both training and testing datasets. Among the classifiers, kNN demonstrated the most balanced performance between sensitivity and specificity, while SVM showed limited generalizability due to poor sensitivity for abnormal pH cases.
DOI: 10.1109/ICCE-Taiwan66881.2025.11208140
Other Link: https://ieeexplore.ieee.org/document/11208140
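A Mahalanobis-distance classifier of the kind compared in this study can be sketched as follows; the diagonal-covariance simplification and the toy feature clusters in the usage are assumptions for illustration, not the paper's FHRV features:

```python
import math
from statistics import mean, pvariance

class MahalanobisClassifier:
    """Two-class Mahalanobis-distance classifier on FHRV-style features.

    Uses a per-class diagonal covariance (per-feature variances), a
    common simplification; a sample is assigned to the class whose
    centroid is nearest in Mahalanobis distance.
    """

    def fit(self, X, y):
        self.params = {}
        for label in set(y):
            cols = list(zip(*[x for x, t in zip(X, y) if t == label]))
            self.params[label] = [(mean(c), pvariance(c) or 1e-9) for c in cols]
        return self

    def _dist(self, x, label):
        return math.sqrt(sum((v - m) ** 2 / s
                             for v, (m, s) in zip(x, self.params[label])))

    def predict(self, x):
        return min(self.params, key=lambda lb: self._dist(x, lb))
```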
-
Behavior Estimation of Calf Groups Using RGB Cameras and Deep Learning Reviewed International journal
D. Nishimoto, Thi Thi Zin, M. Aikawa
Institute of Electrical and Electronics Engineers Inc., Conference Proceedings (ICCE-Taiwan 2025) 115 - 116 2025.10
Authorship:Corresponding author Language:English Publishing type:Research paper (international conference proceedings) Publisher:ICCE Taiwan 2025 12th IEEE International Conference on Consumer Electronics Taiwan Generative AI in Innovative Consumer Technology Proceedings
This paper presents a real-time behavior estimation system for calf groups using RGB cameras and deep learning. The system employs YOLO for detection, Weighted Intersection over Union (IoU) based tracking for consistent IDs, and Segment Anything Model 2 (SAM 2) with EfficientNetV2-L for individual identification. It classifies postures (sitting, standing) and intake behaviors (drinking milk/water, eating) for comprehensive health monitoring. Experiments on 16 calves achieved a multi-object tracking accuracy (MOTA) of 91.33%, approximately 80% accuracy for posture classification, and 50-70% accuracy for intake behaviors, with real-time processing. The system effectively reduces labor burdens and supports scalable livestock management.
DOI: 10.1109/ICCE-Taiwan66881.2025.11207893
Other Link: https://ieeexplore.ieee.org/document/11207893
-
A Study on Machine Learning Approaches for Predicting Fetal pH Level Using Fetal Heart Rate Variability Reviewed International journal
Cho Nilar Phyo, Tunn Cho Lwin, Pyae Phyo Kyaw, E. Kino, T. Ikenoue, Pyke Tin, Thi Thi Zin
ICIC Express Letters, Part B: Applications 16 ( 8 ) 879 - 886 2025.8
Authorship:Last author, Corresponding author Language:English Publishing type:Research paper (scientific journal) Publisher:ICIC Express Letters, Part B: Applications
Fetal well-being monitoring system is essential for ensuring healthy labor outcomes. One of the non-invasive methods for assessing fetal health during labor and delivery is by analyzing fetal heart rate variability (FHRV), which can be used to predict fetal pH levels. This study compares different machine learning approaches for predicting fetal pH levels based on FHRV data collected during labor and delivery. The dataset used in this study includes FHRV signals together with corresponding umbilical cord blood gas measurements such as pH, which are used to train and evaluate the models. This study applies several machine learning algorithms and evaluates their performance using key metrics such as sensitivity, specificity, precision, F1-score, and accuracy. These metrics help to determine which model is the most accurate at predicting fetal pH levels based on FHRV characteristics. The results reveal that the support vector machine (SVM) model outperforms the other algorithms, achieving an accuracy of 81.67% in predicting fetal pH levels. The findings of this study aim to contribute to the development of more reliable and accurate prediction models for assessing fetal well-being during labor, enhancing clinical decision-making and allowing for timely interventions and improved labor outcomes for both the mother and the baby.
-
Machine Learning-Based Prediction of Cattle Body Condition Score using 3D Point Cloud Surface Features Reviewed International journal
Pyae Phyo Kyaw, Thi Thi Zin, Pyke Tin, M. Aikawa, I. Kobayashi
Proceedings of SPIE the International Society for Optical Engineering 13701 2025.7
Authorship:Corresponding author Language:English Publishing type:Research paper (international conference proceedings) Publisher:Proceedings of SPIE the International Society for Optical Engineering
Body Condition Score (BCS) of dairy cattle is a crucial indicator of their health, productivity, and reproductive performance throughout the production cycle. Recent advancements in computer vision techniques have led to the development of automated BCS prediction systems. This paper proposes a BCS prediction system that leverages 3D point cloud surface features to enhance accuracy and reliability. Depth images are captured from a top-view perspective and processed using a hybrid depth image detection model to extract the cattle’s back surface region. The extracted depth data is converted into point cloud data, from which various surface features are analyzed, including normal vectors, curvature, point density, and surface shape characteristics (planarity, linearity, and sphericity). Additionally, Fast Point Feature Histograms (FPFH), triangle mesh area, and convex hull area are extracted and evaluated using three optimized machine learning models: Random Forest (RF), K-Nearest Neighbors (KNN), and Gradient Boosting (GB). Model performance is assessed using different tolerance levels and error metrics, including Mean Absolute Error (MAE) and Mean Absolute Percentage Error (MAPE). Among the models, Random Forest demonstrates the highest performance, achieving accuracy rates of 51.36%, 86.21%, and 97.83% at 0, 0.25, and 0.5 tolerance levels, respectively, with an MAE of 0.161 and MAPE of 5.08%. This approach enhances the precision of BCS estimation, offering a more reliable and automated solution for dairy cattle monitoring and health management.
DOI: 10.1117/12.3070481
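One of the listed surface features, the convex hull area of the back-surface points viewed from above, can be computed with the monotone-chain hull and the shoelace formula; this is a generic 2-D sketch, not the authors' 3-D point cloud pipeline:

```python
def convex_hull_area(points):
    """Area of the convex hull of top-view (x, y) points.

    Builds the hull with the monotone-chain algorithm, then applies
    the shoelace formula to the hull vertices.
    """
    pts = sorted(set(points))
    if len(pts) < 3:
        return 0.0

    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    def half(seq):
        h = []
        for p in seq:
            while len(h) > 1 and cross(h[-2], h[-1], p) <= 0:
                h.pop()
            h.append(p)
        return h

    hull = half(pts)[:-1] + half(list(reversed(pts)))[:-1]
    return abs(sum(hull[i][0] * hull[(i + 1) % len(hull)][1]
                   - hull[(i + 1) % len(hull)][0] * hull[i][1]
                   for i in range(len(hull)))) / 2.0
```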
-
Minimizing Resource Usage for Real-Time Network Camera Tracking of Black Cows Reviewed International journal
Aung Si Thu Moe, Thi Thi Zin, Pyke Tin, M. Aikawa, I. Kobayashi
Proceedings of SPIE the International Society for Optical Engineering 13701 2025.7
Authorship:Corresponding author Language:English Publishing type:Research paper (international conference proceedings) Publisher:Proceedings of SPIE the International Society for Optical Engineering
Livestock plays a crucial role in the farming industry in meeting consumer demand. A livestock monitoring system helps track animal health while reducing labor requirements, which matters because most livestock farms are small, family-owned operations. This study proposes a real-time black cow detection and tracking system using network cameras in memory- and disk-constrained environments. We employ the Detectron2 Mask R-CNN ResNeXt-101 model for black cow region detection and the ByteTrack algorithm for tracking. ByteTrack tracks multiple objects by associating every detection box; unlike other deep learning tracking algorithms that rely on multiple features such as texture, color, shape, and size, it effectively reduces tracking ID errors and ID switches. Detecting and tracking black cows in real time is challenging due to their uniform color and similar sizes. To optimize performance on low-specification machines, we apply ONNX (Open Neural Network Exchange) to the Detectron2 detection model for optimization and quantization. The system processes input images from network cameras, enhances color during preprocessing, and detects and tracks black cows efficiently. Our system achieves 95.97% mAP@0.75 detection accuracy, with tracking accuracies of 97.16% in daytime and 94.83% in nighttime video; it effectively tracks individual black cows, minimizing duplicate IDs and recovering identities after missed detections or occlusions. The system is designed to operate on machines with minimal hardware requirements.
DOI: 10.1117/12.3070347