Presentations -
-
Advancing Neonatal Monitoring Using Heart Rate Variability with Machine Learning Models International conference
Tunn Cho Lwin, Thi Thi Zin, Pyke Tin, E. Kino and T. Ikenoue
The Seventh International Conference on Smart Vehicular Technology, Transportation, Communication and Applications (VTCA 2025) (Fuzhou, Fujian, China) 2025.11.22 Technically sponsored by Southwest Jiaotong University and Nanchang Institute of Technology
Event date: 2025.11.21 - 2025.11.23
Language:English Presentation type:Oral presentation (general)
Venue:Fuzhou, Fujian, China Country:China
Accurate assessment of neonatal respiratory status is critical for early intervention and improved clinical outcomes. Umbilical cord blood partial pressure of carbon dioxide (PCO2) is a key marker of respiratory efficiency, but its measurement requires invasive sampling. This study proposes a non-invasive, machine learning–based framework to predict abnormal PCO2 levels using fetal heart rate variability (FHRV) features. Seven HRV features were initially extracted, and Principal Component Analysis identified M, S, and entropy as the most informative for classification. Patients were divided into normal (G2) and abnormal (G1) groups based on a PCO2 threshold of 35 mmHg. To address class imbalance, oversampling was applied to the training dataset. Classification experiments with SVM (linear and Gaussian) and k-nearest neighbor (kNN) classifiers demonstrated that oversampling improved sensitivity for the minority abnormal group while maintaining high precision for the majority normal group. On the testing dataset, kNN achieved the most balanced performance, with 85% precision and 83% recall for abnormal cases. These results highlight the potential of combining HRV analysis with machine learning to provide continuous, non-invasive, and real-time monitoring of neonatal respiratory status, offering a promising tool to guide clinical decision-making and reduce dependence on invasive procedures.
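The grouping, oversampling, and kNN steps described above can be sketched in plain Python. This is an illustrative minimal sketch, not the study's implementation: the helper names, feature values, and k are assumptions, and a production system would use a library classifier.

```python
import random

PCO2_THRESHOLD = 35.0  # mmHg, the grouping threshold used in the study


def label_group(pco2):
    """G1 (abnormal) if PCO2 exceeds the threshold, else G2 (normal)."""
    return "G1" if pco2 > PCO2_THRESHOLD else "G2"


def oversample(samples, labels, seed=0):
    """Duplicate minority-class samples until all classes are the same size."""
    rng = random.Random(seed)
    by_class = {}
    for s, y in zip(samples, labels):
        by_class.setdefault(y, []).append(s)
    target = max(len(v) for v in by_class.values())
    out_x, out_y = [], []
    for y, group in by_class.items():
        picks = list(group)
        while len(picks) < target:
            picks.append(rng.choice(group))  # resample the minority class
        out_x.extend(picks)
        out_y.extend([y] * len(picks))
    return out_x, out_y


def knn_predict(train_x, train_y, query, k=3):
    """Plain k-nearest-neighbour majority vote with Euclidean distance."""
    dist = lambda a, b: sum((ai - bi) ** 2 for ai, bi in zip(a, b)) ** 0.5
    nearest = sorted(zip(train_x, train_y), key=lambda p: dist(p[0], query))[:k]
    votes = [y for _, y in nearest]
    return max(set(votes), key=votes.count)
```

Oversampling only the training split, as in the abstract, avoids leaking duplicated samples into the test set.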
-
Digital Cattle Twins: Revolutionizing Calving Management Through Markovian Prediction Systems International conference
Thi Thi Zin, Tunn Cho Lwin, Aung Si Thu Moe, Pyae Phyo Kyaw, M. Aikawa and Pyke Tin
The Seventh International Conference on Smart Vehicular Technology, Transportation, Communication and Applications (VTCA 2025) (Fuzhou, Fujian, China) 2025.11.22 Technically sponsored by Southwest Jiaotong University and Nanchang Institute of Technology
Event date: 2025.11.21 - 2025.11.23
Language:English Presentation type:Oral presentation (general)
Venue:Fuzhou, Fujian, China Country:China
The integration of digital twin technology with livestock management introduces new possibilities in precision livestock farming. Our research proposes the Digital Cattle Twin (DCT) system, a transformative approach to managing cattle calving during the critical periparturient period. This system merges Markovian modeling with real-time visual monitoring to enhance predictive accuracy in calving management. By modeling calving as a sequence of interconnected states within a Markov chain, the DCT predicts progression from early labor to postpartum recovery with high precision. Real-time probability calculations enable early detection of complications and optimal intervention timing. The system integrates diverse data streams, including vaginal temperature sensors for pre-calving temperature drops, AI-based video analysis for behavioral and movement changes, heart rate variability for stress detection, and spatial tracking for calving readiness. A predictive analytics engine processes this multimodal data, achieving high accuracy in detecting risks. The DCT’s adaptive learning architecture refines predictions using both individual and herd-level patterns, enabling a proactive rather than reactive management approach. Beyond calving, this framework illustrates how mathematical modeling and digital twins can redefine livestock management, opening pathways for broader applications in animal health, welfare, and production optimization.
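The Markov-chain view of calving progression can be sketched as follows. The states and transition probabilities here are purely illustrative assumptions, not the DCT system's actual parameters, which would be learned from sensor and video data.

```python
# Hypothetical calving states and one-step transition probabilities.
STATES = ["early_labor", "active_labor", "delivery", "postpartum"]
P = {
    "early_labor":  {"early_labor": 0.7, "active_labor": 0.3},
    "active_labor": {"active_labor": 0.6, "delivery": 0.4},
    "delivery":     {"delivery": 0.2, "postpartum": 0.8},
    "postpartum":   {"postpartum": 1.0},  # absorbing recovery state
}


def step(dist):
    """One Markov step: push probability mass through the transition matrix."""
    out = {s: 0.0 for s in STATES}
    for s, p in dist.items():
        for t, q in P.get(s, {}).items():
            out[t] += p * q
    return out


def prob_delivered_within(n_steps):
    """Probability the chain has reached 'postpartum' within n steps,
    starting from 'early_labor'."""
    dist = {"early_labor": 1.0}
    for _ in range(n_steps):
        dist = step(dist)
    return dist.get("postpartum", 0.0)
```

Monitoring how this probability evolves over time is one way such a system could flag abnormally slow progressions for intervention.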
-
AI and Image Processing for Smart Ecosystems: Enabling Connected Futures Across Transportation, Agriculture, and Healthcare Invited International conference
Thi Thi Zin
The Seventh International Conference on Smart Vehicular Technology, Transportation, Communication and Applications (VTCA 2025) (Fuzhou, Fujian, China) 2025.11.22 Technically sponsored by Southwest Jiaotong University and Nanchang Institute of Technology
Event date: 2025.11.21 - 2025.11.23
Language:English Presentation type:Oral presentation (keynote)
Venue:Fuzhou, Fujian, China Country:China
The convergence of Artificial Intelligence (AI) and cutting-edge Image Processing is ushering in a transformative era for smart ecosystems, delivering unparalleled precision and efficiency across diverse sectors. This keynote explores the evolution and synergistic integration of AI technologies, tracing their impact from foundational applications in Intelligent Transportation Systems (ITS) to advanced solutions in precision agriculture and health monitoring.
Initially, AI’s role in ITS revolutionized traffic safety and management through automated detection of road signs, pedestrians, and environmental cues. Building on this legacy, we delve into how real-time AI-driven image analytics are now empowering smart dairy farming and comprehensive livestock health monitoring. These applications facilitate predictive health management, optimize operational workflows, and foster sustainable, intelligent farm ecosystems.
Furthermore, this presentation highlights crucial interdisciplinary collaborations in healthcare. We examine AI-enabled monitoring solutions tailored for elderly care and infant health, which leverage sensor fusion and intelligent data interpretation. These systems are pivotal in enhancing patient safety, improving quality of life, and enabling personalized care delivery.
Ultimately, this keynote underscores the profound potential of AI-driven innovations to create interconnected, intelligent environments. By seamlessly bridging smart transportation, precision agriculture, and human healthcare, we present a holistic and actionable vision for the development of future smart communities.
Other Link: https://vtca2025.udd.ink/page/keynoteSpeech.html
-
An End-to-End Computer Vision Pipeline for Cow Ear Tag Number Recognition Using YOLOv11 and a Hybrid EfficientNet-NRTR Model International conference
San Chain Tun, Pyke Tin, M. Aikawa, I. Kobayashi and Thi Thi Zin
The 9th International Conference on Information Technology (InCIT2025) (Phuket, Thailand) 2025.11.13 IEEE Thailand Section (IEEE Computer Society Thailand Chapter)
Event date: 2025.11.12 - 2025.11.14
Language:English Presentation type:Oral presentation (general)
Venue:Phuket, Thailand Country:Thailand
Automated identification of individual livestock is a critical component of precision livestock farming. This study presents a robust, real-time system for recognizing four-digit ear tag numbers on cows using a multi-stage pipeline. The pipeline consists of ROI extraction, YOLOv11-based detection and instance segmentation of cow heads and ear tags, a customized tracking algorithm for persistent identity assignment, and an NRTR-based OCR model with EfficientNet backbones for number recognition. The customized tracker leverages Intersection over Union (IoU), frame-holding, and bounding box position logic to handle missed detections and ensure accurate tracking. The OCR model predicts digits 0-9 and uses "x" for unknown characters, providing reliable sequence recognition from cropped ear tag images. The system was evaluated on a real-world dataset collected over five days on a dairy farm. The overall detection and tracking accuracy achieved 96.18%, while OCR accuracy for EfficientNet backbones B4 to B7 reached 91.54%, 93.85%, 93.08%, and 95.38%, respectively. Results demonstrate high accuracy and robustness across all stages, confirming the practical viability of the approach. This integrated system offers a scalable solution for automated cattle identification and monitoring in operational farm environments.
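The IoU and frame-holding logic of the customized tracker can be sketched as below. This is a simplified illustration under assumed parameter values (IoU threshold, hold length); the paper's tracker additionally uses bounding box position logic not shown here.

```python
def iou(a, b):
    """Intersection over Union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0


class FrameHoldTracker:
    """Keep a track alive for `hold` frames after its last matched detection,
    so brief missed detections do not break identity."""

    def __init__(self, iou_thresh=0.3, hold=5):
        self.iou_thresh, self.hold = iou_thresh, hold
        self.tracks = {}   # track id -> (last box, frames since last match)
        self.next_id = 0

    def update(self, detections):
        assigned = {}
        unmatched = list(detections)
        for tid, (box, age) in list(self.tracks.items()):
            best, best_iou = None, self.iou_thresh
            for d in unmatched:
                v = iou(box, d)
                if v >= best_iou:
                    best, best_iou = d, v
            if best is not None:
                unmatched.remove(best)
                self.tracks[tid] = (best, 0)      # matched: reset age
                assigned[tid] = best
            elif age + 1 <= self.hold:
                self.tracks[tid] = (box, age + 1)  # hold through the miss
            else:
                del self.tracks[tid]               # expired
        for d in unmatched:                        # new detections get new ids
            self.tracks[self.next_id] = (d, 0)
            assigned[self.next_id] = d
            self.next_id += 1
        return assigned
```

A detection that disappears for a frame and reappears with high overlap is re-attached to its original identity rather than spawning a new track.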
-
Deep Sequential Gait Feature Learning for Long-Term Person Re-Identification in Real-World Environments International conference
Cho Nilar Phyo, Thi Thi Zin and Pyke Tin
The 9th International Conference on Information Technology (InCIT2025) (Phuket, Thailand and Online) 2025.11.13 IEEE Thailand Section (IEEE Computer Society Thailand Chapter)
Event date: 2025.11.12 - 2025.11.14
Language:English Presentation type:Oral presentation (general)
Venue:Phuket, Thailand and Online Country:Thailand
This paper presents a novel gait-based framework for long-term person re-identification in real-world environments. Unlike appearance-based methods, which are often sensitive to illumination, clothing changes, and occlusion, our approach leverages gait dynamics captured via dense optical flow and deep feature learning. We integrate ResNet101 for spatial feature extraction and an LSTM network for temporal sequence modeling, enabling robust representation of human walking patterns across extended time periods. Experimental results on gait datasets demonstrate that the proposed system achieves strong recognition performance in terms of accuracy, mean Average Precision (mAP), and recall, as well as stability under challenging real-world conditions, highlighting its potential for surveillance and security applications.
-
Estimation of Cows Weight using a Depth Camera International conference
S. Araki, K. Shiiya, Thi Thi Zin, I. Kobayashi
2025 IEEE 14th Global Conference on Consumer Electronics (GCCE2025) (Osaka, Japan) 2025.9.25 IEEE Consumer Technology Society
Event date: 2025.9.23 - 2025.9.26
Language:English Presentation type:Oral presentation (general)
Venue:Osaka, Japan Country:Japan
Traditional methods for measuring cattle weight require special equipment and often involve physical contact with the animals, increasing the risk of accidents. As the workload for dairy farmers grows due to a decreasing workforce, there is a strong need for safer and more efficient solutions. In this study, we propose a contactless method to estimate cattle weight using a depth camera. This study differentiates itself from other studies by placing the camera above the cow, making it more versatile. We extracted depth images and calculated key body measurements: height, body length, and belly width. Based on these values, we created a regression formula to estimate weight. Our results show that it is possible to roughly estimate cattle weight using only the values obtained from depth images. This method reduces the risk of injuries during measurement and offers a more efficient way to manage cattle health and nutrition.
-
A Conceptual Framework for Neonatal Motor Activity Monitoring Using Digital Twin Technology and Computer Vision: A Preliminary Study International conference
R. Nakashima, H. Matsumoto, Thi Thi Zin, Y. Kodama
2025 IEEE 14th Global Conference on Consumer Electronics (GCCE2025) (Osaka, Japan) 2025.9.26 IEEE Consumer Technology Society
Event date: 2025.9.23 - 2025.9.26
Language:English Presentation type:Oral presentation (general)
Venue:Osaka, Japan Country:Japan
Continuous non-contact monitoring of neonatal motor activity in the neonatal intensive care unit (NICU) is crucial for early detection of neurological disorders and for guiding timely clinical interventions. We introduce an infrared-driven skeleton-estimation prototype designed for real-time operation that generates a live virtual "digital twin" of the infant’s posture to support clinician assessment. A deep-learning pose model was fine-tuned on a bespoke infrared key-point dataset, and three motion-quantification filters were evaluated: raw differencing (Method A), center-aligned suppression (Method B), and a newly proposed skeleton template-matching filter (Method C). Tests on a life-sized neonatal mannequin confirmed centimetric joint-localization accuracy, reliable detection of 50-pixel hand displacements, and reduction of simulated camera-shake artifacts to within five pixels. Building on these results, a follow-up evaluation on pre-term neonates showed that Method C suppressed static key-point noise by 78% while preserving physiological motion. This combined mannequin and in-vivo evidence demonstrates the clinical feasibility of our infrared digital-twin framework and establishes a foundation for automated assessment of pre-term motor development.
-
Enhanced Multi-Person Tracking Method Based on ByteTrack Architecture International conference
Cho Nilar Phyo, Thi Thi Zin and Pyke Tin
2025 IEEE 14th Global Conference on Consumer Electronics (GCCE2025) (Osaka, Japan) 2025.9.25 IEEE Consumer Technology Society
Event date: 2025.9.23 - 2025.9.26
Language:English Presentation type:Oral presentation (general)
Venue:Osaka, Japan Country:Japan
This paper presents an enhanced tracking method built upon the ByteTrack architecture for efficient and robust multi-person tracking in challenging environments. The proposed system enhances ByteTrack with improved track management strategies, in particular addressing ByteTrack's issue of continually increasing track IDs. Experimental results on a multi-person tracking dataset demonstrate superior performance compared to the baseline approach. These findings suggest that the proposed enhancements make ByteTrack more suitable for crowded or dynamic scenes where precise person tracking is essential.
-
Smart Health Monitoring of Dairy Cattle Using AI and Image Technologies Invited
Thi Thi Zin
2025.9.5
Event date: 2025.9.3 - 2025.9.6
Language:Japanese Presentation type:Oral presentation (invited, special)
Country:Japan
-
Tunn Cho Lwin, Thi Thi Zin, Pyke Tin, E. Kino, and T. Ikenoue
Technical Committee on Image Engineering (IE) (Hokkaido, Online) 2025.9.4 The Institute of Electronics, Information and Communication Engineers (IEICE)
Event date: 2025.9.3 - 2025.9.4
Language:English Presentation type:Oral presentation (general)
Venue:Hokkaido, Online Country:Japan
Umbilical cord blood gas analysis from fetal heart rate signals, particularly partial pressure of carbon dioxide, is essential for assessing fetal acid-base status and detecting respiratory acidosis at birth. This study proposes a machine learning-based classification framework for predicting fetal carbon dioxide levels as either normal or abnormal using features derived from fetal heart rate variability. Preprocessing included correlation-based segment selection to manage inconsistencies in recording lengths, interpolation to address missing values, and Laplacian features for machine learning classification. Among the tested window durations of 10, 30, and 60 minutes, the 30-minute segment showed the strongest correlation with carbon dioxide levels and was selected for model training. Three supervised learning models were evaluated: Support Vector Machine with linear and Gaussian kernels, and k-Nearest Neighbors. The results show that k-Nearest Neighbors and Gaussian kernel SVM achieved the best classification performance in detecting abnormal carbon dioxide cases. These findings suggest that supervised learning combined with heart rate variability analysis can offer a promising and non-invasive way to support early detection of fetal health risks during delivery.
-
Research on cattle feeding detection using image processing
T. Ishikawa, M. Aikawa, I. Kobayashi and Thi Thi Zin
2025.9.4
Event date: 2025.9.3 - 2025.9.4
Language:Japanese Presentation type:Oral presentation (general)
Country:Japan
-
Depth Camera-Based Analysis of Elderly Behavior for Risk Detection Using Skeletal Data International conference
R. Nakashima, Thi Thi Zin, T. Hiroki and S. Watanabe
The 19th International Conference on Innovative Computing, Information and Control (ICICIC2025) (Kitakyushu, Japan) 2025.8.27 ICIC International
Event date: 2025.8.26 - 2025.8.29
Language:English Presentation type:Oral presentation (general)
Venue:Kitakyushu, Japan Country:Japan
We present a non-contact, privacy-preserving monitoring system that estimates behavioral risk in elderly-care rooms using depth cameras. First, each video frame is processed to detect individuals and extract 13 skeletal keypoints via a YOLO-based person detector and pose estimator. These keypoints are fed into a two-stage model comprising a graph convolutional network (GCN) and a Transformer encoder, which capture spatial and temporal movement patterns. To contextualize actions, we apply semantic segmentation to identify key regions such as beds and chairs. A rule-based framework then integrates action predictions with spatial overlap between keypoints and environment masks to assign one of three risk levels: Safe, Attention, or Danger. For robustness, we apply temporal smoothing and fuse outputs from two depth cameras. Finally, we design and implement a lightweight graphical user interface (GUI) to visualize risk levels and issue real-time alerts. Experimental results show an overall accuracy of 89.8% and a hazard-detection accuracy of 74.3%.
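The rule-based fusion of action predictions with spatial overlap, plus temporal smoothing, might look like the sketch below. The rule table, region representation (an axis-aligned box rather than a segmentation mask), and window size are illustrative assumptions, not the paper's actual rules.

```python
from collections import Counter

RISK_LEVELS = ["Safe", "Attention", "Danger"]


def point_in_region(pt, region):
    """Region simplified to an axis-aligned box (x1, y1, x2, y2) in this sketch;
    the paper uses semantic segmentation masks instead."""
    x, y = pt
    return region[0] <= x <= region[2] and region[1] <= y <= region[3]


def assess_risk(action, keypoints, bed_region):
    """Fuse the predicted action with keypoint/bed overlap into a risk level."""
    on_bed = sum(point_in_region(p, bed_region) for p in keypoints) / len(keypoints)
    if action == "falling":
        return "Danger"
    if action == "standing" and on_bed > 0.5:
        return "Attention"  # standing on the bed suggests fall risk
    return "Safe"


def smooth(history, window=5):
    """Temporal smoothing: majority vote over the last `window` frame labels."""
    votes = Counter(list(history)[-window:])
    return votes.most_common(1)[0][0]
```

Majority voting over recent frames suppresses single-frame misclassifications before an alert is raised.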
-
Real-Time Cattle Identification Based on Ear Tag Recognition with ID Reuse and Template Matching International conference
Y. Shimizu, Thi Thi Zin, M. Aikawa and I. Kobayashi
The 19th International Conference on Innovative Computing, Information and Control (ICICIC2025) (Kitakyushu, Japan) 2025.8.27 ICIC International
Event date: 2025.8.26 - 2025.8.29
Language:English Presentation type:Oral presentation (general)
Venue:Kitakyushu, Japan Country:Japan
This study presents a robust and efficient method for real-time cattle identification using ear tags. To improve tracking accuracy, spatial position data was utilized in identifying ear tag regions. In addition, processing speed was enhanced by reusing previously identified results and applying template matching. Experiments were conducted using video data captured in a real milking environment. The results showed that reusing identification results significantly reduced processing time while maintaining accuracy. In contrast, template matching was less effective under the tested conditions. Among all evaluated settings, the method using only ID reuse achieved the best balance between speed and accuracy. Future directions include improving template matching for variations in color and shape, combining with other identification techniques for greater robustness, and applying the method to behavior monitoring and health anomaly detection.
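The ID-reuse idea, running expensive ear-tag recognition only once per tracked animal and reusing the cached result on later frames, can be sketched as follows. The class and callable names are hypothetical, not taken from the paper.

```python
class IDReuseCache:
    """Reuse a previously recognized ear-tag number for a tracked region
    instead of re-running recognition on every frame."""

    def __init__(self, recognize):
        self.recognize = recognize   # expensive OCR/recognition callable
        self.cache = {}              # track_id -> recognized tag number
        self.calls = 0               # how many times recognition actually ran

    def identify(self, track_id, image):
        if track_id in self.cache:
            return self.cache[track_id]      # cheap path: reuse prior result
        self.calls += 1
        tag = self.recognize(image)          # expensive path: run once
        self.cache[track_id] = tag
        return tag
```

The speedup reported in the abstract comes from the fact that, once a tracked cow's tag is read, every subsequent frame takes the cheap path.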
-
Robust Cow Detection and Tracking in Livestock Environments Using Yolov8 and DeepSort with Depth Camera Fusion International conference
San Chain Tun, Pyke Tin, M. Aikawa, I. Kobayashi, Thi Thi Zin
The 19th International Conference on Innovative Computing, Information and Control (ICICIC2025) (Kitakyushu, Japan) 2025.8.27 ICIC International
Event date: 2025.8.26 - 2025.8.29
Language:English Presentation type:Oral presentation (general)
Venue:Kitakyushu, Japan Country:Japan
This paper presents a robust and automated system for individual cow detection and tracking in challenging, unsupervised free-flow environments typical of large-scale dairy farms. Addressing the limitations of traditional monitoring and RGB-only computer vision in scenarios with dense animal populations, occlusions, and varying lighting, our system integrates the state-of-the-art YOLOv8 object detection model with the DeepSORT multi-object tracking algorithm. A novel aspect of our approach involves the crucial fusion of RGB and depth camera data, providing a six-channel input to YOLOv8 to enhance spatial understanding and occlusion resolution. Evaluated on a real-world dataset from a Hokkaido dairy farm, the system demonstrates exceptional performance: achieving an average detection accuracy of 99.99% and an average tracking accuracy of 97.8% for correctly tracked cattle. This high accuracy, coupled with the system's ability to maintain individual identities consistently during free-flow transit, highlights its potential to significantly improve precision livestock farming by enabling reliable, continuous monitoring for health, welfare, and productivity analytics.
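One plausible reading of the six-channel RGB-D fusion is RGB plus a normalized depth map replicated across three channels, as sketched below; the paper's exact depth encoding is not specified here, so this layout is an assumption. Nested lists stand in for image arrays to keep the sketch dependency-free.

```python
def fuse_rgbd(rgb, depth):
    """Combine an H x W x 3 RGB frame (nested lists of (r, g, b) tuples) with an
    H x W depth map into an H x W x 6 input: depth is min-max normalized and
    repeated across three channels (an assumed encoding, for illustration)."""
    flat = [v for row in depth for v in row]
    lo, hi = min(flat), max(flat)
    scale = (hi - lo) or 1.0          # avoid division by zero on flat scenes
    out = []
    for rgb_row, d_row in zip(rgb, depth):
        out_row = []
        for px, d in zip(rgb_row, d_row):
            dn = (d - lo) / scale     # normalize depth to [0, 1]
            out_row.append(list(px) + [dn, dn, dn])
        out.append(out_row)
    return out
```

Feeding depth alongside color gives the detector direct geometric cues, which is what helps resolve occlusions between closely packed animals.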
-
Vision-Driven Detection of Aquatic Animals for Precision Nutritional Control International coauthorship International conference
Aung Si Thu Moe, Kittichon U-TAYNAPUN, Nion CHIRAPONGSATONKUL, Pyke Tin, Thi Thi Zin
The 2nd International Conference on Agricultural Innovation and Natural Resources (Songkhla, Thailand) 2025.8.15 Prince of Songkla University (PSU), Thailand
Event date: 2025.8.14 - 2025.8.15
Language:English Presentation type:Oral presentation (general)
Venue:Songkhla, Thailand Country:Thailand
Aquatic farming is a vital component of Thailand’s agricultural economy, but it faces ongoing challenges in managing aquatic animal nutrition and determining accurate feed requirements. Traditional feeding methods often lead to overfeeding or underfeeding, increasing operational costs and raising environmental concerns. This study introduces a vision-driven approach to enhance precision nutrition management in controlled pond environments. We evaluate the feed preferences of aquatic animals across four types of feed (PSB Saiyai Green, PSB Saiyai Brown, Control, and PSB Saiyai Dark Red) using advanced computer vision techniques. A small-scale experimental pond was constructed, with a top-mounted camera capturing real-time footage across four designated feed regions and light bulbs ensuring consistent illumination for clear visibility. Our system leverages a custom lightweight distillation framework based on the YOLOv11x model to detect and count aquatic animals in each region efficiently and accurately. The analysis delivers actionable insights into feeding behavior and preferences, enabling data-driven, optimized feeding strategies. This method supports the development of smart aquaculture practices, promoting sustainability and improved nutritional management in Thailand's aquatic farming industry.
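The per-region counting step, assigning each detected animal to the feed region containing its detection center, can be sketched as below. Region names and coordinates are illustrative assumptions; the actual system derives detections from the YOLOv11x-based model.

```python
def assign_region(center, regions):
    """Map a detection center (x, y) to the feed region box containing it,
    or None if it falls outside all regions."""
    for name, (x1, y1, x2, y2) in regions.items():
        if x1 <= center[0] <= x2 and y1 <= center[1] <= y2:
            return name
    return None


def count_per_region(centers, regions):
    """Count how many detection centers fall in each designated feed region."""
    counts = {name: 0 for name in regions}
    for c in centers:
        r = assign_region(c, regions)
        if r is not None:
            counts[r] += 1
    return counts
```

Aggregating these counts over time yields the relative feed-preference signal the study analyzes.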
-
Vision-Based Monitoring of Tilapia Growth Using Machine Learning and Deep Learning Techniques International coauthorship International conference
Pyae Phyo Kyaw, Kittichon U-TAYNAPUN, Nion CHIRAPONGSATONKUL, Pyke Tin, Thi Thi Zin
The 2nd International Conference on Agricultural Innovation and Natural Resources (Songkhla, Thailand) 2025.8.15 Prince of Songkla University (PSU), Thailand
Event date: 2025.8.14 - 2025.8.15
Language:English Presentation type:Oral presentation (general)
Venue:Songkhla, Thailand Country:Thailand
Accurate estimation of fish growth plays a vital role in optimizing aquaculture management and ensuring sustainable production. This study presents a computer vision-based approach, leveraging both machine learning and deep learning techniques, to monitor the growth of African Tilapia using non-invasive image processing methods. High-resolution images of the fish were captured under controlled conditions and analyzed to extract morphological features such as length, width, and body area. Advanced deep learning-based detection and segmentation models were applied to isolate the fish body from the background, while machine learning algorithms were used to quantify key growth indicators. The extracted features were then correlated with actual size, weight and age data through regression analysis to predict fish growth accurately. Experimental results demonstrate that the proposed method achieves high accuracy and consistency while minimizing physical handling and stress on the fish. This image-based system offers a cost-effective, scalable solution for real-time monitoring of fish growth in aquaculture environments.
-
Optimizing Network Message Regulations Using AI-Enhanced Dynamic Programming Methods International conference
Thi Thi Zin, Tunn Cho Lwin, H. Hama, Pyke Tin
IEEE International Conference on Consumer Electronics – Taiwan (ICCE-TW), 2025 (Kaohsiung, Taiwan) 2025.7.17 IEEE Consumer Technology Society
Event date: 2025.7.16 - 2025.7.18
Language:English Presentation type:Oral presentation (general)
Venue:Kaohsiung, Taiwan Country:Taiwan, Province of China
Network message transmission efficiency faces increasing challenges in multi-server systems due to complex traffic patterns and resource allocation demands. This paper presents an AI-enhanced dynamic programming approach for optimizing message flow regulations. By formulating the problem as a Markov Decision Process (MDP) and integrating reinforcement learning techniques, we develop an adaptive framework for network message regulation. Experimental results show our approach achieves a 25% reduction in queue length and a 30% improvement in resource utilization compared to conventional methods.
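The MDP formulation with a dynamic-programming solver can be illustrated with a toy admission-control model. The state space (queue length), actions, transition probabilities, and rewards below are all invented for illustration; the paper's model and its reinforcement learning component are not reproduced here.

```python
# Toy MDP: states are queue lengths 0..N, actions admit or throttle messages.
N = 5
ACTIONS = ["admit", "throttle"]


def transitions(s, a):
    """Return [(next_state, probability)]; admitting tends to grow the queue."""
    if a == "admit":
        return [(min(s + 1, N), 0.7), (max(s - 1, 0), 0.3)]
    return [(max(s - 1, 0), 0.8), (s, 0.2)]


def reward(s, a):
    """Penalize long queues, with a small penalty for throttled throughput."""
    return -s - (0.5 if a == "throttle" else 0.0)


def value_iteration(gamma=0.9, tol=1e-6):
    """Classic dynamic-programming solution of the toy MDP."""
    V = [0.0] * (N + 1)
    while True:
        delta = 0.0
        for s in range(N + 1):
            best = max(
                reward(s, a) + gamma * sum(p * V[t] for t, p in transitions(s, a))
                for a in ACTIONS
            )
            delta = max(delta, abs(best - V[s]))
            V[s] = best
        if delta < tol:
            return V
```

In the AI-enhanced setting described above, a learned policy would replace this exhaustive sweep when the state space is too large to enumerate.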
-
Behavior Estimation of Calf Groups Using RGB Cameras and Deep Learning International conference
D. Nishimoto, Thi Thi Zin, M. Aikawa
IEEE International Conference on Consumer Electronics – Taiwan (ICCE-TW), 2025 (Kaohsiung, Taiwan) 2025.7.17 IEEE Consumer Technology Society
Event date: 2025.7.16 - 2025.7.18
Language:English Presentation type:Oral presentation (general)
Venue:Kaohsiung, Taiwan Country:Taiwan, Province of China
This paper presents a non-contact, real-time behavior estimation system for calf groups on large-scale farms. Leveraging an RGB camera and deep learning techniques, the proposed method detects calves via YOLO and tracks them using a Hungarian + Weighted IoU + Re-identification framework to maintain consistent IDs. The Segment Anything Model 2 is employed to extract calf regions from each frame, and EfficientNetv2-L is used to identify individuals from these regions. By classifying postures (sitting, standing) and detecting specific intake behaviors (drinking milk, drinking water, eating), the system enables comprehensive health monitoring of each calf. Experiments conducted on 16 calves (Holstein and Jersey) achieved 91.33% MOTA in multi-object tracking, approximately 80% accuracy for posture classification, and 50–70% for intake behaviors. Furthermore, the integrated system processes five frames in about 0.70 seconds, meeting real-time requirements. These results suggest that the proposed approach can effectively reduce labor burdens, support early disease detection, and facilitate scalable livestock management.
-
A study on action recognition for the elderly using depth camera International conference
Remon Nakashima, Thi Thi Zin, Hiroki Tamura, Shinji Watanabe, Etsuo Chosa
IEEE International Conference on Consumer Electronics – Taiwan (ICCE-TW), 2025 (Kaohsiung, Taiwan) 2025.7.17 IEEE Consumer Technology Society
Event date: 2025.7.16 - 2025.7.18
Language:English Presentation type:Oral presentation (general)
Venue:Kaohsiung, Taiwan Country:Taiwan, Province of China
In Japan, the rapid aging of the population has exacerbated the shortage of caregiving staff, making the optimization of care environments imperative. Conventional video surveillance methods extract human regions to perform action recognition; however, these approaches often fail to capture detailed motion analysis. In this study, a depth camera-based system is proposed to achieve non-contact, privacy-preserving action recognition using human skeleton recognition. Specifically, human regions are first extracted using bounding box detection (BB), followed by action recognition based on Keypoint-based pose estimation. The estimated Keypoints capture detailed joint positions, and their structural relationships are modeled using a Graph Convolutional Network (GCN). Furthermore, a Transformer is employed to capture the temporal features of the skeletal data. This Keypoint-centric method distinguishes this approach from conventional methods and significantly enhances the granularity of action recognition.
-
Research on Feature Extraction for Prediction of Dystocia in Cows Using Image Processing Technology International conference
T. Murayama, Thi Thi Zin, I. Kobayashi, M. Aikawa
IEEE International Conference on Consumer Electronics – Taiwan (ICCE-TW), 2025 (Kaohsiung, Taiwan) 2025.7.17 IEEE Consumer Technology Society
Event date: 2025.7.16 - 2025.7.18
Language:English Presentation type:Oral presentation (general)
Venue:Kaohsiung, Taiwan Country:Taiwan, Province of China
In dairy farming, an aging operator population and a shortage of successors have led to a decline in the number of farms rearing milking cows, while the number of milking cows per farm is increasing. Under these circumstances, effective calving management has become critical. Calving fatalities cause significant economic losses to dairy operations, and dystocia accounts for approximately 20% of these cases. Without proper intervention during dystocia, the risk of fatal incidents rises and the labor burden on farmers increases. Therefore, there is a strong demand for technology that can detect early signs of calving and reduce accidents. In this study, a ceiling-mounted 360° camera was used to record cow behavior before calving. Quantitative features—including posture changes, tail-up behavior, and walking distance—were extracted and computed to develop indicators effective for predicting the onset of calving and detecting dystocia.
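Two of the quantitative features named above, walking distance and posture changes, reduce to simple computations over per-frame tracking output. The sketch below is illustrative; the function names and input format (per-frame (x, y) positions and posture labels) are assumptions, not the study's code.

```python
import math


def walking_distance(track):
    """Total distance walked, given a list of (x, y) positions per frame."""
    return sum(math.dist(a, b) for a, b in zip(track, track[1:]))


def posture_changes(postures):
    """Count posture transitions (e.g. lying -> standing) in a label sequence;
    a rising transition rate before calving is one of the indicators studied."""
    return sum(a != b for a, b in zip(postures, postures[1:]))
```

Computed per time window from the 360° camera's tracking output, features like these form the pre-calving indicators the study evaluates.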