THI THI ZIN


Affiliation

Engineering Educational Research Section, Information and Communication Technology Program

Title

Professor

Homepage

https://www.cc.miyazaki-u.ac.jp/imagelab/members.html

Degree

  • Doctor of Engineering ( 2007.3   Osaka City University )

  • Master of Engineering ( 2004.3   Osaka City University )

  • Master of Information Science ( 1999.5   University of Computer Studies, Yangon (UCSY) )

  • B.Sc (Hons) (Mathematics) ( 1995.5   University of Yangon (UY) )

Research Interests

  • Image Processing and Its Application

  • Visualization of factory work processes

  • Research and development utilizing advanced image processing and AI technologies

  • 24-hour monitoring system for the elderly to support independent living

  • ICT Farm Monitoring System

  • Perceptual information processing

Research Areas

  • Informatics / Perceptual information processing / Image Processing

  • Informatics / Database

  • Life Science / Animal production science

 

Papers

  • AI-powered visual E-monitoring system for cattle health and wealth Reviewed

    Aung Si Thu Moe, Pyke Tin, M. Aikawa, I. Kobayashi, Thi Thi Zin

    Smart Agricultural Technology   12   2025.12


    Authorship:Last author, Corresponding author   Language:English   Publishing type:Research paper (scientific journal)   Publisher:Smart Agricultural Technology  

    The livestock industry is experiencing a major transformation through the integration of artificial intelligence (AI) and advanced visual e-monitoring technologies. This study presents an AI-powered cattle health monitoring system that combines real-time computer vision, edge computing, and mobile applications to enhance animal welfare and farm productivity. The system employs a multi-camera setup, comprising RGB, RGB-D, and ToF depth cameras, strategically deployed across four functional zones of a cattle barn: the milking parlor, return lane, feeding area, and resting space. Through integrated deep learning algorithms, the platform performs key health-related tasks, including ear-tag, body-based, and face-based cattle identification, body condition scoring (BCS), lameness detection, feeding time estimation, and real-time localization. A farm-side desktop application processes live video streams from 22 cameras using multiprocessing, maintaining an average latency of 0.62 s per frame per camera. Captured data are stored in a structured MySQL database and accessed via a RESTful API by a user-side mobile application developed using Flutter and Clean Architecture. Experimental evaluation under continuous 24-hour operation demonstrated the system's stability and effectiveness in delivering actionable insights. Cattle identification achieved high accuracies: ear-tag 94.00 %, face-based 93.66 %, body-based 92.80 %, and body-color point cloud 99.55 %. The BCS prediction and lameness detection modules achieved average accuracies of 86.21 % and 88.88 %, respectively. Feedback from veterinarians and farm personnel during pilot testing confirmed its usability and practical relevance. While current limitations include computational demands and the need for improved model robustness, the proposed system establishes a scalable, non-invasive framework for intelligent livestock monitoring. It aligns with broader Green and Digital Transformation (GX and DX) initiatives toward sustainable smart farming practices.

    DOI: 10.1016/j.atech.2025.101300

    Scopus
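
    A minimal sketch (not the authors' code) of the multi-camera capture pattern the abstract above describes: one worker process per camera, each measuring per-frame latency. The camera URLs, the queue layout, and the analyze() stub are hypothetical placeholders.

    import time
    from multiprocessing import Process, Queue

    import cv2  # OpenCV, for RTSP/video capture


    def analyze(frame):
        """Placeholder for per-frame inference (identification, BCS, lameness, ...)."""
        time.sleep(0.01)  # stand-in for model latency


    def camera_worker(cam_id, url, results):
        cap = cv2.VideoCapture(url)
        while True:
            t0 = time.time()
            ok, frame = cap.read()
            if not ok:
                break
            analyze(frame)
            results.put((cam_id, time.time() - t0))  # per-frame latency report


    if __name__ == "__main__":
        urls = [f"rtsp://barn.example/cam{i}" for i in range(22)]  # hypothetical
        results = Queue()
        for i, u in enumerate(urls):
            Process(target=camera_worker, args=(i, u, results), daemon=True).start()
        time.sleep(5)  # let the workers run briefly in this sketch

    One process per camera keeps a slow stream from stalling the others, which is one plausible way to hold average latency near the reported 0.62 s per frame per camera.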

  • A Study on Machine Learning Approaches for Predicting Fetal pH Level Using Fetal Heart Rate Variability Reviewed International journal

    Cho Nilar Phyo, Tunn Cho Lwin, Pyae Phyo Kyaw, E. Kino, T. Ikenoue, Pyke Tin, Thi Thi Zin

    ICIC Express Letters Part B Applications   16 ( 8 )   879 - 886   2025.8


    Authorship:Last author, Corresponding author   Language:English   Publishing type:Research paper (scientific journal)   Publisher:ICIC Express Letters Part B Applications  

    Fetal well-being monitoring is essential for ensuring healthy labor outcomes. One non-invasive method for assessing fetal health during labor and delivery is analyzing fetal heart rate variability (FHRV), which can be used to predict fetal pH levels. This study compares different machine learning approaches for predicting fetal pH levels based on FHRV data collected during labor and delivery. The dataset used in this study includes FHRV signals together with corresponding umbilical cord blood gas measurements such as pH, which are used to train and evaluate the models. This study applies several machine learning algorithms and evaluates their performance using key metrics such as sensitivity, specificity, precision, F1-score, and accuracy. These metrics help determine which model is most accurate at predicting fetal pH levels based on FHRV characteristics. The results reveal that the support vector machine (SVM) model performs best, predicting fetal pH levels with an accuracy of 81.67% and outperforming the other algorithms. The findings of this study aim to contribute to the development of more reliable and accurate prediction models for assessing fetal well-being during labor, enhancing clinical decision-making and allowing for timely interventions and improved labor outcomes for both mother and baby.

    DOI: 10.24507/icicelb.16.08.879

    Scopus
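
    A hedged sketch of the kind of evaluation the abstract describes: an SVM classifier scored with sensitivity, specificity, precision, F1, and accuracy. The feature matrix X and binary pH labels y below are synthetic placeholders, not the study data.

    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.svm import SVC
    from sklearn.metrics import confusion_matrix

    rng = np.random.default_rng(0)
    y = np.repeat([0, 1], 100)                    # placeholder: 1 = abnormal pH
    X = rng.normal(size=(200, 7)) + y[:, None]    # synthetic, roughly separable FHRV features

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
    clf = SVC(kernel="rbf").fit(X_tr, y_tr)

    tn, fp, fn, tp = confusion_matrix(y_te, clf.predict(X_te)).ravel()
    sensitivity = tp / (tp + fn)                  # recall for the abnormal class
    specificity = tn / (tn + fp)
    precision = tp / (tp + fp)
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    print(f"sens={sensitivity:.2f} spec={specificity:.2f} "
          f"prec={precision:.2f} F1={f1:.2f} acc={accuracy:.2f}")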

  • Machine Learning-Based Prediction of Cattle Body Condition Score using 3D Point Cloud Surface Features Reviewed

    Pyae Phyo Kyaw, Thi Thi Zin, Pyke Tin, M. Aikawa, I. Kobayashi

    Proceedings of SPIE the International Society for Optical Engineering   13701   2025.7


    Authorship:Corresponding author   Language:English   Publishing type:Research paper (international conference proceedings)   Publisher:Proceedings of SPIE the International Society for Optical Engineering  

    Body Condition Score (BCS) of dairy cattle is a crucial indicator of their health, productivity, and reproductive performance throughout the production cycle. Recent advancements in computer vision techniques have led to the development of automated BCS prediction systems. This paper proposes a BCS prediction system that leverages 3D point cloud surface features to enhance accuracy and reliability. Depth images are captured from a top-view perspective and processed using a hybrid depth image detection model to extract the cattle’s back surface region. The extracted depth data is converted into point cloud data, from which various surface features are analyzed, including normal vectors, curvature, point density, and surface shape characteristics (planarity, linearity, and sphericity). Additionally, Fast Point Feature Histograms (FPFH), triangle mesh area, and convex hull area are extracted and evaluated using three optimized machine learning models: Random Forest (RF), K-Nearest Neighbors (KNN), and Gradient Boosting (GB). Model performance is assessed using different tolerance levels and error metrics, including Mean Absolute Error (MAE) and Mean Absolute Percentage Error (MAPE). Among the models, Random Forest demonstrates the highest performance, achieving accuracy rates of 51.36%, 86.21%, and 97.83% at 0, 0.25, and 0.5 tolerance levels, respectively, with an MAE of 0.161 and MAPE of 5.08%. This approach enhances the precision of BCS estimation, offering a more reliable and automated solution for dairy cattle monitoring and health management.

    DOI: 10.1117/12.3070481

    Scopus
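
    A sketch, under stated assumptions, of the tolerance-based evaluation the abstract reports (accuracy at 0 / 0.25 / 0.5 BCS tolerance plus MAE and MAPE). The per-cow descriptor here is a mean-pooled FPFH feature from Open3D; the point clouds and BCS labels are random placeholders, not the study data.

    import numpy as np
    import open3d as o3d
    from sklearn.ensemble import RandomForestRegressor


    def fpfh_descriptor(points):
        """Pool FPFH features over a back-surface point cloud into one 33-dim vector."""
        pcd = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(points))
        pcd.estimate_normals(o3d.geometry.KDTreeSearchParamHybrid(radius=0.05, max_nn=30))
        fpfh = o3d.pipelines.registration.compute_fpfh_feature(
            pcd, o3d.geometry.KDTreeSearchParamHybrid(radius=0.10, max_nn=100))
        return np.asarray(fpfh.data).mean(axis=1)


    rng = np.random.default_rng(1)
    X = np.stack([fpfh_descriptor(rng.normal(size=(500, 3))) for _ in range(40)])
    y = rng.uniform(2.5, 4.0, size=40)            # placeholder BCS labels

    model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X[:30], y[:30])
    pred = model.predict(X[30:])

    for tol in (0.0, 0.25, 0.5):                  # tolerance-level accuracy
        print(f"acc@{tol}: {np.mean(np.abs(pred - y[30:]) <= tol):.2%}")
    print("MAE:", np.mean(np.abs(pred - y[30:])))
    print("MAPE:", np.mean(np.abs(pred - y[30:]) / y[30:]) * 100, "%")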

  • Minimizing Resource Usage for Real-Time Network Camera Tracking of Black Cows Reviewed

    Aung Si Thu Moe, Thi Thi Zin, Pyke Tin, M. Aikawa, I. Kobayashi

    Proceedings of SPIE the International Society for Optical Engineering   13701   2025.7


    Authorship:Corresponding author   Language:English   Publishing type:Research paper (international conference proceedings)   Publisher:Proceedings of SPIE the International Society for Optical Engineering  

    Livestock plays a crucial role in the farming industry in meeting consumer demand. A livestock monitoring system helps track animal health while reducing labor requirements, and most livestock farms are small, family-owned operations. This study proposes a real-time black cow detection and tracking system using network cameras in memory- and disk-constrained environments. We employ the Detectron2 Mask R-CNN ResNeXt-101 model for black cow region detection and the ByteTrack algorithm for tracking. Unlike other deep learning tracking algorithms that rely on multiple features such as texture, color, shape, and size, ByteTrack tracks multiple objects by associating every detection box, which effectively reduces tracking ID errors and ID switches. Detecting and tracking black cows in real time is challenging due to their uniform color and similar sizes. To optimize performance on low-specification machines, we apply ONNX (Open Neural Network Exchange) to the Detectron2 detection model for optimization and quantization. The system processes input images from network cameras, enhances color during preprocessing, and detects and tracks black cows efficiently. Our system achieves 95.97% mAP@0.75 detection accuracy, with tracking accuracies of 97.16% in daytime and 94.83% in nighttime video; it effectively tracks individual black cows, minimizing duplicate IDs and improving tracking after missed detections or occlusions. The system is designed to operate on machines with minimal hardware requirements.

    DOI: 10.1117/12.3070347

    Scopus
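
    A minimal sketch of the optimization step the abstract mentions: applying dynamic (post-training) quantization to an ONNX model with ONNX Runtime. The file names are hypothetical; Detectron2's export tooling would produce the input model.

    from onnxruntime.quantization import QuantType, quantize_dynamic

    quantize_dynamic(
        model_input="mask_rcnn_x101.onnx",        # hypothetical exported detector
        model_output="mask_rcnn_x101_int8.onnx",  # smaller model for low-spec machines
        weight_type=QuantType.QInt8,              # quantize weights to 8-bit integers
    )

    Dynamic quantization needs no calibration dataset, which makes it a plausible fit for the memory- and disk-constrained farm machines the paper targets.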

  • Automatic cattle identification system based on color point cloud using hybrid PointNet++ Siamese network Reviewed International journal

    Pyae Phyo Kyaw, Pyke Tin, M. Aikawa, I. Kobayashi, Thi Thi Zin

    Scientific Reports   15   Article 21938   2025.7


    Authorship:Last author, Corresponding author   Language:English   Publishing type:Research paper (scientific journal)   Publisher:Scientific Reports  

    Cattle health monitoring and management systems are essential for farmers and veterinarians, as traditional manual health checks can be time-consuming and labor-intensive. A critical aspect of such systems is accurate cattle identification, which enables effective health monitoring. Existing 2D vision-based identification methods have demonstrated promising results; however, their performance is often compromised by environmental factors, variations in cattle texture, and noise. Moreover, these approaches require model retraining to recognize newly introduced cattle, limiting their adaptability in dynamic farm environments. To overcome these challenges, this study presents a novel cattle identification system based on color point clouds captured using RGB-D cameras. The proposed approach employs a hybrid detection method that first applies a 2D depth image detection model before converting the detected region into a color point cloud, allowing for robust feature extraction. A customized lightweight tracking approach is implemented, leveraging Intersection over Union (IoU)-based bounding box matching and mask size analysis to consistently track individual cattle across frames. The identification framework is built upon a hybrid PointNet++ Siamese Network trained with a triplet loss function, ensuring the extraction of discriminative features for accurate cattle identification. By comparing extracted features against a pre-stored database, the system successfully predicts cattle IDs without requiring model retraining. The proposed method was evaluated on a dataset consisting predominantly of Holstein cows along with a few Jersey cows, achieving an average identification accuracy of 99.55% over a 13-day testing period. Notably, the system can successfully detect and identify unknown cattle without requiring model retraining. This cattle identification research is intended to be integrated into a comprehensive cattle health monitoring system encompassing lameness detection, body condition score evaluation, and weight estimation, all based on point cloud data and deep learning techniques.

    DOI: 10.1038/s41598-025-08277-8

    Scopus

    PubMed
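
    A hedged sketch of the identification scheme the abstract describes: an embedding network trained with triplet loss, then nearest-neighbor lookup against a pre-stored gallery so that new cattle need no retraining. The tiny MLP below stands in for the hybrid PointNet++ Siamese backbone, and all data are placeholders.

    import torch
    import torch.nn as nn

    embed = nn.Sequential(nn.Linear(256, 128), nn.ReLU(), nn.Linear(128, 64))
    loss_fn = nn.TripletMarginLoss(margin=0.5)

    # One training step: pull anchor toward positive, push away from negative.
    anchor, positive, negative = (torch.randn(32, 256) for _ in range(3))
    loss = loss_fn(embed(anchor), embed(positive), embed(negative))
    loss.backward()

    # Inference: compare a query embedding against stored per-cow embeddings.
    gallery = {f"cow_{i}": torch.randn(64) for i in range(10)}  # placeholder database
    query = embed(torch.randn(1, 256)).squeeze(0)
    pred_id = min(gallery, key=lambda k: torch.dist(query, gallery[k]).item())
    print("predicted ID:", pred_id)

    Because identification reduces to a distance lookup, enrolling an unknown cow only means adding its embedding to the gallery, which matches the no-retraining claim.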


Books

MISC

  • Preface International coauthorship

    Pan J.S., Thi Thi Zin, Sung T.W., Lin J.C.W.

    Lecture Notes in Electrical Engineering   1322 LNEE   v - vii   2025


    Authorship:Corresponding author   Language:English   Publishing type:Rapid communication, short report, research note, etc. (scientific journal)   Publisher:Lecture Notes in Electrical Engineering  

    Scopus

  • A Study on Depth Camera-Based Estimation of Elderly Patient Actions

    Remon NAKASHIMA, Thi Thi Zin, Kazuhiro KONDO and Shinji Watanabe

    37   46 - 52   2024.12


    Authorship:Corresponding author   Language:Japanese   Publishing type:Research paper, summary (national, other academic conference)   Publisher:Biomedical Fuzzy Systems Association  

  • A Study on the Possibility of Distinguishing between Parkinson's disease and Essential Tremor using Motor Symptoms Observed by an RGB camera

    Proceedings of the 35th Annual Conference of Biomedical Fuzzy Systems Association (BMFSA2022)   2022.12


    Authorship:Corresponding author   Language:Japanese   Publishing type:Research paper, summary (national, other academic conference)   Publisher:Biomedical Fuzzy Systems Association  

  • Tracking A Group of Black Cows Using SORT based Tracking Algorithm

    Cho Cho Aye, Thi Thi Zin, M. Aikawa, I. Kobayashi

    Proceedings of the 35th Annual Conference of the Biomedical Fuzzy Systems Association (BMFSA2022)   2022.12


    Authorship:Corresponding author   Language:English   Publishing type:Research paper, summary (national, other academic conference)   Publisher:Biomedical Fuzzy Systems Association  

  • Artificial Intelligence Topping on Spectral Analysis for Lameness Detection in Dairy Cattle

    Thi Thi Zin, Ye Htet, San Chain Tun and Pyke Tin

    Proceedings of the 35th Annual Conference of the Biomedical Fuzzy Systems Association (BMFSA2022)   2022.12


    Authorship:Lead author, Corresponding author   Language:English   Publishing type:Research paper, summary (national, other academic conference)   Publisher:Biomedical Fuzzy Systems Association  


Presentations

  • Advancing Neonatal Monitoring Using Heart Rate Variability with Machine Learning Models International conference

    Tunn Cho Lwin, Thi Thi Zin, Pyke Tin, E. Kino and T. Ikenoue

    The Seventh International Conference on Smart Vehicular Technology, Transportation, Communication and Applications (VTCA 2025)  (Fuzhou, Fujian, China)  2025.11.22  Technically sponsored by Southwest Jiaotong University and Nanchang Institute of Technology


    Event date: 2025.11.21 - 2025.11.23

    Language:English   Presentation type:Oral presentation (general)  

    Venue:Fuzhou, Fujian, China   Country:China  

    Accurate assessment of neonatal respiratory status is critical for early intervention and improved clinical outcomes. Umbilical cord blood partial pressure of carbon dioxide (PCO2) is a key marker of respiratory efficiency, but its measurement requires invasive sampling. This study proposes a non-invasive, machine learning–based framework to predict abnormal PCO2 levels using fetal heart rate variability (FHRV) features. Seven HRV features were initially extracted, and Principal Component Analysis identified M, S, and entropy as the most informative for classification. Patients were divided into normal (G2) and abnormal (G1) groups based on a PCO2 threshold of 35 mmHg. To address class imbalance, oversampling was applied to the training dataset. Classification experiments with SVM (linear and Gaussian) and k-nearest neighbor (kNN) classifiers demonstrated that oversampling improved sensitivity for the minority abnormal group while maintaining high precision for the majority normal group. On the testing dataset, kNN achieved the most balanced performance, with 85% precision and 83% recall for abnormal cases. These results highlight the potential of combining HRV analysis with machine learning to provide continuous, non-invasive, and real-time monitoring of neonatal respiratory status, offering a promising tool to guide clinical decision-making and reduce dependence on invasive procedures.
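
    A sketch, under stated assumptions, of the pipeline this abstract outlines: PCA to reduce the HRV features, oversampling of the minority (abnormal PCO2) class, then kNN classification. All data below are synthetic placeholders.

    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.utils import resample

    rng = np.random.default_rng(2)
    X = rng.normal(size=(120, 7))                 # 7 HRV features per patient
    y = (rng.random(120) < 0.2).astype(int)       # 1 = abnormal (minority) group

    X3 = PCA(n_components=3).fit_transform(X)     # keep 3 informative components

    # Oversample the minority class until the training set is balanced.
    minority = X3[y == 1]
    extra = resample(minority, replace=True,
                     n_samples=(y == 0).sum() - len(minority), random_state=0)
    X_bal = np.vstack([X3, extra])
    y_bal = np.concatenate([y, np.ones(len(extra), dtype=int)])

    knn = KNeighborsClassifier(n_neighbors=5).fit(X_bal, y_bal)
    print("train accuracy:", knn.score(X_bal, y_bal))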

  • Digital Cattle Twins: Revolutionizing Calving Management Through Markovian Prediction Systems International conference

    Thi Thi Zin, Tunn Cho Lwin, Aung Si Thu Moe, Pyae Phyo Kyaw, M. Aikawa and Pyke Tin

    The Seventh International Conference on Smart Vehicular Technology, Transportation, Communication and Applications (VTCA 2025)  (Fuzhou, Fujian, China)  2025.11.22  Technically sponsored by Southwest Jiaotong University and Nanchang Institute of Technology


    Event date: 2025.11.21 - 2025.11.23

    Language:English   Presentation type:Oral presentation (general)  

    Venue:Fuzhou, Fujian, China   Country:China  

    The integration of digital twin technology with livestock management introduces new possibilities in precision livestock farming. Our research proposes the Digital Cattle Twin (DCT) system, a transformative approach to managing cattle calving during the critical periparturient period. This system merges Markovian modeling with real-time visual monitoring to enhance predictive accuracy in calving management. By modeling calving as a sequence of interconnected states within a Markov chain, the DCT predicts progression from early labor to postpartum recovery with high precision. Real-time probability calculations enable early detection of complications and optimal intervention timing. The system integrates diverse data streams, including vaginal temperature sensors for pre-calving temperature drops, AI-based video analysis for behavioral and movement changes, heart rate variability for stress detection, and spatial tracking for calving readiness. A predictive analytics engine processes this multimodal data, achieving high accuracy in detecting risks. The DCT's adaptive learning architecture refines predictions using both individual and herd-level patterns, enabling a proactive rather than reactive management approach. Beyond calving, this framework illustrates how mathematical modeling and digital twins can redefine livestock management, opening pathways for broader applications in animal health, welfare, and production optimization.
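
    A minimal sketch (with assumed, illustrative numbers, not the paper's) of the Markovian core: calving modeled as a chain of states whose occupancy probabilities are propagated forward, so a rising probability of active labor or beyond can trigger an alert.

    import numpy as np

    states = ["pre-labor", "early labor", "active labor", "delivery", "postpartum"]
    # Row-stochastic transition matrix per monitoring interval (illustrative).
    P = np.array([
        [0.90, 0.10, 0.00, 0.00, 0.00],
        [0.00, 0.80, 0.20, 0.00, 0.00],
        [0.00, 0.00, 0.70, 0.30, 0.00],
        [0.00, 0.00, 0.00, 0.60, 0.40],
        [0.00, 0.00, 0.00, 0.00, 1.00],
    ])

    p = np.array([1.0, 0.0, 0.0, 0.0, 0.0])   # the cow starts in pre-labor
    for step in range(1, 25):                  # propagate over 24 intervals
        p = p @ P
        if p[2:].sum() > 0.5:                  # P(active labor or beyond)
            print(f"interval {step}: P(active labor or beyond) = "
                  f"{p[2:].sum():.2f} -> alert")
            break

    In the full system the transition probabilities would be conditioned on the sensor streams (temperature, HRV, movement) rather than fixed, but the forward propagation step is the same.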

  • An End-to-End Computer Vision Pipeline for Cow Ear Tag Number Recognition Using YOLOv11 and a Hybrid EfficientNet-NRTR Model International conference

    San Chain Tun, Pyke Tin, M. Aikawa, I. Kobayashi and Thi Thi Zin

    The 9th International Conference on Information Technology (InCIT2025)  (Phuket, Thailand)  2025.11.13  IEEE Thailand Section (IEEE Computer Society Thailand Chapter)


    Event date: 2025.11.12 - 2025.11.14

    Language:English   Presentation type:Oral presentation (general)  

    Venue:Phuket, Thailand   Country:Thailand  

    Automated identification of individual livestock is a critical component of precision livestock farming. This study presents a robust, real-time system for recognizing four-digit ear tag numbers on cows using a multi-stage pipeline. The pipeline consists of ROI extraction, YOLOv11-based detection and instance segmentation of cow heads and ear tags, a customized tracking algorithm for persistent identity assignment, and an NRTR-based OCR model with EfficientNet backbones for number recognition. The customized tracker leverages Intersection over Union (IoU), frame-holding, and bounding box position logic to handle missed detections and ensure accurate tracking. The OCR model predicts digits 0-9 and uses "x" for unknown characters, providing reliable sequence recognition from cropped ear tag images. The system was evaluated on a real-world dataset collected over five days on a dairy farm. The overall detection and tracking accuracy achieved 96.18%, while OCR accuracy for EfficientNet backbones B4 to B7 reached 91.54%, 93.85%, 93.08%, and 95.38%, respectively. Results demonstrate high accuracy and robustness across all stages, confirming the practical viability of the approach. This integrated system offers a scalable solution for automated cattle identification and monitoring in operational farm environments.
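
    A hedged sketch of the "customized tracking" idea the abstract describes: greedy IoU matching of detections to existing tracks, with a frame-holding counter so an identity survives a few missed detections. The thresholds are assumptions, not the paper's values.

    def iou(a, b):
        """Intersection over Union of two (x1, y1, x2, y2) boxes."""
        x1, y1 = max(a[0], b[0]), max(a[1], b[1])
        x2, y2 = min(a[2], b[2]), min(a[3], b[3])
        inter = max(0, x2 - x1) * max(0, y2 - y1)
        area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
        return inter / (area(a) + area(b) - inter + 1e-9)


    def update_tracks(tracks, detections, iou_thr=0.3, max_missed=5):
        """tracks: {id: {'box': box, 'missed': n}}; detections: list of boxes."""
        unmatched = list(detections)
        for tid, tr in list(tracks.items()):
            best = max(unmatched, key=lambda d: iou(tr["box"], d), default=None)
            if best is not None and iou(tr["box"], best) >= iou_thr:
                tr["box"], tr["missed"] = best, 0   # matched: refresh position
                unmatched.remove(best)
            else:
                tr["missed"] += 1                   # hold the identity for a while
                if tr["missed"] > max_missed:
                    del tracks[tid]                 # give up after N misses
        for d in unmatched:                         # new IDs for leftover boxes
            tracks[max(tracks, default=0) + 1] = {"box": d, "missed": 0}
        return tracks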

  • Deep Sequential Gait Feature Learning for Long-Term Person Re-Identification in Real-World Environments International conference

    Cho Nilar Phyo, Thi Thi Zin and Pyke Tin

    The 9th International Conference on Information Technology (InCIT2025)  (Phuket, Thailand and Online)  2025.11.13  IEEE Thailand Section (IEEE Computer Society Thailand Chapter)


    Event date: 2025.11.12 - 2025.11.14

    Language:English   Presentation type:Oral presentation (general)  

    Venue:Phuket, Thailand and Online   Country:Thailand  

    This paper presents a novel gait-based framework for long-term person re-identification in real-world environments. Unlike appearance-based methods, which are often sensitive to illumination, clothing changes, and occlusion, our approach leverages gait dynamics captured via dense optical flow and deep feature learning. We integrate ResNet101 for spatial feature extraction and an LSTM network for temporal sequence modeling, enabling robust representation of human walking patterns across extended time periods. The experimental results on gait datasets demonstrate that the proposed system achieves strong recognition performance in terms of accuracy, mean Average Precision (mAP), and recall, as well as stability under challenging real-world conditions, highlighting its potential for surveillance and security applications.
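
    A minimal sketch, with assumed sizes, of the two-stage model the abstract names: a ResNet101 backbone for per-frame spatial features and an LSTM over the frame sequence. The input here is RGB clips for simplicity; the paper feeds dense optical flow.

    import torch
    import torch.nn as nn
    from torchvision.models import resnet101


    class GaitNet(nn.Module):
        def __init__(self, num_ids, hidden=256):
            super().__init__()
            backbone = resnet101(weights=None)
            backbone.fc = nn.Identity()        # keep the 2048-d pooled features
            self.backbone = backbone
            self.lstm = nn.LSTM(2048, hidden, batch_first=True)
            self.head = nn.Linear(hidden, num_ids)

        def forward(self, clips):              # clips: (B, T, 3, H, W)
            b, t = clips.shape[:2]
            feats = self.backbone(clips.flatten(0, 1)).view(b, t, -1)
            _, (h, _) = self.lstm(feats)       # last hidden state summarizes the walk
            return self.head(h[-1])


    logits = GaitNet(num_ids=50)(torch.randn(2, 8, 3, 224, 224))
    print(logits.shape)  # torch.Size([2, 50])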

  • Estimation of Cows Weight using a Depth Camera International conference

    S. Araki, K. Shiiya, Thi Thi Zin, I. Kobayashi

    2025 IEEE 14th Global Conference on Consumer Electronics (GCCE2025)  (Osaka, Japan)  2025.9.25  IEEE Consumer Technology Society


    Event date: 2025.9.23 - 2025.9.26

    Language:English   Presentation type:Oral presentation (general)  

    Venue:Osaka, Japan   Country:Japan  

    Traditional methods for measuring cattle weight require special equipment and often involve physical contact with the animals, increasing the risk of accidents. As the workload for dairy farmers grows due to a decreasing workforce, there is a strong need for safer and more efficient solutions. In this study, we propose a contactless method to estimate cattle weight using a depth camera. This study differentiates itself from other studies by placing the camera above the cow, making it more versatile. We extracted depth images and calculated key body measurements: height, body length, and belly width. Based on these values, we created a regression formula to estimate weight. Our results show that it is possible to estimate cattle weight roughly using only the values obtained from depth images. This method reduces the risk of injuries during measurement and offers a more efficient way to manage cattle health and nutrition.
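
    A hedged sketch of the regression step described above: fitting cattle weight from the three depth-derived measurements (height, body length, belly width). The measurement values and weights are placeholders, not the study data.

    import numpy as np
    from sklearn.linear_model import LinearRegression

    # columns: height, body length, belly width (placeholder values in cm)
    X = np.array([[140, 160, 70], [135, 155, 66], [148, 170, 75], [142, 165, 72]])
    w = np.array([620.0, 580.0, 690.0, 645.0])   # placeholder weights in kg

    reg = LinearRegression().fit(X, w)
    print("coefficients:", reg.coef_, "intercept:", reg.intercept_)
    print("predicted:", reg.predict([[144, 162, 71]]))  # a new cow's measurements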


Awards

  • IEEE GCCE 2025 Excellent Student Paper Award (Outstanding Prize)

    2025.9   2025 IEEE 14th Global Conference on Consumer Electronics (GCCE 2025)   A Conceptual Framework for Neonatal Motor Activity Monitoring Using Digital Twin Technology and Computer Vision: A Preliminary Study

    Remon Nakashima, Thi Thi Zin and Yuki Kodama


    Award type:Award from international society, conference, symposium, etc.  Country:Japan

    Continuous non‑contact monitoring of neonatal motor activity in the neonatal intensive care unit (NICU) is crucial for early detection of neurological disorders and for guiding timely clinical interventions. We introduce an infrared‑driven skeleton‑estimation prototype designed for real‑time operation that generates a live virtual “digital twin” of the infant’s posture to support clinician assessment. A deep‑learning pose model was fine‑tuned on a bespoke infrared key‑point dataset, and three motion‑quantification filters were evaluated: raw differencing (Method A), center‑aligned suppression (Method B), and a newly proposed skeleton template‑matching filter (Method C). Tests on a life‑sized neonatal mannequin confirmed centimetric joint‑localization accuracy, reliable detection of 50‑pixel hand displacements, and reduction of simulated camera‑shake artifacts to within five pixels. Building on these results, a follow‑up evaluation on pre‑term neonates showed that Method C suppressed static key‑point noise by 78 % while preserving physiological motion. This combined mannequin and in‑vivo evidence demonstrates the clinical feasibility of our infrared digital‑twin framework and establishes a foundation for automated assessment of pre‑term motor development.
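
    A minimal sketch (assumed data layout) of the simplest of the three filters named above, raw differencing (Method A): motion is quantified as the mean displacement of estimated keypoints between consecutive frames.

    import numpy as np

    rng = np.random.default_rng(3)
    # (frames, keypoints, xy): placeholder skeleton track from the pose model
    track = np.cumsum(rng.normal(scale=2.0, size=(100, 17, 2)), axis=0)

    displacement = np.linalg.norm(np.diff(track, axis=0), axis=2)  # per joint
    motion = displacement.mean(axis=1)                             # per frame
    print("frames flagged as moving:", int((motion > 3.0).sum()))  # assumed threshold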

  • ORAL PRESENTATION AWARD

    2025.9   2025 IEEE 14th Global Conference on Consumer Electronics (GCCE 2025)   A Conceptual Framework for Neonatal Motor Activity Monitoring Using Digital Twin Technology and Computer Vision: A Preliminary Study

    Remon Nakashima, Thi Thi Zin and Yuki Kodama


    Award type:Award from international society, conference, symposium, etc.  Country:Japan


  • Best Presentation Award

    2025.8   The 19th International Conference on Innovative Computing, Information and Control (ICICIC2025)   Depth Camera-Based Analysis of Elderly Behavior for Risk Detection Using Skeletal Data

    Remon Nakashima, Thi Thi Zin, H. Tamura, S. Watanabe


    Award type:Award from international society, conference, symposium, etc.  Country:Japan

    We present a non-contact, privacy-preserving monitoring system that estimates behavioral risk in elderly-care rooms using depth cameras. First, each video frame is processed to detect individuals and extract 13 skeletal keypoints via a YOLO-based person detector and pose estimator. These keypoints are fed into a two-stage model comprising a graph convolutional network (GCN) and a Transformer encoder, which capture spatial and temporal movement patterns. To contextualize actions, we apply semantic segmentation to identify key regions such as beds and chairs. A rule-based framework then integrates action predictions with spatial overlap between keypoints and environment masks to assign one of three risk levels: Safe, Attention, or Danger. For robustness, we apply temporal smoothing and fuse outputs from two depth cameras. Finally, we design and implement a lightweight graphical user interface (GUI) to visualize risk levels and issue real-time alerts. Experimental results show an overall accuracy of 89.8 % and a hazard-detection accuracy of 74.3 %.
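
    A hedged sketch of the rule-based fusion step described above: an action label plus the overlap between skeleton keypoints and an environment mask (e.g. the bed region) maps to one of the three risk levels. The rules and shapes here are illustrative assumptions.

    import numpy as np

    def risk_level(action, keypoints, bed_mask):
        """keypoints: (13, 2) pixel coords; bed_mask: binary HxW segmentation."""
        ys = keypoints[:, 1].astype(int).clip(0, bed_mask.shape[0] - 1)
        xs = keypoints[:, 0].astype(int).clip(0, bed_mask.shape[1] - 1)
        on_bed = bed_mask[ys, xs].mean()        # fraction of joints over the bed
        if action == "falling":
            return "Danger"
        if action == "standing" and on_bed > 0.5:
            return "Attention"                  # e.g. standing on the bed
        return "Safe"

    mask = np.zeros((480, 640), dtype=np.uint8)
    mask[300:480, 200:500] = 1                  # placeholder bed region
    print(risk_level("standing", np.full((13, 2), [350.0, 400.0]), mask))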

  • Silver Award of the Best Oral Presentation

    2025.8   The 2nd International Conference on Agricultural Innovation and Natural Resources   Vision-Driven Detection of Aquatic Animals for Precision Nutritional Control

    Aung Si Thu Moe, Kittichon U-TAYNAPUN, Nion CHIRAPONGSATONKUL, Pyke Tin, Thi Thi Zin


    Award type:Award from international society, conference, symposium, etc.  Country:Thailand

    Aquatic farming is a vital component of Thailand’s agricultural economy, but it faces ongoing challenges in managing aquatic animal nutrition and determining accurate feed requirements. Traditional feeding methods often lead to overfeeding or underfeeding, increasing operational costs and raising environmental concerns. This study introduces a vision-driven approach to enhance precision nutrition management in controlled pond environments. We evaluate the feed preferences of aquatic animals across four types of feed: PSB Saiyai Green, PSB Saiyai Brown, Control, and PSB Saiyai Dark Red using advanced computer vision techniques. A small-scale experimental pond was constructed. A top-mounted camera captures real-time footage across four designated feed regions. Light bulbs ensure consistent illumination for clear visibility. Our system leverages a custom lightweight distillation framework based on the YOLOv11x model to detect and count aquatic animals in each region efficiently and accurately. The analysis delivers actionable insights into feeding behavior and preferences, enabling data-driven, optimized feeding strategies. This method supports the development of smart aquaculture practices, promoting sustainability and improved nutritional management in Thailand's aquatic farming industry.
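
    A minimal sketch (assumed geometry) of the counting step described above: detections from the YOLO model are binned into the four feed regions by box center, giving per-feed counts that indicate preference.

    from collections import Counter

    # Quadrants of the pond as the four feed regions (hypothetical layout).
    REGIONS = {"PSB Saiyai Green": (0, 0), "PSB Saiyai Brown": (1, 0),
               "Control": (0, 1), "PSB Saiyai Dark Red": (1, 1)}

    def region_of(box, width=1280, height=720):
        cx, cy = (box[0] + box[2]) / 2, (box[1] + box[3]) / 2
        quad = (int(cx >= width / 2), int(cy >= height / 2))
        return next(name for name, q in REGIONS.items() if q == quad)

    boxes = [(100, 100, 140, 130), (900, 500, 950, 540), (1000, 80, 1040, 110)]
    print(Counter(region_of(b) for b in boxes))  # animals counted per feed region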

  • Best Presentation Award

    2025.7   The 12th IEEE International Conference on Consumer Electronics – Taiwan   A study on action recognition for the elderly using depth camera

    Remon Nakashima, Thi Thi Zin, H. Tamura, S. Watanabe, E. Chosa


    Award type:Award from international society, conference, symposium, etc.  Country:Taiwan, Province of China

    In this study, a depth camera-based system is proposed to achieve non-contact, privacy-preserving action recognition using human skeleton recognition. Specifically, human regions are first extracted using bounding box (BB) detection, followed by action recognition based on keypoint-based pose estimation. The estimated keypoints capture detailed joint positions, and their structural relationships are modeled with a Graph Convolutional Network (GCN). Furthermore, a Transformer is employed to capture the temporal features of the skeletal data. This keypoint-centric pipeline differentiates our approach from conventional, silhouette-level methods and significantly enhances the granularity of action recognition.


Grant-in-Aid for Scientific Research

  • Enhanced AI-Driven Image Analysis for Early Mycoplasma Detection in Dairy Calves for Innovations in Livestock Health Management

    Grant number:25K15232  2025.04 - 2028.03

    Japan Society for the Promotion of Science (JSPS)  Grants-in-Aid for Scientific Research  Grant-in-Aid for Scientific Research (C) (General)


    Authorship:Coinvestigator(s) 

    Livestock farming is an important industry that accounts for more than 30% of Japan's total agricultural output, but reduced productivity caused by inadequate livestock management is a serious problem. Its main cause is the shorter observation time per animal brought about by changes in husbandry practices: on farms where herds are growing larger and farmers are aging, it is difficult to keep observing animals for abnormalities and changes 24 hours a day, 365 days a year.
    The applicants have focused on algorithmic analysis of non-contact, non-invasive sensor data and have developed original algorithms that detect estrus in cattle from depth and video images. In this research, we apply these technologies to develop a labor-saving, 24-hour livestock management system that automatically detects estrus and abnormalities during calving monitoring.

  • Realizing sustainable dairy farming through monitoring of cattle feeding behavior using AI and image data analysis

    Grant number:25K15158  2025.04 - 2028.03

    Japan Society for the Promotion of Science (JSPS)  Grants-in-Aid for Scientific Research  Grant-in-Aid for Scientific Research (C) (General)


    Authorship:Principal investigator 

    Livestock farming is an important industry that accounts for more than 30% of Japan's total agricultural output, but reduced productivity caused by inadequate livestock management is a serious problem. Its main cause is the shorter observation time per animal brought about by changes in husbandry practices: on farms where herds are growing larger and farmers are aging, it is difficult to keep observing animals for abnormalities and changes 24 hours a day, 365 days a year.
    The applicants have focused on algorithmic analysis of non-contact, non-invasive sensor data and have developed original algorithms that detect estrus in cattle from depth and video images. In this research, we apply these technologies to develop a labor-saving, 24-hour livestock management system that automatically detects estrus and abnormalities during calving monitoring.

  • Research on a calving monitoring system for cattle

    Grant number:18J14542  2018.04 - 2020.03

    Grants-in-Aid for Scientific Research  Grant-in-Aid for JSPS Fellows

    須見 公祐, Thi Thi Zin (host researcher)


    Authorship:Coinvestigator(s) 

    Wearable sensors, which are expensive despite limited accuracy and durability, and visual monitoring of camera footage, which imposes a heavy physical and mental burden, are rarely usable at realistic cost on increasingly large-scale livestock farms. This research therefore aims to reduce the burden on both farmers and cattle by developing a non-contact calving management system that uses footage from surveillance cameras.
    Cattle naturally behave in groups called herds. When calving approaches, a cow is moved to a dedicated calving pen. Because two or more cows are often placed in the pen at the same time, the system must determine which cow has begun calving, which requires individual identification and tracking. Detection then follows the successive stages of calving behavior: the features to extract include whether the tail is raised, whether the cow is standing or lying down, whether restlessness and movement increase, whether the calf has been delivered, and whether the mother is licking the calf, and we develop algorithms that automatically detect abnormalities and issue alerts at each stage. Whether calving behavior has occurred is judged by learning the importance (weight) of each feature from these data. As the final goal, in order to detect abnormal behavior such as dystocia, we will accumulate cases to build up a knowledge base and aim to develop a system that monitors each stage of calving and can detect and report abnormal situations.

  • Development of an estrus detection and calving monitoring system for cattle using image processing technology and non-contact sensors

    Grant number:17K08066  2017.04 - 2021.03

    Grants-in-Aid for Scientific Research  Grant-in-Aid for Scientific Research (C)


    Authorship:Principal investigator 

    Livestock farming is an important industry that accounts for more than 30% of Japan's total agricultural output, but reduced productivity caused by inadequate livestock management is a serious problem. Its main cause is the shorter observation time per animal brought about by changes in husbandry practices: on farms where herds are growing larger and farmers are aging, it is difficult to keep observing animals for abnormalities and changes 24 hours a day, 365 days a year.
    The applicants have focused on algorithmic analysis of non-contact, non-invasive sensor data and have developed original algorithms that detect estrus in cattle from depth and video images. In this research, we apply these technologies to develop a labor-saving, 24-hour livestock management system that automatically detects estrus and abnormalities during calving monitoring.

  • Development of forensic imaging modality for person identification using integration method of feature correspondences between heterogeneous images

    Grant number:15K15457  2015.04 - 2018.03

    Grant-in-Aid for Scientific Research  Grant-in-Aid for challenging Exploratory Research


    Authorship:Coinvestigator(s) 


Available Technology