THI THI ZIN


Affiliation

Engineering Educational Research Section, Information and Communication Technology Program

Title

Professor

Homepage

https://www.cc.miyazaki-u.ac.jp/imagelab/members.html

Degree

  • Doctor of Engineering ( 2007.3   Osaka City University )

  • Master of Engineering ( 2004.3   Osaka City University )

  • Master of Information Science ( 1999.5   University of Computer Studies, Yangon (UCSY) )

  • B.Sc (Hons) (Mathematics) ( 1995.5   University of Yangon (UY) )

Research Interests

  • Image Processing and Its Application

  • Visualization of work processes in factories

  • Research and development utilizing advanced image processing and AI technologies

  • 24-hour monitoring system for the elderly to support independent living

  • ICT Farm Monitoring System

  • Perceptual information processing

Research Areas

  • Informatics / Perceptual information processing / Image Processing

  • Informatics / Database

  • Life Science / Animal production science

 

Papers

  • Non-contact Monitoring of Dystocia in Dairy Cows Using Keypoint Detection and Semantic Segmentation Reviewed International journal

    T. Murayama, Thi Thi Zin, I. Kobayashi, M. Aikawa

    The 2026 IEEE International Conference on Consumer Technology – Pacific (ICCT-Pacific 2026)   2026.3


    Authorship:Corresponding author   Language:English   Publishing type:Research paper (international conference proceedings)   Publisher:IEEE  

    In the dairy industry, labor shortages and the economic losses caused by calving accidents are significant issues. To address these problems, we propose a non-contact monitoring system using 360-degree cameras and deep learning techniques. This study focuses on constructing an automated workflow that detects cows, estimates their poses (standing or lying), and tracks individuals without attaching sensors to the animals. We employed YOLO11 for cow detection and keypoint extraction, and compared three models for pose estimation: Multilayer Perceptron (MLP), Gated Recurrent Unit (GRU), and Semantic Segmentation (DeepLabv3+). The experimental results showed that YOLO11 achieved a high detection accuracy (mAP@0.50: 99.47%) for bounding boxes. For pose estimation, the semantic segmentation approach with a ResNet101 backbone achieved the highest accuracy of 85.1%, outperforming keypoint-based methods. These results demonstrate the potential of the proposed system for basic behavioral monitoring in calving barns.
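
The paper's implementation is not part of this record; as a minimal sketch of the pose-classification step, the toy example below trains a small MLP (one of the three models compared) on flattened keypoint vectors. The keypoint count, data, and labels are invented stand-ins for YOLO11 output, not the authors' dataset.

```python
# Minimal sketch: standing/lying classification from flattened keypoints.
# Synthetic data stands in for YOLO11-extracted cow keypoints.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
n_frames, n_kpts = 400, 17                    # assumed keypoints per cow
X = rng.normal(size=(n_frames, n_kpts * 2))   # flattened (x, y) coordinates
y = (X[:, 1] > 0).astype(int)                 # toy label: 0 = lying, 1 = standing

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0)
clf.fit(X_tr, y_tr)
print(f"toy pose-classification accuracy: {clf.score(X_te, y_te):.2f}")
```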

  • A Study on Supporting Neurocognitive Disorder Assessment for Deaf Individuals Using a Sign Language Recognition System Reviewed International journal

    N. Shibahara, Thi Thi Zin, S. Ito, N. Takahashi, N. Takemoto

    The 2026 IEEE International Conference on Consumer Technology – Pacific (ICCT-Pacific 2026)   2026.3


    Authorship:Corresponding author   Language:English   Publishing type:Research paper (international conference proceedings)   Publisher:IEEE  

    The Mini-Mental State Examination (MMSE) is widely used for screening Neurocognitive Disorder (NCD); however, ensuring diagnostic accuracy for Deaf individuals remains a challenge due to factors such as the potential subjectivity and translation errors introduced by sign language interpreters. To address this issue, this study proposes an automated MMSE scoring system employing Japanese Sign Language (JSL) recognition based on skeletal keypoints. The proposed method utilizes MediaPipe Pose and Hands to extract feature points from examination videos and employs a Long Short-Term Memory (LSTM) model to classify sign language responses. Evaluation results using 5-fold cross-validation on a dataset of Deaf individuals demonstrated a high average classification accuracy of 92.75%. Furthermore, the system successfully performed automated scoring compliant with the MMSE protocol. These results indicate that the proposed system can enable objective cognitive assessment without interpreter intervention, thereby contributing to more accurate diagnoses for Deaf individuals.
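
As a rough illustration of the recognition stage described above, the sketch below runs an LSTM over per-frame keypoint sequences. The 150-dimensional feature size assumes MediaPipe Pose (33 landmarks) plus two Hands (21 landmarks each) with (x, y) coordinates; the tensors are random stand-ins and the class count is arbitrary.

```python
# Minimal sketch: LSTM classification of sign-language keypoint sequences.
import torch
import torch.nn as nn

class SignLSTM(nn.Module):
    def __init__(self, n_feats=150, hidden=128, n_classes=10):
        super().__init__()
        self.lstm = nn.LSTM(n_feats, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):                # x: (batch, frames, features)
        out, _ = self.lstm(x)
        return self.head(out[:, -1])     # classify from the final time step

model = SignLSTM()
clips = torch.randn(4, 60, 150)          # 4 clips, 60 frames, 150 keypoint dims
print(model(clips).shape)                # torch.Size([4, 10])
```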

  • Signal-based feature analysis of behavioral trajectories for predicting calving time and classifying assistance needs Reviewed International journal

    Wai Hnin Eaindrar Mg, Pyke Tin, M. Aikawa, K. Honkawa, Y. Horii, Thi Thi Zin

    Computers and Electronics in Agriculture   243   2026.3


    Authorship:Last author, Corresponding author   Language:English   Publishing type:Research paper (scientific journal)   Publisher:Elsevier B.V.  

    Accurately predicting calving time and recognizing when a cow needs help during delivery are essential for effective livestock management. These factors directly influence animal welfare, how labor is distributed on the farm, and overall productivity. Without close monitoring, calving complications can lead to serious health issues or even death for the cattle. Moreover, delayed assistance during difficult births (dystocia) can significantly harm both the cow and the calf. These problems remain challenging due to the subtle and highly variable nature of cattle behavior, especially within large-scale farming environments where continuous manual monitoring is impractical. This research proposes a fully vision-based, non-invasive system that relies solely on cattle trajectory data derived from images to address these challenges. To analyze signal-based behavioral trajectories associated with calving, we applied three signal-based image processing techniques aimed at predicting calving time and identifying individuals likely to require human assistance during parturition. Our system allows for continuous, automated monitoring using four surveillance cameras, eliminating the need for wearable sensors or invasive equipment. We employed three analytical approaches, namely amplitude analysis, frequency analysis, and power spectral density (PSD) analysis, to interpret cattle movement patterns from camera-derived trajectory data. For predicting calving time, our system achieved 100% accuracy across all methods. Specifically, the amplitude analysis predicted calving within 9 h, the frequency analysis provided predictions within 5 h, and the PSD analysis predicted calving within 6 h. Moreover, in classifying cattle requiring human assistance during parturition, our system achieved accuracies of 60%, 60%, and 65% for the amplitude, frequency, and PSD analyses, respectively. Unlike conventional methods that rely on wearable sensors, manual observation, or AI models requiring extensive training, our prediction system operates without any model training phase, instead directly analyzing motion patterns from trajectory data to generate predictions. This makes our approach simpler, more interpretable, and highly scalable, offering a practical and robust solution for improving livestock monitoring and timely intervention in modern farming environments. This work paves the way for further development of automated, non-invasive livestock monitoring technologies. A toy sketch of the three signal analyses follows this entry.

    DOI: 10.1016/j.compag.2025.111301

    Scopus

    Other Link: https://www.sciencedirect.com/science/article/pii/S0168169925014073
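
The paper's code is not part of this record; the sketch below merely illustrates the three signal views named in the abstract (amplitude, frequency spectrum, Welch PSD) on a synthetic trajectory. The sampling rate and signal are assumptions for demonstration.

```python
# Minimal sketch: amplitude, frequency, and PSD views of a trajectory signal.
import numpy as np
from scipy.signal import welch

fs = 1 / 60.0                                 # one sample per minute (assumed)
t = np.arange(24 * 60) * 60.0                 # 24 hours, in seconds
rng = np.random.default_rng(0)
traj = np.sin(2 * np.pi * t / 3600) + 0.3 * rng.normal(size=t.size)

amplitude = np.abs(traj - traj.mean())        # amplitude analysis
spectrum = np.abs(np.fft.rfft(traj))          # frequency analysis
freqs, psd = welch(traj, fs=fs, nperseg=256)  # power spectral density

print(f"peak amplitude {amplitude.max():.2f}, "
      f"dominant PSD frequency {freqs[psd.argmax()]:.2e} Hz")
```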

  • 3D Camera-Based Estimation of Cattle Body Weight Reviewed International journal

    S. Araki, K. Shiiya, Thi Thi Zin, I. Kobayashi

    IEEE Conference Proceedings: 2025 IEEE 14th Global Conference on Consumer Electronics (GCCE)   1196 - 1197   2025.12


    Language:English   Publishing type:Research paper (international conference proceedings)   Publisher:IEEE  

    Traditional methods for measuring cattle weight require special equipment and often involve physical contact with the animals, increasing the risk of accidents. As the workload for dairy farmers grows due to a decreasing workforce, there is a strong need for safer and more efficient solutions. In this study, we propose a contactless method to estimate cattle weight using a depth camera. Unlike previous studies, our setup places the camera above the cow, making it more versatile. We extracted depth images and calculated key body measurements: height, body length, and belly width. Based on these values, we created a regression formula to estimate weight. Our results show that cattle weight can be roughly estimated using only the values obtained from depth images. This method reduces the risk of injuries during measurement and offers a more efficient way to manage cattle health and nutrition. A toy regression sketch in this spirit follows this entry.

    DOI: 10.1109/GCCE65946.2025.11275084
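
As a toy version of the regression step, the sketch below fits a linear model of weight on three depth-derived measurements. Coefficients, units, and data are synthetic stand-ins, not the paper's fitted formula.

```python
# Minimal sketch: linear regression of body weight on depth measurements.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
# columns: height, body length, belly width (cm); synthetic ranges
X = rng.uniform([120, 140, 60], [160, 180, 90], size=(50, 3))
w = X @ np.array([2.5, 1.8, 3.0]) - 300 + rng.normal(0, 10, 50)  # toy kg

reg = LinearRegression().fit(X, w)
print("coefficients:", reg.coef_, "intercept:", round(reg.intercept_, 1))
print("predicted weight (kg):", round(reg.predict([[150, 170, 80]])[0], 1))
```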

  • A Conceptual Framework for Neonatal Motor Activity Monitoring Using Digital Twin Technology and Computer Vision: A Preliminary Study Reviewed International journal

    R. Nakashima, H. Matsumoto, Thi Thi Zin, Y. Kodama

    IEEE Conference Proceedings: 2025 IEEE 14th Global Conference on Consumer Electronics (GCCE)   1320 - 1321   2025.12


    Authorship:Corresponding author   Language:English   Publishing type:Research paper (international conference proceedings)   Publisher:IEEE  

    Continuous non-contact monitoring of neonatal motor activity in the neonatal intensive care unit (NICU) is crucial for early detection of neurological disorders and for guiding timely clinical interventions. We introduce an infrared-driven skeleton-estimation prototype designed for real-time operation that generates a live virtual "digital twin" of the infant’s posture to support clinician assessment. A deep-learning pose model was fine-tuned on a bespoke infrared key-point dataset, and three motion-quantification filters were evaluated: raw differencing (Method A), center-aligned suppression (Method B), and a newly proposed skeleton template-matching filter (Method C). Tests on a life-sized neonatal mannequin confirmed centimetric joint-localization accuracy, reliable detection of 50-pixel hand displacements, and reduction of simulated camera-shake artifacts to within five pixels. Building on these results, a follow-up evaluation on pre-term neonates showed that Method C suppressed static key-point noise by 78% while preserving physiological motion. This combined mannequin and in-vivo evidence demonstrates the clinical feasibility of our infrared digital-twin framework and establishes a foundation for automated assessment of pre-term motor development. A toy sketch of the motion filters follows this entry.

    DOI: 10.1109/GCCE65946.2025.11275120
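
A toy numpy contrast of two of the filters, assuming nothing beyond the abstract: raw keypoint differencing (Method A) versus suppressing joints that stay near-static, in the spirit of the template-matching filter (Method C). Shapes and thresholds are invented.

```python
# Minimal sketch: raw differencing vs. static-joint suppression on keypoints.
import numpy as np

rng = np.random.default_rng(0)
kpts = rng.normal(scale=0.5, size=(100, 17, 2))   # jittery, mostly static joints
kpts[50:, 5] += np.linspace(0, 50, 50)[:, None]   # one hand moves ~50 px

diff = np.linalg.norm(np.diff(kpts, axis=0), axis=2)  # Method A: raw differencing
static = diff.mean(axis=0) < 1.0                  # joints with negligible motion
filtered = diff.copy()
filtered[:, static] = 0.0                         # suppress static-joint noise

print(f"raw motion {diff.sum():.0f}, filtered motion {filtered.sum():.0f}")
```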


Books

MISC

  • Preface International coauthorship

    Pan J.S., Thi Thi Zin, Sung T.W., Lin J.C.W.

    Lecture Notes in Electrical Engineering   1322 LNEE   v - vii   2025


    Authorship:Corresponding author   Language:English   Publishing type:Rapid communication, short report, research note, etc. (scientific journal)   Publisher:Lecture Notes in Electrical Engineering  

    Scopus

  • A study on Depth Camera-Based Estimation of Elderly Patient Actions

    Remon NAKASHIMA, Thi Thi Zin, Kazuhiro KONDO and Shinji Watanabe

    37   46 - 52   2024.12


    Authorship:Corresponding author   Language:Japanese   Publishing type:Research paper, summary (national, other academic conference)   Publisher:Biomedical Fuzzy Systems Association  

  • A Study on the Possibility of Distinguishing between Parkinson's disease and Essential Tremor using Motor Symptoms Observed by an RGB camera

    Proceedings of the 35th Annual Conference of the Biomedical Fuzzy Systems Association (BMFSA2022)   2022.12


    Authorship:Corresponding author   Language:Japanese   Publishing type:Research paper, summary (national, other academic conference)   Publisher:Biomedical Fuzzy Systems Association  

  • Tracking A Group of Black Cows Using SORT based Tracking Algorithm

    Cho Cho Aye, Thi Thi Zin, M. Aikawa, I. Kobayashi

    Proceedings of the 35th Annual Conference of the Biomedical Fuzzy Systems Association (BMFSA2022)   2022.12


    Authorship:Corresponding author   Language:English   Publishing type:Research paper, summary (national, other academic conference)   Publisher:Biomedical Fuzzy Systems Association  

  • Artificial Intelligence Topping on Spectral Analysis for Lameness Detection in Dairy Cattle

    Thi Thi Zin, Ye Htet, San Chain Tun and Pyke Tin

    Proceedings of the 35th Annual Conference of the Biomedical Fuzzy Systems Association (BMFSA2022)   2022.12


    Authorship:Lead author, Corresponding author   Language:English   Publishing type:Research paper, summary (national, other academic conference)   Publisher:Biomedical Fuzzy Systems Association  


Presentations

  • Advancing Neonatal Monitoring Using Heart Rate Variability with Machine Learning Models International conference

    Tunn Cho Lwin, Thi Thi Zin, Pyke Tin, E. Kino and T. Ikenoue

    The Seventh International Conference on Smart Vehicular Technology, Transportation, Communication and Applications (VTCA 2025)  (Fuzhou, Fujian, China)  2025.11.22  Technically sponsored by Southwest Jiaotong University and Nanchang Institute of Technology


    Event date: 2025.11.21 - 2025.11.23

    Language:English   Presentation type:Oral presentation (general)  

    Venue:Fuzhou, Fujian, China   Country:China  

    Accurate assessment of neonatal respiratory status is critical for early intervention and improved clinical outcomes. Umbilical cord blood partial pressure of carbon dioxide (PCO2) is a key marker of respiratory efficiency, but its measurement requires invasive sampling. This study proposes a non-invasive, machine learning–based framework to predict abnormal PCO2 levels using fetal heart rate variability (FHRV) features. Seven HRV features were initially extracted, and Principal Component Analysis identified M, S, and entropy as the most informative for classification. Patients were divided into normal (G2) and abnormal (G1) groups based on a PCO2 threshold of 35 mmHg. To address class imbalance, oversampling was applied to the training dataset. Classification experiments with SVM (linear and Gaussian) and k-nearest neighbor (kNN) classifiers demonstrated that oversampling improved sensitivity for the minority abnormal group while maintaining high precision for the majority normal group. On the testing dataset, kNN achieved the most balanced performance, with 85% precision and 83% recall for abnormal cases. These results highlight the potential of combining HRV analysis with machine learning to provide continuous, non-invasive, and real-time monitoring of neonatal respiratory status, offering a promising tool to guide clinical decision-making and reduce dependence on invasive procedures.
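
As a minimal sketch of the described workflow (PCA on HRV features, oversampling the minority class, then kNN), the example below uses random stand-ins for the seven HRV features and an arbitrary class ratio; it is not the study's data or tuning.

```python
# Minimal sketch: PCA + minority oversampling + kNN classification.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.utils import resample

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 7))               # stand-ins for 7 HRV features
y = (rng.random(200) < 0.2).astype(int)     # 1 = abnormal PCO2 (minority)

X3 = PCA(n_components=3).fit_transform(X)   # keep 3 informative components
X_tr, X_te, y_tr, y_te = train_test_split(X3, y, stratify=y, random_state=0)

X_min = X_tr[y_tr == 1]                     # oversample minority to balance
X_up = resample(X_min, n_samples=int((y_tr == 0).sum()), random_state=0)
X_bal = np.vstack([X_tr[y_tr == 0], X_up])
y_bal = np.hstack([np.zeros(len(X_up)), np.ones(len(X_up))])

knn = KNeighborsClassifier(n_neighbors=5).fit(X_bal, y_bal)
print(f"toy test accuracy: {knn.score(X_te, y_te):.2f}")
```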

  • Digital Cattle Twins: Revolutionizing Calving Management Through Markovian Prediction Systems International conference

    Thi Thi Zin, Tunn Cho Lwin, Aung Si Thu Moe, Pyae Phyo Kyaw, M. Aikawa and Pyke Tin

    The Seventh International Conference on Smart Vehicular Technology, Transportation, Communication and Applications (VTCA 2025)  (Fuzhou, Fujian, China)  2025.11.22  Technically sponsored by Southwest Jiaotong University and Nanchang Institute of Technology


    Event date: 2025.11.21 - 2025.11.23

    Language:English   Presentation type:Oral presentation (general)  

    Venue:Fuzhou, Fujian, China   Country:China  

    The integration of digital twin technology with livestock management introduces new possibilities in precision livestock farming. Our research proposes the Digital Cattle Twin (DCT) system, a transformative approach to managing cattle calving during the critical periparturient period. This system merges Markovian modeling with real-time visual monitoring to enhance predictive accuracy in calving management. By modeling calving as a sequence of interconnected states within a Markov chain, the DCT predicts progression from early labor to postpartum recovery with high precision. Real-time probability calculations enable early detection of complications and optimal intervention timing. The system integrates diverse data streams, including vaginal temperature sensors for pre-calving temperature drops, AI-based video analysis for behavioral and movement changes, heart rate variability for stress detection, and spatial tracking for calving readiness. A predictive analytics engine processes this multimodal data, achieving high accuracy in detecting risks. The DCT’s adaptive learning architecture refines predictions using both individual and herd-level patterns, enabling a proactive rather than reactive management approach. Beyond calving, this framework illustrates how mathematical modeling and digital twins can redefine livestock management, opening pathways for broader applications in animal health, welfare, and production optimization.
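
The Markovian component can be pictured with a few lines of numpy: a row-stochastic matrix over labor states and a probability vector propagated per time step. The states, transition values, and alert threshold below are illustrative assumptions, not the fitted DCT model.

```python
# Minimal sketch: propagating calving-state probabilities in a Markov chain.
import numpy as np

states = ["pre-labor", "early labor", "active labor", "delivery", "recovery"]
P = np.array([                    # toy row-stochastic transition matrix
    [0.90, 0.10, 0.00, 0.00, 0.00],
    [0.00, 0.80, 0.20, 0.00, 0.00],
    [0.00, 0.00, 0.70, 0.30, 0.00],
    [0.00, 0.00, 0.00, 0.60, 0.40],
    [0.00, 0.00, 0.00, 0.00, 1.00],
])

p = np.array([1.0, 0.0, 0.0, 0.0, 0.0])   # start in pre-labor
for hour in range(1, 13):
    p = p @ P                             # one-step update per hour
    if p[3] > 0.05:                       # illustrative alert threshold
        print(f"hour {hour}: P(delivery) = {p[3]:.3f} -> alert")
        break
```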

  • AI and Image Processing for Smart Ecosystems: Enabling Connected Futures Across Transportation, Agriculture, and Healthcare Invited International conference

    Thi Thi Zin

    The Seventh International Conference on Smart Vehicular Technology, Transportation, Communication and Applications (VTCA 2025)  (Fuzhou, Fujian, China)  2025.11.22  Technically sponsored by Southwest Jiaotong University and Nanchang Institute of Technology


    Event date: 2025.11.21 - 2025.11.23

    Language:English   Presentation type:Oral presentation (keynote)  

    Venue:Fuzhou, Fujian, China   Country:China  

    The convergence of Artificial Intelligence (AI) and cutting-edge Image Processing is ushering in a transformative era for smart ecosystems, delivering unparalleled precision and efficiency across diverse sectors. This keynote explores the evolution and synergistic integration of AI technologies, tracing their impact from foundational applications in Intelligent Transportation Systems (ITS) to advanced solutions in precision agriculture and health monitoring.
    Initially, AI’s role in ITS revolutionized traffic safety and management through automated detection of road signs, pedestrians, and environmental cues. Building on this legacy, we delve into how real-time AI-driven image analytics are now empowering smart dairy farming and comprehensive livestock health monitoring. These applications facilitate predictive health management, optimize operational workflows, and foster sustainable, intelligent farm ecosystems.
    Furthermore, this presentation highlights crucial interdisciplinary collaborations in healthcare. We examine AI-enabled monitoring solutions tailored for elderly care and infant health, which leverage sensor fusion and intelligent data interpretation. These systems are pivotal in enhancing patient safety, improving quality of life, and enabling personalized care delivery.
    Ultimately, this keynote underscores the profound potential of AI-driven innovations to create interconnected, intelligent environments. By seamlessly bridging smart transportation, precision agriculture, and human healthcare, we present a holistic and actionable vision for the development of future smart communities.

    Other Link: https://vtca2025.udd.ink/page/keynoteSpeech.html

  • An End-to-End Computer Vision Pipeline for Cow Ear Tag Number Recognition Using YOLOv11 and a Hybrid EfficientNet-NRTR Model International conference

    San Chain Tun, Pyke Tin, M. Aikawa, I. Kobayashi and Thi Thi Zin

    The 9th International Conference on Information Technology (InCIT2025)  (Phuket, Thailand)  2025.11.13  IEEE Thailand Section (IEEE Computer Society Thailand Chapter)


    Event date: 2025.11.12 - 2025.11.14

    Language:English   Presentation type:Oral presentation (general)  

    Venue:Phuket, Thailand   Country:Thailand  

    Automated identification of individual livestock is a critical component of precision livestock farming. This study presents a robust, real-time system for recognizing four-digit ear tag numbers on cows using a multi-stage pipeline. The pipeline consists of ROI extraction, YOLOv11-based detection and instance segmentation of cow heads and ear tags, a customized tracking algorithm for persistent identity assignment, and an NRTR-based OCR model with EfficientNet backbones for number recognition. The customized tracker leverages Intersection over Union (IoU), frame-holding, and bounding box position logic to handle missed detections and ensure accurate tracking. The OCR model predicts digits 0-9 and uses "x" for unknown characters, providing reliable sequence recognition from cropped ear tag images. The system was evaluated on a real-world dataset collected over five days on a dairy farm. The overall detection and tracking accuracy achieved 96.18%, while OCR accuracy for EfficientNet backbones B4 to B7 reached 91.54%, 93.85%, 93.08%, and 95.38%, respectively. Results demonstrate high accuracy and robustness across all stages, confirming the practical viability of the approach. This integrated system offers a scalable solution for automated cattle identification and monitoring in operational farm environments.
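
The tracker itself is not published here; the fragment below sketches its two named ingredients, IoU matching and frame-holding (keeping a track alive through a few missed detections). All names and thresholds are illustrative.

```python
# Minimal sketch: IoU-based track assignment with frame-holding.
def iou(a, b):
    """Intersection over Union of two (x1, y1, x2, y2) boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter + 1e-9)

tracks, next_id, MAX_MISSED = {}, 0, 5    # hold a lost track for 5 frames

def update(detections, iou_thr=0.3):
    global next_id
    for t in tracks.values():
        t["missed"] += 1                  # assume missed until matched
    for box in detections:
        best = max(tracks, key=lambda i: iou(tracks[i]["box"], box), default=None)
        if best is not None and iou(tracks[best]["box"], box) >= iou_thr:
            tracks[best] = {"box": box, "missed": 0}
        else:
            tracks[next_id] = {"box": box, "missed": 0}
            next_id += 1
    for i in [i for i, t in tracks.items() if t["missed"] > MAX_MISSED]:
        del tracks[i]                     # drop tracks lost for too long

update([(10, 10, 50, 50)])
update([(12, 11, 52, 51)])
print(tracks)                             # the same ID persists across frames
```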

  • Deep Sequential Gait Feature Learning for Long-Term Person Re-Identification in Real-World Environments International conference

    Cho Nilar Phyo, Thi Thi Zin and Pyke Tin

    The 9th International Conference on Information Technology (InCIT2025)  (Phuket, Thailand and Online)  2025.11.13  IEEE Thailand Section (IEEE Computer Society Thailand Chapter)


    Event date: 2025.11.12 - 2025.11.14

    Language:English   Presentation type:Oral presentation (general)  

    Venue:Phuket, Thailand and Online   Country:Thailand  

    This paper presents a novel gait-based framework for long-term person re-identification in real-world environments. Unlike appearance-based methods, which are often sensitive to illumination, clothing changes, and occlusion, our approach leverages gait dynamics captured via dense optical flow and deep feature learning. We integrate ResNet101 for spatial feature extraction and an LSTM network for temporal sequence modeling, enabling robust representation of human walking patterns across extended time periods. The experimental results on gait datasets demonstrate that the proposed system achieves good recognition performance in terms of accuracy, mean Average Precision (mAP), and recall, and remains stable under challenging real-world conditions, highlighting its potential for surveillance and security applications.
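
The architecture pattern (per-frame CNN features over dense optical flow, then an LSTM over time) can be sketched compactly; the tiny CNN below stands in for the ResNet101 backbone, and all shapes and the identity count are invented.

```python
# Minimal sketch: CNN-per-frame + LSTM-over-time gait identification.
import torch
import torch.nn as nn

class GaitNet(nn.Module):
    def __init__(self, feat_dim=64, hidden=128, n_ids=50):
        super().__init__()
        self.cnn = nn.Sequential(          # tiny stand-in for ResNet101
            nn.Conv2d(2, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, feat_dim))
        self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_ids)

    def forward(self, flow):               # (batch, frames, 2, H, W) optical flow
        b, t = flow.shape[:2]
        feats = self.cnn(flow.flatten(0, 1)).view(b, t, -1)
        out, _ = self.lstm(feats)
        return self.head(out[:, -1])       # identity logits per clip

logits = GaitNet()(torch.randn(2, 30, 2, 64, 64))  # 2 clips, 30 flow frames
print(logits.shape)                                # torch.Size([2, 50])
```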


Awards

  • IEEE GCCE 2025 Excellent Student Paper Award (Outstanding Prize)

    2025.9   2025 IEEE 14th Global Conference on Consumer Electronics (GCCE 2025)   A Conceptual Framework for Neonatal Motor Activity Monitoring Using Digital Twin Technology and Computer Vision: A Preliminary Study

    Remon Nakashima, Thi Thi Zin and Yuki Kodama


    Award type:Award from international society, conference, symposium, etc.  Country:Japan

    Continuous non-contact monitoring of neonatal motor activity in the neonatal intensive care unit (NICU) is crucial for early detection of neurological disorders and for guiding timely clinical interventions. We introduce an infrared-driven skeleton-estimation prototype designed for real-time operation that generates a live virtual “digital twin” of the infant’s posture to support clinician assessment. A deep-learning pose model was fine-tuned on a bespoke infrared key-point dataset, and three motion-quantification filters were evaluated: raw differencing (Method A), center-aligned suppression (Method B), and a newly proposed skeleton template-matching filter (Method C). Tests on a life-sized neonatal mannequin confirmed centimetric joint-localization accuracy, reliable detection of 50-pixel hand displacements, and reduction of simulated camera-shake artifacts to within five pixels. Building on these results, a follow-up evaluation on pre-term neonates showed that Method C suppressed static key-point noise by 78% while preserving physiological motion. This combined mannequin and in-vivo evidence demonstrates the clinical feasibility of our infrared digital-twin framework and establishes a foundation for automated assessment of pre-term motor development.

  • ORAL PRESENTATION AWARD

    2025.9   2025 IEEE 14th Global Conference on Consumer Electronics (GCCE 2025)   A Conceptual Framework for Neonatal Motor Activity Monitoring Using Digital Twin Technology and Computer Vision: A Preliminary Study

    Remon Nakashima, Thi Thi Zin and Yuki Kodama


    Award type:Award from international society, conference, symposium, etc.  Country:Japan

    Continuous non-contact monitoring of neonatal motor activity in the neonatal intensive care unit (NICU) is crucial for early detection of neurological disorders and for guiding timely clinical interventions. We introduce an infrared-driven skeleton-estimation prototype designed for real-time operation that generates a live virtual “digital twin” of the infant’s posture to support clinician assessment. A deep-learning pose model was fine-tuned on a bespoke infrared key-point dataset, and three motion-quantification filters were evaluated: raw differencing (Method A), center-aligned suppression (Method B), and a newly proposed skeleton template-matching filter (Method C). Tests on a life-sized neonatal mannequin confirmed centimetric joint-localization accuracy, reliable detection of 50-pixel hand displacements, and reduction of simulated camera-shake artifacts to within five pixels. Building on these results, a follow-up evaluation on pre-term neonates showed that Method C suppressed static key-point noise by 78% while preserving physiological motion. This combined mannequin and in-vivo evidence demonstrates the clinical feasibility of our infrared digital-twin framework and establishes a foundation for automated assessment of pre-term motor development.

  • Best Presentation Award

    2025.8   The 19th International Conference on Innovative Computing, Information and Control (ICICIC2025)   Depth Camera-Based Analysis of Elderly Behavior for Risk Detection Using Skeletal Data

    Remon Nakashima, Thi Thi Zin, H. Tamura, S. Watanabe


    Award type:Award from international society, conference, symposium, etc.  Country:Japan

    We present a non-contact, privacy-preserving monitoring system that estimates behavioral risk in elderly-care rooms using depth cameras. First, each video frame is processed to detect individuals and extract 13 skeletal keypoints via a YOLO-based person detector and pose estimator. These keypoints are fed into a two-stage model comprising a graph convolutional network (GCN) and a Transformer encoder, which capture spatial and temporal movement patterns. To contextualize actions, we apply semantic segmentation to identify key regions such as beds and chairs. A rule-based framework then integrates action predictions with spatial overlap between keypoints and environment masks to assign one of three risk levels: Safe, Attention, or Danger. For robustness, we apply temporal smoothing and fuse outputs from two depth cameras. Finally, we design and implement a lightweight graphical user interface (GUI) to visualize risk levels and issue real-time alerts. Experimental results show an overall accuracy of 89.8% and a hazard-detection accuracy of 74.3%.
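
The rule-based fusion step lends itself to a tiny sketch: combine a predicted action with the keypoint/region overlap to pick one of the three risk levels. The rules below are invented illustrations, not the system's actual rule set.

```python
# Minimal sketch: rule-based risk from action label + bed-region overlap.
def risk_level(action, on_bed_ratio):
    """Map an action and bed-overlap ratio to Safe / Attention / Danger."""
    if action == "lying" and on_bed_ratio > 0.8:
        return "Safe"          # lying on the bed is expected
    if action == "lying" and on_bed_ratio < 0.2:
        return "Danger"        # lying off the bed suggests a fall
    return "Attention"         # anything else warrants a closer look

for case in [("lying", 0.95), ("lying", 0.05), ("standing", 0.40)]:
    print(case, "->", risk_level(*case))
```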

  • Silver Award of the Best Oral Presentation

    2025.8   The 2nd International Conference on Agricultural Innovation and Natural Resources   Vision-Driven Detection of Aquatic Animals for Precision Nutritional Control

    Aung Si Thu Moe, Kittichon U-TAYNAPUN, Nion CHIRAPONGSATONKUL, Pyke Tin, Thi Thi Zin


    Award type:Award from international society, conference, symposium, etc.  Country:Thailand

    Aquatic farming is a vital component of Thailand’s agricultural economy, but it faces ongoing challenges in managing aquatic animal nutrition and determining accurate feed requirements. Traditional feeding methods often lead to overfeeding or underfeeding, increasing operational costs and raising environmental concerns. This study introduces a vision-driven approach to enhance precision nutrition management in controlled pond environments. We evaluate the feed preferences of aquatic animals across four types of feed (PSB Saiyai Green, PSB Saiyai Brown, Control, and PSB Saiyai Dark Red) using advanced computer vision techniques. A small-scale experimental pond was constructed, with a top-mounted camera capturing real-time footage across four designated feed regions and light bulbs ensuring consistent illumination for clear visibility. Our system leverages a custom lightweight distillation framework based on the YOLOv11x model to detect and count aquatic animals in each region efficiently and accurately. The analysis delivers actionable insights into feeding behavior and preferences, enabling data-driven, optimized feeding strategies. This method supports the development of smart aquaculture practices, promoting sustainability and improved nutritional management in Thailand's aquatic farming industry.
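
Downstream of detection, feed preference reduces to counting animals per region; the sketch below fakes detector output and assigns box centers to four equal vertical regions named after the abstract's feed types. The frame width and centers are invented.

```python
# Minimal sketch: per-region counts of detected animals -> feed preference.
from collections import Counter

regions = ["PSB Saiyai Green", "PSB Saiyai Brown", "Control", "PSB Saiyai Dark Red"]

def region_of(cx, frame_w=1280):
    """Map a detection's center x to one of four equal vertical regions."""
    return regions[min(int(cx * 4 / frame_w), 3)]

centers = [100, 250, 300, 700, 1100, 1200, 1250]   # fake YOLO box centers (px)
counts = Counter(region_of(cx) for cx in centers)
total = sum(counts.values())
for name in regions:
    print(f"{name}: {counts[name]} animals ({counts[name] / total:.0%})")
```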

  • Best Presentation Award

    2025.7   The 12th IEEE International Conference on Consumer Electronics – Taiwan   A study on action recognition for the elderly using depth camera

    Remon Nakashima, Thi Thi Zin, H. Tamura, S. Watanabe, E. Chosa


    Award type:Award from international society, conference, symposium, etc.  Country:Taiwan, Province of China

    In this study, a depth camera-based system is proposed to achieve non-contact, privacy-preserving action recognition using human skeleton recognition. Specifically, human regions are first extracted using bounding box (BB) detection, followed by action recognition based on keypoint-based pose estimation. The estimated keypoints capture detailed joint positions, and their structural relationships are modeled with a Graph Convolutional Network (GCN). Furthermore, a Transformer is employed to capture the temporal features of the skeletal data. This keypoint-centric pipeline differentiates our approach from conventional, silhouette-level methods and significantly enhances the granularity of action recognition.
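
One spatial step of the GCN half of this pipeline fits in a few lines: neighbor aggregation over a skeleton adjacency matrix followed by a learnable feature map. The five-joint graph below is a toy skeleton, not the paper's keypoint graph.

```python
# Minimal sketch: one graph-convolution step over skeleton keypoints.
import torch

A = torch.tensor([                 # toy adjacency: joint 1 is the torso hub
    [1, 1, 0, 0, 0],
    [1, 1, 1, 1, 1],
    [0, 1, 1, 0, 0],
    [0, 1, 0, 1, 0],
    [0, 1, 0, 0, 1],
], dtype=torch.float)
D_inv = torch.diag(1.0 / A.sum(dim=1))   # degree normalization
W = torch.nn.Linear(2, 16)               # learnable feature map

joints = torch.randn(5, 2)               # (x, y) per keypoint
h = torch.relu(W(D_inv @ A @ joints))    # aggregate neighbors, then transform
print(h.shape)                           # torch.Size([5, 16])
```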


Grant-in-Aid for Scientific Research

  • Realizing Sustainable Dairy Farming through Monitoring of Cattle Feeding Behavior Using AI and Image Data Analysis

    Grant number:25K15158  2025.04 - 2028.03

    Japan Society for the Promotion of Science  Grants-in-Aid for Scientific Research Fund  Grant-in-Aid for Scientific Research (C) (General)


    Authorship:Principal investigator 

     Livestock farming is an important industry that accounts for more than 30% of Japan's total agricultural output, but reduced productivity caused by inadequate animal management is a major problem. Its main cause is the shorter observation time per animal that has accompanied changes in husbandry practices; on farms where herd sizes are growing and farmers are aging, it is difficult to keep observing livestock for abnormalities and changes 24 hours a day, 365 days a year.
     The applicants have focused mainly on algorithmic analysis of non-contact, non-invasive sensor data and have developed original algorithms that detect estrus in cattle from depth images and video images. In this research, we apply these technologies to develop a labor-saving, 24-hour livestock management system that automatically detects estrus and abnormalities during calving monitoring.

  • Enhanced AI-Driven Image Analysis for Early Mycoplasma Detection in Dairy Calves for Innovations in Livestock Health Management

    Grant number:25K15232  2025.04 - 2028.03

    Japan Society for the Promotion of Science  Grants-in-Aid for Scientific Research Fund  Grant-in-Aid for Scientific Research (C) (General)


    Authorship:Coinvestigator(s) 

     Livestock farming is an important industry that accounts for more than 30% of Japan's total agricultural output, but reduced productivity caused by inadequate animal management is a major problem. Its main cause is the shorter observation time per animal that has accompanied changes in husbandry practices; on farms where herd sizes are growing and farmers are aging, it is difficult to keep observing livestock for abnormalities and changes 24 hours a day, 365 days a year.
     The applicants have focused mainly on algorithmic analysis of non-contact, non-invasive sensor data and have developed original algorithms that detect estrus in cattle from depth images and video images. In this research, we apply these technologies to develop a labor-saving, 24-hour livestock management system that automatically detects estrus and abnormalities during calving monitoring.

  • Research on a Calving Monitoring System for Cattle

    Grant number:18J14542  2018.04 - 2020.03

    Grant-in-Aid for Scientific Research  Grant-in-Aid for JSPS Fellows

    Kosuke Sumi, Thi Thi Zin (host researcher)


    Authorship:Coinvestigator(s) 

    Wearable sensors are expensive despite their limited accuracy and durability, and visual monitoring of camera footage imposes a heavy physical and mental burden, so very few existing options can be used at realistic cost on increasingly large-scale livestock farms. This research therefore aims to reduce the burden on both farmers and cattle by developing a non-contact calving management system that uses footage from surveillance cameras.
    Cattle naturally behave in groups (herds). When calving is imminent, a cow is moved to a dedicated calving pen. Because two or more cows are often placed in the same pen, the system must identify which cow has begun calving, which requires individual identification and tracking. Calving behavior is then detected stage by stage; the extracted features include whether the tail is raised, whether the cow is standing or lying, whether the cow becomes restless and moves more, whether a calf has been delivered, and whether the dam is licking the calf, and we develop algorithms that automatically detect abnormalities and issue alerts at each stage. Whether calving behavior has occurred is judged by learning the importance (weight) of each feature from these data. As the final goal, we will build up a knowledge base of accumulated cases in order to detect abnormal behavior such as dystocia, aiming for a system that monitors each stage of calving and can detect and report abnormal situations.

  • Development of an Estrus Detection and Calving Monitoring System for Cattle Using Image Processing Technology and Non-Contact Sensors

    Grant number:17K08066  2017.04 - 2021.03

    Grant-in-Aid for Scientific Research  Grant-in-Aid for Scientific Research (C)


    Authorship:Principal investigator 

     Livestock farming is an important industry that accounts for more than 30% of Japan's total agricultural output, but reduced productivity caused by inadequate animal management is a major problem. Its main cause is the shorter observation time per animal that has accompanied changes in husbandry practices; on farms where herd sizes are growing and farmers are aging, it is difficult to keep observing livestock for abnormalities and changes 24 hours a day, 365 days a year.
     The applicants have focused mainly on algorithmic analysis of non-contact, non-invasive sensor data and have developed original algorithms that detect estrus in cattle from depth images and video images. In this research, we apply these technologies to develop a labor-saving, 24-hour livestock management system that automatically detects estrus and abnormalities during calving monitoring.

  • Development of forensic imaging modality for person identification using integration method of feature correspondences between heterogeneous images

    Grant number:15K15457  2015.04 - 2018.03

    Grant-in-Aid for Scientific Research  Grant-in-Aid for Challenging Exploratory Research


    Authorship:Coinvestigator(s) 


Available Technology