Awards - Thi Thi Zin
-
IEEE GCCE 2025 Excellent Student Paper Award (Outstanding Prize)
September 2025, 2025 IEEE 14th Global Conference on Consumer Electronics (GCCE 2025): A Conceptual Framework for Neonatal Motor Activity Monitoring Using Digital Twin Technology and Computer Vision: A Preliminary Study
Remon Nakashima, Thi Thi Zin and Yuki Kodama
Award category: International conference/symposium award. Country of award: Japan.
Abstract—Continuous non-contact monitoring of neonatal motor activity in the neonatal intensive care unit (NICU) is crucial for early detection of neurological disorders and for guiding timely clinical interventions. We introduce an infrared-driven skeleton-estimation prototype designed for real-time operation that generates a live virtual "digital twin" of the infant's posture to support clinician assessment. A deep-learning pose model was fine-tuned on a bespoke infrared key-point dataset, and three motion-quantification filters were evaluated: raw differencing (Method A), center-aligned suppression (Method B), and a newly proposed skeleton template-matching filter (Method C). Tests on a life-sized neonatal mannequin confirmed centimetric joint-localization accuracy, reliable detection of 50-pixel hand displacements, and reduction of simulated camera-shake artifacts to within five pixels. Building on these results, a follow-up evaluation on pre-term neonates showed that Method C suppressed static key-point noise by 78% while preserving physiological motion. This combined mannequin and in-vivo evidence demonstrates the clinical feasibility of our infrared digital-twin framework and establishes a foundation for automated assessment of pre-term motor development.
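As a rough illustration of what a skeleton template-matching filter in the spirit of Method C might look like, the sketch below matches a small patch around each keypoint between consecutive infrared frames and zeroes out sub-threshold jitter. All names, window sizes, and the noise threshold are illustrative assumptions, not the paper's published algorithm.

```python
# A minimal sketch of a skeleton template-matching motion filter in the
# spirit of Method C above. Names, window sizes, and the noise threshold
# are illustrative assumptions, not the published algorithm.
import cv2
import numpy as np

def keypoint_motion(prev_gray, curr_gray, keypoints, patch=21, noise_px=2.0):
    """Per keypoint: match a patch from the previous frame inside a wider
    search window of the current frame, then report the residual shift,
    zeroing sub-noise jitter so static joints contribute no motion."""
    h, w = prev_gray.shape
    half = patch // 2
    motions = np.zeros(len(keypoints))
    for i, (x, y) in enumerate(np.asarray(keypoints, dtype=int)):
        if not (2 * half <= x < w - 2 * half and 2 * half <= y < h - 2 * half):
            continue                                   # skip joints too close to the border
        tmpl = prev_gray[y - half:y + half + 1, x - half:x + half + 1]
        search = curr_gray[y - 2 * half:y + 2 * half + 1, x - 2 * half:x + 2 * half + 1]
        score = cv2.matchTemplate(search, tmpl, cv2.TM_CCOEFF_NORMED)
        _, _, _, (mx, my) = cv2.minMaxLoc(score)
        shift = float(np.hypot(mx - half, my - half))  # best match vs. centered position
        motions[i] = 0.0 if shift < noise_px else shift
    return motions
```

Averaging the returned displacements over a clip would give a simple motion index comparable across recordings.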
-
Best Presentation Award
August 2025, The 19th International Conference on Innovative Computing, Information and Control (ICICIC2025): Depth Camera-Based Analysis of Elderly Behavior for Risk Detection Using Skeletal Data
Remon Nakashima, Thi Thi Zin, H. Tamura, S. Watanabe
Award category: International conference/symposium award. Country of award: Japan.
We present a non-contact, privacy-preserving monitoring system that estimates behavioral risk in elderly-care rooms using depth cameras. First, each video frame is processed to detect individuals and extract 13 skeletal keypoints via a YOLO-based person detector and pose estimator. These keypoints are fed into a two-stage model comprising a graph convolutional network (GCN) and a Transformer encoder, which capture spatial and temporal movement patterns. To contextualize actions, we apply semantic segmentation to identify key regions such as beds and chairs. A rule-based framework then integrates action predictions with spatial overlap between keypoints and environment masks to assign one of three risk levels: Safe, Attention, or Danger. For robustness, we apply temporal smoothing and fuse outputs from two depth cameras. Finally, we design and implement a lightweight graphical user interface (GUI) to visualize risk levels and issue real-time alerts. Experimental results show an overall accuracy of 89.8% and a hazard-detection accuracy of 74.3%.
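The rule-based fusion of action labels with scene regions can be pictured roughly as below; the overlap threshold, action labels, and the bed-only rule are hypothetical stand-ins for the authors' full rule set.

```python
# A hedged sketch of the rule-based risk assignment described above:
# the predicted action is combined with how strongly the skeleton
# overlaps a semantic region mask (e.g. the bed). Thresholds, action
# labels, and rules are illustrative stand-ins, not the paper's exact set.
import numpy as np

def region_overlap(keypoints, mask):
    """Fraction of keypoints that fall inside a boolean region mask."""
    h, w = mask.shape
    hits = [mask[y, x] for x, y in np.asarray(keypoints, dtype=int)
            if 0 <= x < w and 0 <= y < h]
    return float(np.mean(hits)) if hits else 0.0

def assign_risk(action, keypoints, bed_mask):
    on_bed = region_overlap(keypoints, bed_mask)
    if action == "lying":
        return "Safe" if on_bed > 0.5 else "Danger"   # lying off the bed suggests a fall
    if action in ("standing", "walking"):
        return "Attention"                            # mobile resident: monitor closely
    return "Attention"                                # unknown action: err on the side of caution
```

A majority vote over the last few frames, in the spirit of the temporal smoothing the abstract mentions, would sit naturally on top of assign_risk.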
-
Silver Award of the Best Oral Presentation
August 2025, The 2nd International Conference on Agricultural Innovation and Natural Resources: Vision-Driven Detection of Aquatic Animals for Precision Nutritional Control
Aung Si Thu Moe, Kittichon U-Taynapun, Nion Chirapongsatonkul, Pyke Tin, Thi Thi Zin
Award category: International conference/symposium award. Country of award: Thailand.
Aquatic farming is a vital component of Thailand's agricultural economy, but it faces ongoing challenges in managing aquatic animal nutrition and determining accurate feed requirements. Traditional feeding methods often lead to overfeeding or underfeeding, increasing operational costs and raising environmental concerns. This study introduces a vision-driven approach to enhance precision nutrition management in controlled pond environments. Using advanced computer vision techniques, we evaluate the feed preferences of aquatic animals across four types of feed: PSB Saiyai Green, PSB Saiyai Brown, PSB Saiyai Dark Red, and a control. A small-scale experimental pond was constructed, with a top-mounted camera capturing real-time footage across the four designated feed regions and light bulbs ensuring consistent illumination for clear visibility. Our system leverages a custom lightweight distillation framework based on the YOLOv11x model to detect and count aquatic animals in each region efficiently and accurately. The analysis delivers actionable insights into feeding behavior and preferences, enabling data-driven, optimized feeding strategies. This method supports the development of smart aquaculture practices, promoting sustainability and improved nutritional management in Thailand's aquatic farming industry.
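A per-region counting step of this kind could look like the sketch below, assuming an Ultralytics-style YOLO interface as a stand-in for the custom distilled YOLOv11x model; the region coordinates are placeholders.

```python
# A minimal sketch of per-region counting for the four-feed layout,
# assuming an Ultralytics-style YOLO interface as a stand-in for the
# custom distilled YOLOv11x model; region coordinates are placeholders.
from ultralytics import YOLO

REGIONS = {                                   # (x1, y1, x2, y2) per feed zone
    "PSB Saiyai Green":    (0,   0,   320, 240),
    "PSB Saiyai Brown":    (320, 0,   640, 240),
    "Control":             (0,   240, 320, 480),
    "PSB Saiyai Dark Red": (320, 240, 640, 480),
}

def count_per_region(frame, model):
    """Count detected animals whose box center lies in each feed region."""
    counts = dict.fromkeys(REGIONS, 0)
    for x1, y1, x2, y2 in model(frame)[0].boxes.xyxy.tolist():
        cx, cy = (x1 + x2) / 2, (y1 + y2) / 2
        for name, (rx1, ry1, rx2, ry2) in REGIONS.items():
            if rx1 <= cx < rx2 and ry1 <= cy < ry2:
                counts[name] += 1
                break
    return counts

model = YOLO("yolo11x.pt")   # placeholder weights, not the distilled model
```

Logging these per-region counts over time yields the feed-preference signal the abstract describes.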
-
Best Presentation Award
July 2025, The 12th IEEE International Conference on Consumer Electronics – Taiwan: A Study on Action Recognition for the Elderly Using a Depth Camera
Remon Nakashima, Thi Thi Zin, H. Tamura, S. Watanabe, E. Chosa
Award category: International conference/symposium award. Country of award: Taiwan.
In this study, a depth camera-based system is proposed to achieve non-contact, privacy-preserving action recognition using human skeleton recognition. Specifically, human regions are first extracted using bounding box (BB) detection, followed by action recognition based on keypoint-based pose estimation. The estimated keypoints capture detailed joint positions, and their structural relationships are modeled with a Graph Convolutional Network (GCN). Furthermore, a Transformer is employed to capture the temporal features of the skeletal data. This keypoint-centric pipeline differentiates our approach from conventional, silhouette-level methods and significantly enhances the granularity of action recognition.
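A compact PyTorch rendition of the GCN-plus-Transformer idea is sketched below; the joint count, adjacency, layer sizes, and number of action classes are assumptions for illustration.

```python
# A compact sketch of the GCN-plus-Transformer pipeline described above
# (PyTorch). Joint count, adjacency, layer sizes, and the number of
# action classes are illustrative assumptions.
import torch
import torch.nn as nn

class SkeletonGCNTransformer(nn.Module):
    def __init__(self, n_joints=13, in_dim=2, hid=64, n_classes=5, adj=None):
        super().__init__()
        A = adj if adj is not None else torch.eye(n_joints)
        self.register_buffer("A", A / A.sum(dim=1, keepdim=True))   # row-normalized adjacency
        self.proj = nn.Linear(in_dim, hid)                          # shared per-joint projection
        layer = nn.TransformerEncoderLayer(d_model=hid, nhead=4, batch_first=True)
        self.temporal = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(hid, n_classes)

    def forward(self, x):                                # x: (batch, time, joints, in_dim)
        x = torch.einsum("ij,btjd->btid", self.A, x)     # spatial message passing over joints
        x = torch.relu(self.proj(x)).mean(dim=2)         # pool joints -> (batch, time, hid)
        x = self.temporal(x)                             # temporal self-attention
        return self.head(x.mean(dim=1))                  # clip-level action logits
```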
-
Best Presentation Award
April 2025, 2025 10th International Conference on Multimedia and Image Processing (ICMIP 2025): Machine Learning-Based Prediction of Cattle Body Condition Score Using 3D Point Cloud Surface Features
Pyae Phyo Kyaw, Thi Thi Zin, Pyke Tin, M. Aikawa, I. Kobayashi
Award category: International conference/symposium award. Country of award: Japan.
The Body Condition Score (BCS) of dairy cattle is a crucial indicator of their health, productivity, and reproductive performance throughout the production cycle. Recent advancements in computer vision techniques have led to the development of automated BCS prediction systems. This paper proposes a BCS prediction system that leverages 3D point cloud surface features to enhance accuracy and reliability. Depth images are captured from a top-view perspective and processed using a hybrid depth image detection model to extract the cattle's back surface region. The extracted depth data is converted into point cloud data, from which various surface features are analyzed, including normal vectors, curvature, point density, and surface shape characteristics (planarity, linearity, and sphericity). Additionally, Fast Point Feature Histograms (FPFH), triangle mesh area, and convex hull area are extracted and evaluated using three optimized machine learning models: Random Forest (RF), K-Nearest Neighbors (KNN), and Gradient Boosting (GB). Model performance is assessed using different tolerance levels and error metrics, including Mean Absolute Error (MAE) and Mean Absolute Percentage Error (MAPE). Among the models, Random Forest demonstrates the highest performance, achieving accuracy rates of 51.36%, 86.21%, and 97.83% at 0, 0.25, and 0.5 tolerance levels, respectively, with an MAE of 0.161 and MAPE of 5.08%. This approach enhances the precision of BCS estimation, offering a more reliable and automated solution for dairy cattle monitoring and health management.
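The feature-then-regressor pipeline could be prototyped along the lines below with Open3D and scikit-learn; the pooled feature vector and hyperparameters are simplified stand-ins for the paper's full feature analysis.

```python
# A hedged sketch of surface-feature extraction plus Random Forest
# regression, using Open3D and scikit-learn; the pooled feature vector
# and hyperparameters are simplified stand-ins for the paper's set.
import numpy as np
import open3d as o3d
from sklearn.ensemble import RandomForestRegressor

def back_surface_features(points):
    """points: (N, 3) array of the cow's back surface -> pooled features."""
    pcd = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(points))
    pcd.estimate_normals(o3d.geometry.KDTreeSearchParamKNN(knn=30))
    fpfh = o3d.pipelines.registration.compute_fpfh_feature(
        pcd, o3d.geometry.KDTreeSearchParamKNN(knn=30))
    hull, _ = pcd.compute_convex_hull()
    return np.concatenate([
        np.asarray(pcd.normals).mean(axis=0),   # mean surface orientation
        fpfh.data.mean(axis=1),                 # pooled 33-bin FPFH descriptor
        [hull.get_surface_area()],              # convex hull area
    ])

# X: one feature vector per scan, y: ground-truth BCS labels
# rf = RandomForestRegressor(n_estimators=300).fit(X, y)
```

Tolerance-level accuracy as reported in the abstract can then be computed by checking |prediction - label| <= t for t in {0, 0.25, 0.5}.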
-
Best Presentation Award
April 2025, 2025 10th International Conference on Multimedia and Image Processing (ICMIP 2025): Minimizing Resource Usage for Real-Time Network Camera Tracking of Black Cows
Aung Si Thu Moe, Thi Thi Zin, Pyke Tin, M. Aikawa, I. Kobayashi
Award category: International conference/symposium award. Country of award: Japan.
Livestock plays a crucial role in the farming industry in meeting consumer demand, and a livestock monitoring system helps track animal health while reducing labor requirements, which matters because most livestock farms are small, family-owned operations. This study proposes a real-time black cow detection and tracking system that uses network cameras in memory- and disk-constrained environments. We employ the Detectron2 Mask R-CNN ResNeXt-101 model for black cow region detection and the ByteTrack algorithm for tracking. Unlike other deep-learning tracking algorithms that rely on multiple appearance features such as texture, color, shape, and size, ByteTrack associates every detection box, which effectively reduces tracking ID errors and ID switches. Detecting and tracking black cows in real time is challenging because of their uniform color and similar sizes. To optimize performance on low-specification machines, we convert the Detectron2 detection model to ONNX (Open Neural Network Exchange) for optimization and quantization. The system processes input images from network cameras, enhances color during preprocessing, and detects and tracks black cows efficiently. It achieves 95.97% mAP@0.75 detection accuracy, with tracking accuracies of 97.16% on daytime video and 94.83% on nighttime video, tracking individual black cows while minimizing duplicate IDs and recovering tracks after missed detections or occlusions. The system is designed to operate on machines with minimal hardware requirements.
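The quantization step mentioned above might be approximated with onnxruntime's post-training dynamic quantizer, as below; the file names are placeholders, and the Detectron2-to-ONNX export that must precede this step is not shown.

```python
# A minimal sketch of ONNX post-training quantization with onnxruntime;
# file names are placeholders, and the Detectron2-to-ONNX export that
# must precede this step is not shown.
from onnxruntime.quantization import QuantType, quantize_dynamic

quantize_dynamic(
    model_input="detector_fp32.onnx",    # exported Mask R-CNN graph (placeholder name)
    model_output="detector_int8.onnx",   # 8-bit-weight model for low-spec machines
    weight_type=QuantType.QUInt8,        # quantize weights to shrink memory and disk use
)
```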
-
December 2024, The 26th Annual Conference of the Kyushu Chapter of the Japan Society for Fuzzy Theory and Intelligent Informatics (SOFT Kyushu Chapter): Quantitative Evaluation of Fetal Heart Rate Variability and pH Classification Using the Mahalanobis Distance
Tunn Cho Lwin, Thi Thi Zin, Pyke Tin, E. Kino, T. Ikenoue
Award category: Domestic conference/symposium award. Country of award: Japan.
-
December 2024, The 26th Annual Conference of the Kyushu Chapter of the Japan Society for Fuzzy Theory and Intelligent Informatics (SOFT Kyushu Chapter): A Cattle Identification System Based on Color Point Clouds Using a Lightweight PointNet++ Model
Pyae Phyo Kyaw, Thi Thi Zin, Pyke Tin, M. Aikawa, I. Kobayashi
Award category: Domestic conference/symposium award. Country of award: Japan.
-
Best Presentation Award
September 2024, 18th International Conference on Innovative Computing, Information and Control (ICICIC2024): Integrating Entropy Measures of Fetal Heart Rate Variability with Digital Twin Technology to Enhance Fetal Monitoring
Tunn Cho Lwin, Thi Thi Zin, Pyae Phyo Kyaw, Pyke Tin, E. Kino and T. Ikenoue
Award category: International conference/symposium award. Country of award: People's Republic of China.
-
Best Paper Award
September 2024, 6th IEEE MASS Workshop on Smart Living with IoT, Cloud, and Edge Computing (co-located with IEEE MASS 2024): Analyzing Parameter Patterns in YOLOv5-based Elderly Person Detection Across Variations of Data
Ye Htet, Thi Thi Zin, Pyke Tin, H. Tamura, K. Kondo, S. Watanabe, E. Chosa
Award category: International conference/symposium award.
-
Best Paper Award
August 2024, The 16th International Conference on Genetic and Evolutionary Computing (ICGEC-2024): Cattle Lameness Detection Using Leg Region Keypoints from a Single RGB Camera
Bo Bo Myint, Thi Thi Zin, M. Aikawa, I. Kobayashi and Pyke Tin
Award category: International conference/symposium award. Country of award: Japan.
-
Best Paper Award
August 2024, The 16th International Conference on Genetic and Evolutionary Computing (ICGEC-2024): Utilizing Behavioral Features for Predicting Calving Time
Wai Hnin Eaindrar Mg, Pyke Tin, M. Aikawa, I. Kobayashi, Y. Horii, K. Honkawa, Thi Thi Zin
Award category: International conference/symposium award. Country of award: Japan.
-
Best Paper Award
August 2024, The 16th International Conference on Genetic and Evolutionary Computing (ICGEC-2024): Applying Digital Restoration Techniques in Preservation of Ancient Murals Using Diffusion-Based Inpainting
Khant Khant Win Tint, Mie Mie Tin, Thi Thi Zin and Pyke Tin
Award category: International conference/symposium award. Country of award: Japan.
-
Best Paper Award
August 2024, The 16th International Conference on Genetic and Evolutionary Computing (ICGEC-2024): Cattle Lameness Classification Using Cattle Back Depth Information
San Chain Tun, Pyke Tin, M. Aikawa, I. Kobayashi and Thi Thi Zin
Award category: International conference/symposium award. Country of award: Japan.
-
Best Paper Award
August 2024, The 16th International Conference on Genetic and Evolutionary Computing (ICGEC-2024): Identification of Rumination Patterns in Cattle Through Optical Flow Analysis and Machine Learning Techniques
T. Ishikawa, Thi Thi Zin, M. Aikawa, I. Kobayashi
Award category: International conference/symposium award. Country of award: Japan.
-
Best Paper Award
August 2024, The 16th International Conference on Genetic and Evolutionary Computing (ICGEC-2024): From Vision to Vocabulary: A Multimodal Approach to Detect and Track Black Cattle Behaviors
Su Myat Noe, Thi Thi Zin, Pyke Tin and I. Kobayashi
Award category: International conference/symposium award. Country of award: Japan.
-
Student Paper Award
March 2024, 2024 RISP International Workshop on Nonlinear Circuits, Communications and Signal Processing: Enhancing Precision Agriculture: Innovative Tracking Solutions for Black Cattle Monitoring
Su Myat Noe, Thi Thi Zin, Pyke Tin, and Ikuo Kobayashi
Award category: International conference/symposium award.
-
Student Paper Award
March 2024, 2024 RISP International Workshop on Nonlinear Circuits, Communications and Signal Processing: Kalman Velocity-based Multi-Stage Classification Approach for Recognizing Black Cow Actions
Cho Cho Aye, Thi Thi Zin, M. Aikawa, I. Kobayashi
Award category: International conference/symposium award.
-
Best Paper Award
November 2023, The 9th International Conference on Science and Technology (ICST UGM 2023): An Innovative Framework for Cattle Activity Monitoring: Combining AI-Based Markov Chain Model with IoT Devices
Y. Hashimoto, Thi Thi Zin, Pyke Tin, I. Kobayashi and H. Hama
Award category: International conference/symposium award.