MUKUNOKI Masayuki


Affiliation

Engineering Educational Research Section, Information and Communication Technology Program 

Title

Professor

Degree

  • Doctor (Engineering) ( 1999.7   Kyoto University )

Research Areas

  • Informatics / Perceptual information processing

  • Informatics / Intelligent informatics

 

Papers

  • Comparison of deep-learning-based face recognition characteristics across multiple animal species Reviewed

    Morimo S., Nagatomo Y., Mukunoki M.

    Proceedings of SPIE the International Society for Optical Engineering   14072   2026.2


    Authorship:Last author   Language:English   Publishing type:Research paper (international conference proceedings)   Publisher:Proceedings of SPIE the International Society for Optical Engineering  

    Deep learning has achieved strong performance in human face recognition, but animal identification remains challenging due to limited data and large inter-species variation. This study evaluates deep face recognition across five species using ArcFace and transfer learning. Experiments show that transferring from large-scale datasets, such as humans and cattle, significantly improves accuracy for small-scale species, raising accuracy for chimpanzees, goats, and horses to 0.85, 0.94, and 0.93, respectively. Grad-CAM analysis further indicates that transfer learning stabilizes attention on meaningful facial regions. These results highlight transfer learning as an effective solution for robust animal face recognition under data-scarce conditions.

    DOI: 10.1117/12.3101902

    Scopus
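    The ArcFace approach named in the abstract above rests on an additive angular margin applied to the target-class logit. The following is a minimal stdlib-only sketch of that margin; the margin and scale values are common ArcFace defaults, not figures taken from the paper.

```python
import math

def arcface_logit(cos_theta: float, is_target: bool,
                  margin: float = 0.5, scale: float = 64.0) -> float:
    """ArcFace-style logit: for the ground-truth class, the angle between
    the embedding and the class weight is penalized by an additive margin,
    so identities must be separated by more than the margin to score well."""
    cos_theta = max(-1.0, min(1.0, cos_theta))  # guard the acos domain
    if is_target:
        return scale * math.cos(math.acos(cos_theta) + margin)
    return scale * cos_theta

# The margin makes the target logit strictly harder to maximize:
same = arcface_logit(0.8, is_target=True)    # angle penalized by the margin
other = arcface_logit(0.8, is_target=False)  # plain scaled cosine
assert same < other
```

    In a transfer-learning setup like the one described, the embedding network would first be trained with this loss on a large-scale species and then fine-tuned on the small-scale one.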

  • An Attempt to Solve Fill in the Missing Letters CAPTCHA Using Generative AI Reviewed

    Yamaba H., Usuzaki S., Aburada K., Mukunoki M., Park M., Okazaki N.

    Lecture Notes in Electrical Engineering   1322 LNEE   385 - 395   2025.2


    Language:English   Publishing type:Research paper (international conference proceedings)   Publisher:Lecture Notes in Electrical Engineering  

    This paper reports an attempt to solve the fill-in-the-missing-letters type CAPTCHA using generative AI. Many websites have adopted CAPTCHA to prevent bots and other automated programs from engaging in malicious activities such as posting comment spam. Text-based CAPTCHA is the most common and earliest form of CAPTCHA. However, as optical character recognition (OCR) technology has improved, the intensity of the distortions applied to a CAPTCHA to keep it unrecognizable by OCR has also increased, to the point where humans have difficulty recognizing CAPTCHA text. The CAPTCHA proposed in a previous study asks users to spell a word by filling in some blanks. Since the number of letters displayed is minimal, it is challenging to identify the correct word; however, one or more images that can serve as hints to help users guess the answer word are also provided. The ability to make this guess is expected to distinguish humans from computers, but generative AI, which has been advancing in recent years, may be able to substitute for it. A series of experiments was carried out to evaluate the generative AI's ability to solve the proposed CAPTCHA. First, we examined whether a well-known image recognition system could accurately identify the images used in the CAPTCHA problems. Next, we used the recognition results to have the generative AI solve the CAPTCHA problems and determined the accuracy rate. Additionally, we evaluated the performance of the generative AI itself by having it solve the problems given the correct identification of each image. The experimental results showed that the CAPTCHA is relatively robust against attack techniques using generative AI.

    DOI: 10.1007/978-981-96-1535-3_38

    Scopus
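    The fill-in-the-missing-letters mechanic targeted by this attack can be stated in a few lines. The sketch below (the word list and mask are illustrative, not taken from the paper) also shows why the image hint matters: a sparse mask alone often matches several dictionary words, so a solver needs the association step the abstract describes.

```python
def matches(pattern: str, word: str) -> bool:
    """True if `word` fits `pattern`, where '_' marks a hidden letter."""
    return len(pattern) == len(word) and all(
        p == '_' or p == c for p, c in zip(pattern, word))

# Without the image hint, a sparse mask is ambiguous:
words = ["apple", "angle", "ankle", "amble", "eagle"]
candidates = [w for w in words if matches("a__le", w)]
# four of the five words fit "a__le", so picking the intended answer
# requires associating the hint image with one of them
```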

  • Proposal of Fill in the Missing Letters CAPTCHA Using Associations from Images Reviewed

    Yamaba H., Mustaza M.N.F.B., Usuzaki S., Aburada K., Mukunoki M., Park M., Okazaki N.

    Lecture Notes in Electrical Engineering   1114 LNEE   206 - 217   2024


    Language:English   Publishing type:Research paper (international conference proceedings)   Publisher:Lecture Notes in Electrical Engineering  

    This paper proposes a new fill-in-the-missing-letters type CAPTCHA using associations from images. Many websites have adopted CAPTCHA to prevent bots and other automated programs from engaging in malicious activities such as posting comment spam. Text-based CAPTCHA is the most common and earliest form of CAPTCHA, but as optical character recognition (OCR) technology has improved, the intensity of the distortions that must be applied to a CAPTCHA for it to remain unrecognizable by OCR has increased, to the point where humans have difficulty recognizing CAPTCHA text. The proposed CAPTCHA asks users to spell a word by filling in some blanks. Since only a few letters are shown, it is difficult to identify the correct word; however, one or more images that can serve as hints for guessing the answer word are also shown to the users. A series of experiments was carried out to evaluate the performance of the proposed CAPTCHA. First, a computer program was developed for the usability evaluation. This system was used in experiments to find suitable parameters for the CAPTCHA, such as the number and positions of the disclosed letters. Next, security evaluation experiments were carried out using the system with the obtained parameters. The results of the experiments showed the performance and limitations of the proposed CAPTCHA.

    DOI: 10.1007/978-981-99-9412-0_22

    Scopus

  • Development of a system to detect eye misalignment by using an HMD equipped with eye-tracking capability Reviewed

    Takatsuka Kayoko, Nagatomo Yoki, Uchida Noriyuki, Ikeda Takuya, Mukunoki Masayuki, Okazaki Naonobu

    Journal of Robotics, Networking and Artificial Life   10 ( 1 )   17 - 24   2023.6


    Language:English   Publishing type:Research paper (scientific journal)   Publisher:ALife Robotics Corporation Ltd.  

    This study aimed to reduce the effect of the examination environment on accuracy by using an eye-movement detection system based on a VR head-mounted display. We reproduced the examination environment in virtual reality and performed the cover test, a basic examination technique for tropia and phoria. We then developed a system that uses eye data collected by eye tracking to detect the direction and magnitude of eye misalignment. The Maddox method, an existing testing procedure, was used to verify the accuracy. We confirmed the system's effectiveness in detecting the direction and magnitude of horizontal eye misalignment.

    DOI: 10.57417/jrnal.10.1_17

    Scopus

    CiNii Research

  • Detection of Eye Misalignment Using an HMD with an Eye-tracking Capability

    Nagatomo Yoki, Uchida Noriyuki, Ikeda Takuya, Takatsuka Kayoko, Mukunoki Masayuki, Okazaki Naonobu

    Proceedings of the International Conference on Artificial Life and Robotics   28   863 - 867   2023.2


    Language:English   Publishing type:Research paper (scientific journal)   Publisher:ALife Robotics Corporation Ltd.  

    In this study, we implemented the Cover Test, a method for diagnosing eye misalignment, using a head-mounted display with an eye-tracking capability. Specifically, we created a virtual examination environment in a VR space. The eye-tracking technique collected eye movements immediately after the covering or uncovering of the eyes. From these data, we calculated the amount of eye deviation and developed a system to determine the presence and magnitude of strabismus and heterophoria. We assessed the system in a verification experiment by examining the consistency between the judgments produced by the system and those of the clinical evaluation approach with the Maddox rod. As a result, we confirmed that horizontal eye movements in particular could be verified accurately.

    DOI: 10.5954/icarob.2023.gs2-3

    CiNii Research
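    The deviation computation described in the two cover-test entries above reduces to comparing gaze samples taken while an eye is covered with those taken immediately after uncovering. The following is a hypothetical sketch of that step; the sign convention, degree units, and threshold are assumptions for illustration and are not taken from the papers.

```python
def classify_horizontal_deviation(x_covered: float, x_uncovered: float,
                                  threshold_deg: float = 1.0) -> tuple:
    """Classify horizontal misalignment from one eye's horizontal gaze
    angle (degrees) while covered vs. just after uncovering.
    Positive x is assumed to point temporally (outward)."""
    shift = x_covered - x_uncovered  # corrective movement on uncovering
    if abs(shift) < threshold_deg:
        return ("orthophoric", abs(shift))
    # the eye drifted outward under cover -> exo deviation, and vice versa
    direction = "exo" if shift > 0 else "eso"
    return (direction, abs(shift))
```

    In the HMD setting described above, `x_covered` would come from the eye tracker while the eye is virtually occluded and `x_uncovered` from the first stable fixation afterwards.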


MISC

  • An Image Selection Method Considering Image Similarity and Temporal Sparsity for Background Model Construction

    清水渚佐;川西康友;椋木雅之;美濃導彦

    IPSJ SIG Technical Report (CD-ROM)   2012 ( 6 )   CVIM-186, No.12   2013.4


    Language:Japanese   Publishing type:Article, review, commentary, editorial, etc. (scientific journal)  

    J-GLOBAL

  • An Integrated Method for Object Detection and Object Region Segmentation from Images

    Jarich Vansteenberge,椋木雅之,美濃導彦

    ICT Innovation 2013, 2013-02   2013


    Language:English   Publishing type:Article, review, commentary, editorial, etc. (bulletin of university, research institution)  

  • A Study on Observing Student Behavior and Estimating Comprehension in Lecture Rooms

    椋木雅之,美濃導彦

    Annual Conference of the Japanese Society for Artificial Intelligence (26th), 1F2-OS-11-7, pp. 1-4   2012.6


    Language:Japanese   Publishing type:Article, review, commentary, editorial, etc. (scientific journal)  

  • Correlation Analysis between Photo-Taking Information and Tourist Spots

    笠原 秀一, 森 幹彦, 椋木 雅之, 美濃 導彦

    General Meeting of the Society for Tourism Informatics   2012.5


    Language:Japanese   Publishing type:Article, review, commentary, editorial, etc. (scientific journal)  

  • Current Status and Issues of the Sakai Implementation at Kyoto University

    梶田将司,元木環,椋木 雅之,平岡斉士

    IPSJ SIG Technical Report (CLE)   2012.5


    Language:Japanese   Publishing type:Article, review, commentary, editorial, etc. (scientific journal)  


Presentations

  • Evaluation of 3D-SRCGAN Using the Large-Scale 3D Model Dataset ShapeNet

    Record of Joint Conference of Electrical and Electronics Engineers in Kyushu  2025.9.11  Joint Conference of Electrical, Electronics and Information Engineers in Kyushu


    Event date: 2025.9.11

    Language:Japanese   Presentation type:Oral presentation (general)  

    CiNii Research

  • Identifying Issues in Building an Eye-Misalignment Assessment System: The Need for Stratification Based on Differences in Deviation between Distance and Near Vision

    髙塚佳代子、長友耀希、樋渡翔吾、山浦勇樹、臼崎翔太郎、中馬秀樹、椋木雅之

    The 90th Annual Meeting of the Society of Chemical Engineers, Japan  2025.3.14 


    Event date: 2025.3.12 - 2025.3.14

    Language:Japanese   Presentation type:Oral presentation (general)  

  • A Study of a Low-Power Face Recognition System Using Deep Learning on Embedded Devices

    森茂 蒼士, 椋木 雅之

    2024 IEICE Kyushu Section Student Conference  2024.9.25 


    Event date: 2024.9.25

    Language:Japanese   Presentation type:Oral presentation (general)  

  • Improvement of the n-step Return in Reinforcement Learning Methods

    中根 勇樹, 片山 晋, 椋木 雅之

    Hinokuni Information Symposium 2024  2024.3.14 


    Event date: 2024.3.13 - 2024.3.14

    Language:Japanese   Presentation type:Oral presentation (general)  

  • An Investigation of the Effectiveness of Deep-Learning-Based Automatic Clothing-Item Classification for Outfit Coordination Recommendation

    倉永 将宏, 椋木 雅之

    Hinokuni Information Symposium 2024  2024.3.13 


    Event date: 2024.3.13 - 2024.3.14

    Language:Japanese   Presentation type:Oral presentation (general)  


Grant-in-Aid for Scientific Research

  • Comparison of Individual-Identification Characteristics from Face Images across Multiple Animal Species Using Deep Learning

    Grant number:23K11151  2023.04 - 2026.03

    Japan Society for the Promotion of Science  Grants-in-Aid for Scientific Research  Grant-in-Aid for Scientific Research (C)


    Authorship:Principal investigator 

  • Analysis of the Relationship between Student Behavior and Comprehension during Lectures

    Grant number:25330407  2013.04 - 2016.03

    Grants-in-Aid for Scientific Research  Grant-in-Aid for Scientific Research (C)


    Authorship:Principal investigator 

    This research aims to estimate, from students' behavior during a lecture, how well each student understands the lecture content. To this end, we develop three technologies: (1) a technique for automatically classifying, by computer, student behavior observed with video cameras and similar devices; (2) a technique for objectively estimating students' comprehension from the results of in-lecture comprehension questionnaires and short quizzes on the lecture content; and (3) a technique for estimating objective comprehension from student behavior, obtained by treating the estimated objective comprehension as ground truth and relating it to the classified behaviors.

Available Technology