Journal of Applied Science and Engineering

Published by Tamkang University Press


Chuanlei Dai1, Yuqian Liu2, and Zhiwei Shi3

1School of Electronics and Electrical Engineering, Zhengzhou University of Science and Technology, Zhengzhou 450064, China

2School of Information Engineering, Zhengzhou University of Science and Technology, Zhengzhou 450064, China

3School of Business Administration, Zhengzhou University of Science and Technology, Zhengzhou 450064, China


 

 

Received: January 5, 2025
Accepted: March 6, 2025
Publication Date: March 25, 2025

 Copyright The Author(s). This is an open access article distributed under the terms of the Creative Commons Attribution License (CC BY 4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are cited.


Download Citation: https://doi.org/10.6180/jase.202511_28(11).0018


Traditional hierarchical matching algorithms for mobile robot visual images complete only coarse matching, which results in low matching accuracy and long matching time. This paper therefore proposes a hierarchical matching algorithm for mobile robot visual images based on deep reinforcement learning and orthogonal matching pursuit. First, the policy network and value network of the deep reinforcement learning architecture guide the floating image to move toward the reference image in the correct direction. Second, coarse matching of color features is achieved by designing a reward function for the coarse-matching process. Finally, building on the coarse-matching result, orthogonal matching pursuit extracts local features from the images to be matched, and hierarchical matching of the mobile robot visual images is completed according to feature similarity. Experimental results show that the proposed algorithm effectively performs both coarse and fine image matching, remains stable in feature detection across different viewing angles and scales, achieves high matching accuracy in a short matching time, and yields better image quality after matching, thereby improving the practical performance of mobile robots.
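The two stages outlined above can be illustrated with short sketches. The first is a minimal, hypothetical version of the coarse-matching reward: it scores one move of the floating image by the improvement in its color-histogram similarity to the reference image. This is not the authors' code; the joint RGB histogram, the Bhattacharyya coefficient, and the function names are assumptions made for the example.

```python
# Hypothetical reward for the coarse (color-feature) matching stage: the agent
# is rewarded when a move of the floating image increases its color-histogram
# similarity to the reference image. Histogram type and similarity measure are
# assumptions for this sketch, not the paper's exact design.
import numpy as np

def color_hist(img, bins=16):
    """Joint RGB histogram of an HxWx3 uint8 image, normalized to sum to 1."""
    h, _ = np.histogramdd(img.reshape(-1, 3).astype(float),
                          bins=(bins,) * 3, range=((0, 256),) * 3)
    return h.ravel() / h.sum()

def hist_similarity(p, q):
    """Bhattacharyya coefficient between two normalized histograms."""
    return float(np.sum(np.sqrt(p * q)))

def step_reward(ref_img, float_before, float_after):
    """Reward for one action = gain in similarity to the reference image."""
    h_ref = color_hist(ref_img)
    return (hist_similarity(h_ref, color_hist(float_after))
            - hist_similarity(h_ref, color_hist(float_before)))
```

The second sketch illustrates the fine-matching stage: orthogonal matching pursuit sparsely codes local feature descriptors over a dictionary, and a candidate correspondence is scored by the similarity of the resulting sparse codes. The dictionary, the sparsity level k, and the cosine scoring are again illustrative assumptions rather than the paper's exact formulation.

```python
# Sketch of the fine-matching stage: orthogonal matching pursuit (OMP) over a
# descriptor dictionary, with correspondences scored by sparse-code similarity.
import numpy as np

def omp(D, y, k):
    """Orthogonal matching pursuit: approximate y using at most k atoms of D.

    D : (m, n) dictionary with unit-norm columns (atoms)
    y : (m,)   signal, e.g. a local feature descriptor
    k : sparsity level
    Returns an (n,) sparse coefficient vector.
    """
    residual = y.astype(float).copy()
    support = []
    x = np.zeros(D.shape[1])
    for _ in range(k):
        # Greedily select the atom most correlated with the current residual.
        idx = int(np.argmax(np.abs(D.T @ residual)))
        if idx not in support:
            support.append(idx)
        # Jointly re-fit the coefficients of all selected atoms (orthogonal step).
        coeffs, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        x[:] = 0.0
        x[support] = coeffs
        residual = y - D[:, support] @ coeffs
        if np.linalg.norm(residual) < 1e-8:
            break
    return x

def sparse_code_similarity(D, desc_a, desc_b, k=8):
    """Score a candidate correspondence by cosine similarity of OMP codes."""
    xa, xb = omp(D, desc_a, k), omp(D, desc_b, k)
    denom = np.linalg.norm(xa) * np.linalg.norm(xb) + 1e-12
    return float(xa @ xb) / denom

# Toy usage with a random dictionary and two descriptors.
rng = np.random.default_rng(0)
D = rng.standard_normal((64, 256))
D /= np.linalg.norm(D, axis=0)          # normalize atoms to unit norm
a = rng.standard_normal(64)
b = a + 0.05 * rng.standard_normal(64)  # slightly perturbed copy of a
print(sparse_code_similarity(D, a, b))
```

In practice the dictionary would be learned from training descriptors (for example with K-SVD) rather than drawn at random as in the toy usage above.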


Keywords: Deep reinforcement learning; Mobile robot; Visual images; Orthogonal matching pursuit; Reward function




    



 
