Sudan Jha1, Sultan Ahmad2, Hikmat A. M. Abdeljaber2, A. A. Hamad3, and Malik Bader Alazzam4
1School of Sciences, Christ (Deemed to be University), NCR-New Delhi Campus, India
2Department of Computer Science, College of Computer Engineering and Sciences, Prince Sattam Bin Abdulaziz University, Alkharj 11942, Saudi Arabia
3College of Science, Tikrit University, Iraq
4Faculty of Computer Science and Informatics, Amman Arab University, Jordan
Received: June 27, 2021 Accepted: July 29, 2021 Publication Date: September 11, 2021
Copyright The Author(s). This is an open access article distributed under the terms of the Creative Commons Attribution License (CC BY 4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are cited.
Deep learning has paved the way for critical and revolutionary applications in almost every field. From engineering to healthcare, machine learning and deep learning have established themselves as state-of-the-art technologies that deliver strong, well-benchmarked solutions, and incorporating neural network architectures into applications has become a routine part of the software development process. In this paper, we perform a comparative analysis of different transfer learning approaches for the domain of hand-written digit recognition, using two performance measures: loss and accuracy. We then visualize the results on the training and validation datasets and draw a unified conclusion. This paper targets the drawbacks of the electronic whiteboard while also addressing the model selection procedure for the digit recognition problem.
Keywords: Learning; Transfer Learning; Deep Learning; Machine Learning; Electronic Whiteboard
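For context, the two performance measures named in the abstract can be stated concretely. The following is a minimal pure-Python sketch of how loss (categorical cross-entropy) and accuracy are computed from predicted class probabilities; the function names and the toy data are illustrative, not taken from the paper's code.

```python
import math

def cross_entropy_loss(probs, labels):
    """Mean negative log-likelihood of the true class (lower is better)."""
    eps = 1e-12  # guard against log(0)
    return sum(-math.log(p[y] + eps) for p, y in zip(probs, labels)) / len(labels)

def accuracy(probs, labels):
    """Fraction of samples whose highest-probability class is the true class."""
    return sum(max(range(len(p)), key=p.__getitem__) == y
               for p, y in zip(probs, labels)) / len(labels)

# Toy predicted probabilities for 4 samples over 3 digit classes.
probs = [[0.8, 0.1, 0.1],
         [0.2, 0.7, 0.1],
         [0.1, 0.2, 0.7],
         [0.3, 0.4, 0.3]]   # misclassified: predicts class 1, true class is 0
labels = [0, 1, 2, 0]

print(accuracy(probs, labels))                    # 0.75
print(round(cross_entropy_loss(probs, labels), 3))  # 0.535
```

Tracking both measures separately on the training and validation sets, as done in the paper, is what reveals overfitting: training loss keeps falling while validation loss stalls or rises.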