Journal of Applied Science and Engineering

Published by Tamkang University Press


Xinwei Liu

College of Economics and Management, Shenyang Institute of Technology, Shenyang 113122, China


 

Received: January 7, 2026
Accepted: January 31, 2026
Publication Date: February 26, 2026

 Copyright The Author(s). This is an open access article distributed under the terms of the Creative Commons Attribution License (CC BY 4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are cited.


Download Citation: https://doi.org/10.6180/jase.202608_31.031


Few-shot image classification (FSL) aims to recognize novel classes from extremely limited labeled samples, which poses a critical challenge to traditional deep learning methods that rely on massive annotated data. Meta-learning and metric learning have emerged as promising paradigms for this problem, but existing approaches still suffer from prototype instability, insufficient feature discriminability, and poor generalization under domain shift. To overcome these limitations, this paper proposes a Meta-Learning Enhanced Hybrid Framework (MLE-HF) that integrates prototypical networks (PN) with contrastive learning (CL) for few-shot image classification. Specifically, we design a dual-branch architecture: the primary branch leverages meta-learned prototype generation to capture task-specific semantic representations, while the auxiliary branch employs supervised contrastive learning to enhance feature separability in the embedding space. We introduce a dynamic prototype calibration mechanism based on the expectation-maximization (EM) algorithm, which iteratively refines class prototypes using both support-set samples and high-confidence query-set pseudo-samples. Moreover, a novel Meta-Contrastive Loss (MCL) is proposed to align meta-training and meta-testing distributions, ensuring the transferability of learned features. We benchmark MLE-HF on miniImageNet, tieredImageNet, CUB-200-2011, and FC100 under 1-shot and 5-shot protocols. It sets a new state of the art on miniImageNet, reaching 58.71% in the 5-way 1-shot setting and 76.39% in the 5-way 5-shot setting. Ablations pinpoint the contribution of every module, while visualizations reveal sharper, more robust embeddings. By intertwining metric and contrastive losses inside a meta-learning loop, the framework offers a fresh recipe for accurate few-shot recognition in data-scarce settings.
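The EM-style prototype calibration described above can be sketched roughly as follows. This is a minimal NumPy illustration based only on the abstract: the function names, the 0.5 blending weight, the confidence threshold of 0.8, and the fixed iteration count are our assumptions for illustration, not values taken from the paper.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def init_prototypes(support, support_labels, n_way):
    # Seed prototypes: per-class mean of support embeddings, as in prototypical networks.
    return np.stack([support[support_labels == c].mean(axis=0) for c in range(n_way)])

def calibrate_prototypes(prototypes, query, n_iters=5, conf_thresh=0.8):
    """EM-style refinement: fold high-confidence query embeddings back into prototypes."""
    for _ in range(n_iters):
        # E-step: soft-assign queries via softmax over negative squared Euclidean distances.
        d2 = ((query[:, None, :] - prototypes[None, :, :]) ** 2).sum(axis=-1)
        probs = softmax(-d2, axis=1)
        conf = probs.max(axis=1)          # confidence of each pseudo-label
        hard = probs.argmax(axis=1)       # pseudo-label itself
        # M-step: blend each prototype with the mean of its confident pseudo-samples.
        new_protos = prototypes.copy()
        for c in range(len(prototypes)):
            mask = (hard == c) & (conf >= conf_thresh)
            if mask.any():
                new_protos[c] = 0.5 * prototypes[c] + 0.5 * query[mask].mean(axis=0)
        prototypes = new_protos
    return prototypes
```

With a 1-shot support set, the initial prototype is a single (possibly unrepresentative) embedding; folding in confident query pseudo-samples pulls each prototype toward the center of its class cluster, which is the stabilizing effect the calibration mechanism targets.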


Keywords: Few-Shot Image Classification; Meta-Learning; Prototypical Networks; Contrastive Learning; Feature Representation; Expectation-Maximization Algorithm




    



 
