Journal of Applied Science and Engineering

Published by Tamkang University Press


Chunlin Yuan

School of Civil Engineering and Architecture, Zhengzhou University of Science and Technology, Zhengzhou, 450064, China

Received: January 3, 2024
Accepted: March 4, 2024
Publication Date: April 13, 2024

Copyright © The Author(s). This is an open access article distributed under the terms of the Creative Commons Attribution License (CC BY 4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are cited.


Download Citation: https://doi.org/10.6180/jase.202502_28(2).0002


Conventional few-shot multimedia image recognition methods in education management ignore important semantic information in the training samples, which leads to insufficient feature learning and leaves the problem of large intra-class variation unresolved. In this paper, we propose a global feature learning method based on multi-level semantic fusion. Specifically, according to the different semantic levels present in the training samples, we design and implement semantic learning tasks at the sample level, the class level, and the task level. These semantic learning tasks and the few-shot image classification task are integrated into a single architecture through a multi-task learning framework, which fuses the multi-level semantic information of each category with the discriminative information between classes. The model can therefore learn category features from multiple perspectives, better capture the commonality between samples with large differences, and enhance the representativeness of the learned features. Compared with the baseline method, the proposed approach achieves a large accuracy improvement on three datasets.
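As a reading aid, the following is a minimal sketch (in PyTorch-style Python) of how such a multi-task objective could be wired together: a prototypical-network episode loss for few-shot classification, plus auxiliary regression losses that align features with sample-, class-, and task-level semantic vectors. All module names, the choice of the prototypical episode loss, the semantic targets (sample_sem, class_sem, task_sem), and the loss weights are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn.functional as F
from torch import nn

class MultiLevelSemanticFusion(nn.Module):
    """Hypothetical multi-task model: episodic few-shot classification
    plus sample-, class-, and task-level semantic regression heads."""

    def __init__(self, backbone: nn.Module, feat_dim: int, sem_dim: int):
        super().__init__()
        self.backbone = backbone                          # shared feature extractor
        self.sample_head = nn.Linear(feat_dim, sem_dim)   # sample-level semantics
        self.class_head = nn.Linear(feat_dim, sem_dim)    # class-level semantics
        self.task_head = nn.Linear(feat_dim, sem_dim)     # task-level semantics

    def episode_loss(self, support, support_y, query, query_y, n_way):
        """Prototypical-network style classification loss for one episode."""
        zs = self.backbone(support)                       # (n_way * k_shot, d)
        zq = self.backbone(query)                         # (n_query, d)
        # Class prototypes: mean support embedding per class.
        protos = torch.stack(
            [zs[support_y == c].mean(dim=0) for c in range(n_way)])
        logits = -torch.cdist(zq, protos)                 # nearer prototype -> higher logit
        return F.cross_entropy(logits, query_y), zs, protos

    def forward(self, support, support_y, query, query_y, n_way,
                sample_sem, class_sem, task_sem,
                w_sample=0.1, w_class=0.1, w_task=0.1):
        cls_loss, zs, protos = self.episode_loss(
            support, support_y, query, query_y, n_way)
        # Sample level: each support embedding predicts its own semantic vector.
        loss_sample = F.mse_loss(self.sample_head(zs), sample_sem)
        # Class level: each prototype predicts its class semantic vector
        # (e.g. a label-text embedding).
        loss_class = F.mse_loss(self.class_head(protos), class_sem)
        # Task level: the episode's pooled embedding predicts a task descriptor.
        loss_task = F.mse_loss(
            self.task_head(zs.mean(dim=0, keepdim=True)), task_sem)
        # Multi-task objective: fuse semantic supervision into the shared backbone.
        return (cls_loss + w_sample * loss_sample
                + w_class * loss_class + w_task * loss_task)
```

Because the backbone is shared across all four losses, gradients from the semantic heads shape the same features used for episodic classification, which is the mechanism by which multi-level semantic information is fused into the learned representation.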


Keywords: Multimedia image recognition; education management; few-shot learning; multi-level semantic fusion

