Journal of Applied Science and Engineering

Published by Tamkang University Press


Guangpeng Yue1, Mingxi Li2, and Hongyan Zheng3

1School of Industrial Design, Lu Xun Academy of Fine Arts, Shenyang 110000, China

2School of Media Animation, Lu Xun Academy of Fine Arts, Dalian 116000, China

3Basic Teaching Department, Lu Xun Academy of Fine Arts, Dalian 116000, China

Received: February 16, 2023
Accepted: March 19, 2024
Publication Date: April 13, 2024

Copyright © The Author(s). This is an open access article distributed under the terms of the Creative Commons Attribution License (CC BY 4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are cited.


Download Citation: https://doi.org/10.6180/jase.202502_28(2).0005


With the rapid development of deep learning, image style transfer has become a research hotspot in computer vision. Existing style transfer models, however, suffer from blurred image details, poor rendering of style texture and color, and excessive parameter counts. This paper proposes a novel style transfer method based on a generative adversarial network (GAN) and feature transformation. A Ghost convolution module and an improved inverted-residual module are added to optimize the generator network, reducing the number of parameters and the computational cost while strengthening the network's feature extraction ability. A self-attention mechanism is introduced so that the generator captures richer image features and produces higher-quality outputs. Content-style loss, color reconstruction loss, and mapping consistency loss are added to the loss function to improve the model's generative ability and the quality of the generated images. Experimental results show that the PSNR and SSIM values of images generated by the proposed method exceed those of the comparison methods.
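The abstract names two generator ingredients, Ghost convolution and self-attention, plus a composite loss. The paper's exact architecture and weights are not reproduced on this page, so the sketch below is a minimal PyTorch illustration assuming the standard GhostNet-style module and SAGAN-style attention; module names, channel ratios, and loss weights are illustrative assumptions, not the authors' settings.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class GhostModule(nn.Module):
    """Ghost convolution: a small primary convolution plus a cheap depthwise
    convolution that generates extra ("ghost") feature maps, roughly halving
    parameters versus a plain convolution of the same output width."""

    def __init__(self, in_ch, out_ch, kernel_size=1, ratio=2, dw_size=3):
        super().__init__()
        init_ch = out_ch // ratio        # channels from the primary conv
        cheap_ch = out_ch - init_ch      # channels from the cheap depthwise op
        self.primary = nn.Sequential(
            nn.Conv2d(in_ch, init_ch, kernel_size,
                      padding=kernel_size // 2, bias=False),
            nn.BatchNorm2d(init_ch),
            nn.ReLU(inplace=True),
        )
        self.cheap = nn.Sequential(
            nn.Conv2d(init_ch, cheap_ch, dw_size, padding=dw_size // 2,
                      groups=init_ch, bias=False),  # depthwise: one filter per channel
            nn.BatchNorm2d(cheap_ch),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        y = self.primary(x)
        return torch.cat([y, self.cheap(y)], dim=1)


class SelfAttention(nn.Module):
    """SAGAN-style self-attention: 1x1 convolutions form query/key/value maps
    and a learned gate (gamma, initialized to zero) blends global context
    into the local features. Requires ch >= 8."""

    def __init__(self, ch):
        super().__init__()
        self.query = nn.Conv2d(ch, ch // 8, 1)
        self.key = nn.Conv2d(ch, ch // 8, 1)
        self.value = nn.Conv2d(ch, ch, 1)
        self.gamma = nn.Parameter(torch.zeros(1))  # starts as an identity mapping

    def forward(self, x):
        b, c, h, w = x.shape
        q = self.query(x).flatten(2).transpose(1, 2)   # (b, hw, c//8)
        k = self.key(x).flatten(2)                     # (b, c//8, hw)
        attn = F.softmax(torch.bmm(q, k), dim=-1)      # (b, hw, hw) attention map
        v = self.value(x).flatten(2)                   # (b, c, hw)
        out = torch.bmm(v, attn.transpose(1, 2)).view(b, c, h, w)
        return self.gamma * out + x


def generator_loss(l_adv, l_content_style, l_color, l_map,
                   w_adv=1.0, w_cs=10.0, w_color=5.0, w_map=5.0):
    """Weighted sum of the loss terms named in the abstract. The weights
    here are placeholders; the paper's actual values are not given above."""
    return (w_adv * l_adv + w_cs * l_content_style
            + w_color * l_color + w_map * l_map)
```

As a rough check on the stated parameter savings: GhostModule(64, 128, kernel_size=3) uses a 3x3 primary convolution (64 x 64 x 9 weights) plus a 3x3 depthwise convolution (64 x 9 weights), roughly half the 64 x 128 x 9 weights of a plain 3x3 convolution with the same output width.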


Keywords: deep learning, image style transfer, generative adversarial network, feature transformation, self-attention mechanism




    



 
