Journal of Applied Science and Engineering

Published by Tamkang University Press

Impact Factor: 1.30 | CiteScore: 2.10

Fangqi Song

School of Foreign Languages, Zhengzhou University of Science and Technology, Zhengzhou 450064, China.
Received: January 9, 2025
Accepted: March 6, 2025
Publication Date: March 28, 2025

Copyright © The Author(s). This is an open access article distributed under the terms of the Creative Commons Attribution License (CC BY 4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are cited.


DOI: https://doi.org/10.6180/jase.202512_28(12).0005


Relation extraction is the core task of information extraction, and extracting important relational features quickly and accurately from massive English texts remains difficult. Prefix-tuning is widely used in zero-shot natural language processing tasks, but existing zero-shot relation extraction models based on Prefix-tuning struggle to construct an answer-space mapping and depend on manual template selection, so they cannot achieve good results. To solve these problems, this paper proposes a novel zero-shot English text relation extraction model based on multi-Prefix-tuning template fusion via BERT (MPTTF-BERT). First, zero-shot relation extraction is formulated as a masked language model task, and the construction of an answer-space mapping is abandoned: the words output by each template are compared with the relation description text in the word-vector space to judge the relation category. Second, the part of speech of the description text of the relation class to be extracted is introduced as a feature, and the weight between this feature and each template is learned. Finally, these weights are used to fuse the outputs of multiple templates, reducing the performance penalty caused by manually selected Prefix-tuning templates. Experimental results show that MPTTF-BERT achieves F1 values of 93.73%, 91.49%, and 49.46% on the DuIE, COAE-2016-Task3, and FinRE data sets, respectively, significantly outperforming the comparison models. In addition, ablation and fixed-length selection experiments further verify that MPTTF-BERT effectively improves English text relation extraction, indicating the feasibility and effectiveness of the new method.
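The fusion step described in the abstract — comparing each template's [MASK] output with the relation description texts in the word-vector space, then combining per-template results with learned weights — can be sketched as follows. This is a minimal toy illustration with hand-made vectors, not the authors' implementation; the function names, the cosine-similarity scoring, and the example weights are all assumptions:

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two word vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def fuse_templates(mask_vecs, relation_vecs, template_weights):
    """For each template, score every candidate relation by the cosine
    similarity between the template's [MASK] output vector and the
    relation-description vector, then fuse the per-template scores with
    the learned template weights. Returns the winning relation index."""
    scores = np.zeros(len(relation_vecs))
    for weight, mask_vec in zip(template_weights, mask_vecs):
        sims = np.array([cosine(mask_vec, rel) for rel in relation_vecs])
        scores += weight * sims
    return int(np.argmax(scores))

# Toy example: 3 candidate relation descriptions, 2 templates.
relations = [np.array([1.0, 0.0, 0.0]),    # e.g. "born in"
             np.array([0.0, 1.0, 0.0]),    # e.g. "works for"
             np.array([0.0, 0.0, 1.0])]    # e.g. "located in"
mask_outputs = [np.array([0.1, 0.9, 0.0]),   # template 1's [MASK] vector
                np.array([0.2, 0.8, 0.1])]   # template 2's [MASK] vector
print(fuse_templates(mask_outputs, relations, [0.6, 0.4]))  # → 1
```

In the real model the mask and relation vectors would come from BERT's masked-language-model head and the template weights from the learned part-of-speech feature, but the weighted-sum fusion shown here is the same shape of computation.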


Keywords: zero-shot English text relation extraction; multi-Prefix-tuning template fusion; BERT; mask language model.




    



 
