Fine-Tuning Pre-Trained Language Models for Vietnamese: A Comparative Study of Full Fine-Tuning and LoRA

Authors

  • Xanh Van Nguyen
  • Hao Nhat Van Luu
  • Hung Dinh
  • Hoa Minh Dinh

Abstract

Pre-trained language models (PLMs) have brought significant improvements to Vietnamese natural language processing tasks. However, full fine-tuning of these large models remains resource-intensive and poses challenges for settings with limited computational capacity. This paper presents a comparative study between full fine-tuning and Low-Rank Adaptation (LoRA), a parameter-efficient fine-tuning method, focusing on the trade-off between model performance and resource usage. Experiments are conducted on two core NLP tasks, sentiment analysis and named entity recognition, using benchmark Vietnamese datasets and pre-trained models such as PhoBERT, BARTpho, and ViT5. The results show that LoRA achieves accuracy comparable to full fine-tuning while significantly reducing training cost, especially in transformer-based architectures. These findings suggest that LoRA is a viable and efficient alternative to full fine-tuning for adapting Vietnamese PLMs in low-resource environments. Our work provides practical insights and experimental benchmarks to support informed decision-making in selecting fine-tuning strategies for Vietnamese NLP applications.
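For illustration, the snippet below is a minimal sketch of how LoRA fine-tuning of a Vietnamese PLM such as PhoBERT can be configured, assuming the Hugging Face transformers and peft libraries; the checkpoint name, LoRA rank, and other hyperparameters are illustrative rather than the authors' exact settings.

# Minimal sketch (not the paper's exact setup): wrapping PhoBERT with LoRA
# for sentiment classification using Hugging Face transformers + peft.
from transformers import AutoModelForSequenceClassification, AutoTokenizer
from peft import LoraConfig, get_peft_model, TaskType

model_name = "vinai/phobert-base"  # PhoBERT encoder discussed in the paper
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=3)

# LoRA injects trainable low-rank matrices into the attention projections
# while the original pre-trained weights stay frozen.
lora_config = LoraConfig(
    task_type=TaskType.SEQ_CLS,
    r=8,                       # rank of the low-rank update (assumed value)
    lora_alpha=16,             # scaling factor for the update
    lora_dropout=0.1,
    target_modules=["query", "value"],  # attention projections in BERT/RoBERTa-style models
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of parameters are trainable

The wrapped model can then be trained with a standard Trainer loop; full fine-tuning corresponds to skipping the LoRA wrapping and updating all parameters instead.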

Published

27-10-2025

How to Cite

Van Nguyen, X., Nhat Van Luu, H., Dinh, H., & Minh Dinh, H. (2025). Fine-Tuning Pre-Trained Language Models for Vietnamese: A Comparative Study of Full Fine-Tuning and LoRA. HUFLIT Journal of Science, 9(3), 36. Retrieved from https://hjs.huflit.edu.vn/index.php/hjs/article/view/275

Section

Science and Technology
