Fine-Tuning Pre-Trained Language Models for Vietnamese: A Comparative Study of Full Fine-Tuning and LoRA
Abstract
Pre-trained language models (PLMs) have brought significant improvements to Vietnamese natural language processing tasks. However, full fine-tuning of these large models remains resource-intensive and poses challenges for settings with limited computational capacity. This paper presents a comparative study between full fine-tuning and Low-Rank Adaptation (LoRA), a parameter-efficient fine-tuning method, focusing on the trade-off between model performance and resource usage. Experiments are conducted on two core NLP tasks, sentiment analysis and named entity recognition, using benchmark Vietnamese datasets and pre-trained models such as PhoBERT, BARTpho, and ViT5. The results show that LoRA achieves accuracy comparable to full fine-tuning while significantly reducing training cost, especially in transformer-based architectures. These findings suggest that LoRA is a viable and efficient alternative to full fine-tuning for adapting Vietnamese PLMs in low-resource environments. Our work provides practical insights and experimental benchmarks to support informed decision-making in selecting fine-tuning strategies for Vietnamese NLP applications.
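To make the parameter-efficiency argument concrete, the following is a minimal NumPy sketch of the LoRA idea, not the paper's actual experimental setup: instead of updating a full weight matrix W, LoRA freezes W and learns a low-rank update B·A. The dimensions below (hidden size 768, as in PhoBERT-base, and rank r = 8) are illustrative assumptions.

```python
import numpy as np

# Illustrative dimensions (assumptions, not the paper's configuration):
# d = 768 matches the PhoBERT-base hidden size; rank r = 8 is a common choice.
d_out, d_in, r = 768, 768, 8

rng = np.random.default_rng(0)
W = rng.standard_normal((d_out, d_in))      # frozen pre-trained weight
A = rng.standard_normal((r, d_in)) * 0.01   # trainable low-rank factor
B = np.zeros((d_out, r))                    # trainable, zero-initialized

x = rng.standard_normal(d_in)
# Forward pass with the LoRA update: h = (W + B @ A) @ x.
# Because B starts at zero, the adapted model initially matches the base model.
h = W @ x + B @ (A @ x)

full_params = d_out * d_in        # parameters updated by full fine-tuning
lora_params = r * (d_out + d_in)  # parameters updated by LoRA
print(full_params, lora_params)
```

For this layer, LoRA trains r·(d_out + d_in) = 12,288 parameters versus 589,824 for full fine-tuning, about 2% of the layer, which is the source of the training-cost reduction the abstract reports.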
