In this study, we developed a Large Language Model (LLM) that automates the generation of product testing scripts. Our model is a fine-tuned version of polyglot-ko, a pre-trained model with 5.8 billion parameters. We applied Reinforcement Learning from Human Feedback (RLHF) to adapt the LLM to domain-specific data. The resulting fine-tuned model, named the LG model, is specialized for the task of generating product testing scripts. We collected and refined domain-specific data to train the model and conducted human evaluations to assess how effectively it assists with this task.
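To make the training setup concrete, the sketch below shows a minimal RLHF-style PPO loop over a causal language model. It assumes the Hugging Face trl library (the classic PPOTrainer API, which is version-dependent) and the public EleutherAI/polyglot-ko-5.8b checkpoint; the prompts, reward function, and hyperparameters are illustrative placeholders rather than the data and settings used in this study.

```python
# Illustrative RLHF fine-tuning sketch (assumes the classic Hugging Face trl PPO API).
# Prompts, reward function, and hyperparameters are placeholders for exposition only.
import torch
from datasets import Dataset
from transformers import AutoTokenizer
from trl import AutoModelForCausalLMWithValueHead, PPOConfig, PPOTrainer

model_name = "EleutherAI/polyglot-ko-5.8b"  # public base checkpoint

tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token

# Policy model with a value head (for PPO) and a frozen reference copy (for the KL penalty).
model = AutoModelForCausalLMWithValueHead.from_pretrained(model_name)
ref_model = AutoModelForCausalLMWithValueHead.from_pretrained(model_name)

# Toy prompts standing in for the refined domain-specific data described in the paper.
prompts = [
    "Write a test script for the power-on sequence.",
    "Write a test script for the network reconnection case.",
]
dataset = Dataset.from_dict({"query": prompts})
dataset = dataset.map(lambda x: {"input_ids": tokenizer(x["query"])["input_ids"]})
dataset.set_format(type="torch")

config = PPOConfig(model_name=model_name, learning_rate=1.41e-5,
                   batch_size=2, mini_batch_size=1)
collator = lambda data: {key: [d[key] for d in data] for key in data[0]}
ppo_trainer = PPOTrainer(config, model, ref_model, tokenizer,
                         dataset=dataset, data_collator=collator)

def reward_fn(text: str) -> float:
    # Placeholder: a real setup would score responses with a reward model
    # trained on human preference labels.
    return float(len(text.split()))

gen_kwargs = {"max_new_tokens": 64, "do_sample": True,
              "pad_token_id": tokenizer.eos_token_id}

for batch in ppo_trainer.dataloader:
    query_tensors = batch["input_ids"]
    response_tensors = []
    for query in query_tensors:
        response = ppo_trainer.generate(query, **gen_kwargs)
        response_tensors.append(response.squeeze()[len(query):])  # drop the prompt tokens
    batch["response"] = tokenizer.batch_decode(response_tensors)
    rewards = [torch.tensor(reward_fn(r)) for r in batch["response"]]
    ppo_trainer.step(query_tensors, response_tensors, rewards)  # PPO update with KL control
```

In practice, the reward signal comes from a reward model fitted to human preference data, and the KL penalty against the reference model keeps the fine-tuned policy from drifting too far from the pre-trained distribution.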