Recent advances in deep learning have shown significant potential for creating barrier-free environments for the hearing-impaired. However, the lack of diverse and specialized datasets limits the development of high-level services such as interactive museum exhibitions. In this paper, we introduce KSL-Ex, a Korean Sign Language (KSL) dataset designed to support real-world interactive exhibition services. Our dataset comprises 29,574 sign language video samples, including isolated words, continuous KSL sentences, question-answer pairs, and detailed annotations. In addition, we propose an interactive sign language question answering framework that leverages state-of-the-art large language models, which have demonstrated outstanding performance in question answering tasks. The proposed framework consists of four key components: the sign language recognition (SLR) model, the answering module, the refinement module, and the 3D sign language animation module. These components process sign language queries, generate appropriate answers, and produce sign language animations, providing an interactive communication interface for the hearing-impaired. Experimental results demonstrate the effectiveness of both KSL-Ex and the proposed framework, highlighting their potential for real-world interactive sign language services.