Practical penetration testing is time-intensive and often deviates from theory. We designed a large language model for the penetration-testing domain, with three key contributions: (1) a specialized LLM trained on 300+ high-quality practical write-ups that combine hacking techniques, tools, and general conversational structure; in extensive evaluations, our model outperforms larger state-of-the-art open-source general models as well as other pentesting LLMs. (2) The novel Finding, Action, Reasoning, Result (FARR) Flow augmentation, which compresses penetration-testing knowledge into a modular format for diverse evaluation. (3) The Automated Security Pentesting Intelligence for Reasoning Evaluation (ASPIRE) benchmark, which simulates dynamic pentesting scenarios. Our model excels at guiding users, especially on insane-difficulty machines.