Domain Generalization (DG) seeks to develop a model, trained on multiple known source domains, that generalizes effectively to unseen target domains. A prominent strategy in DG involves training an encoder to produce domain-invariant representations. However, this approach becomes impractical in Federated Domain Generalization (FDG), where data from diverse domains are distributed across multiple clients, precluding the formulation of a unified loss function that leverages data from all domains simultaneously. To address this challenge, we propose a novel method termed Federated Learning via On-server Matching Gradient (FedOMG), which \emph{efficiently leverages domain-specific information from distributed domains}. Specifically, FedOMG employs local gradients as proxies for the characteristics of distributed models, identifying an invariant gradient direction across all domains by maximizing the inner product of these gradients. This approach offers two key benefits: 1) it enables the aggregation of distributed model features on a centralized server without incurring additional communication overhead, and 2) its on-server aggregation design renders FedOMG complementary to many existing Federated Learning (FL) and FDG methods, facilitating seamless integration for enhanced performance. Extensive experimental evaluations across diverse scenarios underscore the robustness of FedOMG, demonstrating its superior performance relative to other FL and FDG baselines. Notably, our method surpasses recent state-of-the-art approaches on four FL benchmark datasets (MNIST, EMNIST, CIFAR-10, and CIFAR-100) as well as three FDG benchmark datasets (PACS, VLCS, and OfficeHome).
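To make the gradient-matching idea concrete, the following is a minimal toy sketch, not the authors' exact algorithm: given the local gradients uploaded by clients, the server searches for a unit-norm direction whose inner product with every client gradient is as large as possible (here, by ascending on the worst-aligned client). All names (`invariant_direction`, the step size, and the iteration count) are illustrative assumptions.

```python
import numpy as np

def invariant_direction(client_grads, steps=500, lr=0.05):
    """Toy sketch (assumed, simplified form of on-server gradient matching):
    find a unit-norm direction d that stays aligned with all client
    gradients by repeatedly pushing d toward the worst-aligned gradient."""
    G = np.stack(client_grads)          # shape: (n_clients, dim)
    d = G.mean(axis=0)                  # initialize from the average gradient
    d /= np.linalg.norm(d)
    for _ in range(steps):
        worst = np.argmin(G @ d)        # client whose gradient is least aligned with d
        d += lr * G[worst]              # increase the smallest inner product
        d /= np.linalg.norm(d)          # project back onto the unit sphere
    return d

# Three mock "client" gradients from different domains
grads = [np.array([1.0, 0.2]), np.array([0.8, -0.1]), np.array([0.9, 0.4])]
d = invariant_direction(grads)
alignments = np.stack(grads) @ d        # inner products with each client gradient
```

Because the search runs entirely on the server over gradients the clients already transmit, it adds no communication overhead, matching the first benefit claimed above.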