In this thesis, I investigate a Semantic Communication (SemCom) system, focusing on computation and communication efficiency as well as overall system performance. To enhance the efficiency of SemCom systems, I introduce a novel metric termed "distortion resilience," which allows the model's predictions at the receiver to be estimated without requiring explicit feedback, thereby improving the system's robustness and reducing communication overhead.
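To make the idea concrete, the sketch below shows one way a transmitter could score distortion resilience: it probes a local copy of the task model under simulated channel noise and measures how often the clean predictions survive. The function name, the SNR grid, and the AWGN noise model are illustrative assumptions, not the metric's actual definition in this thesis.

```python
import torch

def distortion_resilience(model, features, snr_db_grid=(0, 5, 10, 15, 20), n_trials=10):
    """Hypothetical probe: fraction of predictions unchanged under
    simulated AWGN, averaged over an SNR grid (illustrative only)."""
    model.eval()
    with torch.no_grad():
        clean_pred = model(features).argmax(dim=-1)   # predictions on clean features
        sig_power = features.pow(2).mean()
        scores = []
        for snr_db in snr_db_grid:
            noise_power = sig_power / (10 ** (snr_db / 10))
            agree = 0.0
            for _ in range(n_trials):
                noisy = features + noise_power.sqrt() * torch.randn_like(features)
                agree += (model(noisy).argmax(dim=-1) == clean_pred).float().mean()
            scores.append(agree / n_trials)
    return torch.stack(scores).mean().item()
```

A score near 1 suggests the receiver's predictions are stable under the current channel conditions, so the transmitter can proceed without requesting feedback.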
To strengthen encoding and decoding performance at both the transmitter and the receiver, I propose a semantic knowledge extraction and aggregation framework for Deep Joint Source-Channel Coding (DJSCC). This approach significantly improves the accuracy of both data reconstruction and task-oriented SemCom while minimizing the communication load.
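As background, the sketch below shows a minimal DJSCC-style pipeline: an encoder, a differentiable AWGN channel layer, and a decoder trained jointly on a reconstruction loss. The architecture, dimensions, and SNR are illustrative placeholders, not the framework proposed here.

```python
import torch
import torch.nn as nn

class AWGNChannel(nn.Module):
    """Differentiable AWGN channel: power-normalize, then add noise."""
    def __init__(self, snr_db=10.0):
        super().__init__()
        self.snr_db = snr_db

    def forward(self, z):
        z = z / z.pow(2).mean().sqrt().clamp_min(1e-8)  # unit average power
        noise_std = 10 ** (-self.snr_db / 20)           # std from the SNR definition
        return z + noise_std * torch.randn_like(z)

class DJSCC(nn.Module):
    def __init__(self, in_dim=784, latent_dim=32):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU(),
                                     nn.Linear(256, latent_dim))
        self.channel = AWGNChannel()
        self.decoder = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                                     nn.Linear(256, in_dim))

    def forward(self, x):
        return self.decoder(self.channel(self.encoder(x)))

model = DJSCC()
x = torch.rand(8, 784)
loss = nn.functional.mse_loss(model(x), x)  # end-to-end reconstruction loss
loss.backward()                             # gradients flow through the channel
```

Because the channel sits inside the computation graph, the encoder learns representations that are robust to the expected noise, which is the core idea the proposed extraction and aggregation framework builds on.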
Existing training paradigms, namely centralized learning and localized learning, suffer from privacy concerns, communication overhead, and domain shift across users. To address these limitations, I propose an efficient Federated Learning (FL) framework for training the DJSCC models within SemCom systems.
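The aggregation step of such a framework can be illustrated with standard FedAvg (McMahan et al.), sketched below: each client trains its DJSCC model locally, and the server computes a data-weighted average of the parameters, so raw data never leaves the clients. This is only a baseline sketch, not the thesis's actual framework.

```python
import copy
import torch

def fedavg(global_model, client_models, client_sizes):
    """Baseline FedAvg: weighted average of client parameters,
    with weights proportional to local dataset sizes."""
    total = float(sum(client_sizes))
    new_state = copy.deepcopy(global_model.state_dict())
    for key in new_state:
        if new_state[key].is_floating_point():  # skip integer buffers
            new_state[key] = sum(
                (n / total) * cm.state_dict()[key]
                for cm, n in zip(client_models, client_sizes)
            )
    global_model.load_state_dict(new_state)
    return global_model
```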
To further improve the communication efficiency of FL-based SemCom, I introduce a high-compression strategy for model transmission that reduces the bandwidth required for FL updates. Additionally, I propose two novel server-side model integration techniques that mitigate domain shift among heterogeneous users, thereby improving model generalization and convergence across the distributed system.
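One common realization of such a high-compression strategy is top-k sparsification of the model update, sketched below: only the largest-magnitude entries and their indices are transmitted, and the server scatters them back into a dense tensor. The ratio and function names are illustrative assumptions; the thesis's compression strategy and its two server-side integration techniques are not reproduced here.

```python
import torch

def compress_update(update, ratio=0.01):
    """Hypothetical top-k sparsifier: keep the largest-magnitude
    fraction of entries, sending only indices and values."""
    flat = update.reshape(-1)
    k = max(1, int(ratio * flat.numel()))
    _, indices = flat.abs().topk(k)
    return indices, flat[indices]

def decompress_update(indices, values, shape):
    """Server side: scatter received values into a dense update."""
    flat = torch.zeros(shape).reshape(-1)
    flat[indices] = values
    return flat.reshape(shape)

delta = torch.randn(256, 128)                        # a client's model update
idx, vals = compress_update(delta, ratio=0.01)       # ~1% of the entries
restored = decompress_update(idx, vals, delta.shape)
```

At a 1% ratio the uplink payload shrinks by roughly two orders of magnitude (ignoring index overhead), which is the kind of bandwidth saving the compression strategy targets.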