Image restoration is a fundamental challenge in computer vision, aiming to recover high-quality images from their degraded, low-quality counterparts. The problem spans many domains, including photography, medical imaging, and autonomous systems. The rise of deep learning has driven substantial progress in this area, introducing techniques such as Convolutional Neural Networks (CNNs), Generative Adversarial Networks (GANs), Transformers, and Diffusion Models (DMs). Each of these approaches, however, has its own limitations: CNNs often fail to capture long-range dependencies effectively, DMs depend on resource-intensive iterative denoising, and Transformers incur quadratic complexity as the input size grows. State-space models, particularly Mamba, have recently attracted considerable interest as promising alternatives due to their linear complexity. However, Mamba's inherently causal modeling restricts its ability to capture spatial relationships in image data. Prior work has attempted to alleviate this shortcoming through multi-directional scanning, but at the cost of increased computational complexity.
To address this challenge, we propose Graph Vision Mamba (GVMamba), a novel framework that integrates a Graph Neural Network (GNN) into the Mamba architecture. By leveraging GNNs, our model enhances spatial information flow and enables interaction among image features while preserving computational efficiency. Experimental results demonstrate that GVMamba outperforms existing state-space and Transformer-based models on image restoration tasks such as raindrop removal and rain-streak removal (deraining), offering a scalable and effective solution for real-world applications.
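The core idea — letting graph message passing supply non-causal spatial context before a causal state-space scan — can be illustrated with a minimal sketch. This is an illustrative toy in NumPy, not the paper's actual implementation: the function names (`grid_adjacency`, `gnn_propagate`, `causal_scan`, `gvmamba_block`), the 4-neighbor grid graph, the mean-aggregation GNN step, and the scalar-decay recurrence are all simplifying assumptions.

```python
import numpy as np

def grid_adjacency(h, w):
    # Hypothetical graph construction: 4-neighbor adjacency over an h x w patch grid.
    n = h * w
    A = np.zeros((n, n))
    for i in range(h):
        for j in range(w):
            u = i * w + j
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ni, nj = i + di, j + dj
                if 0 <= ni < h and 0 <= nj < w:
                    A[u, ni * w + nj] = 1.0
    return A

def gnn_propagate(X, A):
    # One mean-aggregation message-passing step with a residual connection:
    # each patch feature is updated with the average of its neighbors' features.
    deg = A.sum(axis=1, keepdims=True)
    return X + (A @ X) / np.maximum(deg, 1.0)

def causal_scan(X, a=0.9):
    # Simplified stand-in for a Mamba-style selective scan:
    # a linear recurrence h_t = a * h_{t-1} + x_t over the flattened sequence.
    H = np.zeros_like(X)
    h = np.zeros(X.shape[1])
    for t in range(X.shape[0]):
        h = a * h + X[t]
        H[t] = h
    return H

def gvmamba_block(X, h, w):
    # Sketch of the combined block: graph propagation injects spatial context
    # that the purely causal scan alone could not see.
    return causal_scan(gnn_propagate(X, grid_adjacency(h, w)))
```

On a 2x2 grid of all-ones features, each patch has two neighbors, so one propagation step maps every feature from 1.0 to 2.0 before the scan accumulates along the sequence, showing how spatial and sequential mixing compose.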