Retinal fundus images are widely utilized in the clinical screening and diagnosis of ocular diseases. However, in practical scenarios, the acquired fundus images are often of low resolution (LR). LR fundus images increase the uncertainty in clinical observations, thereby heightening the risk of misdiagnosis. Despite this, popular super-resolution methods often yield unsatisfactory results, especially when fundus images suffer from multiple degradations. In this article, we first analyze the degradation models of fundus images in the super-resolution reconstruction (SR) process. We then design a dual-branch U-Net network for retinal fundus image SR that incorporates our proposed multi-scale residual encoder (MSRE) with a channel and spatial attention block (CSAB), preserving both anatomical retinal structures and global features during the SR process. Extensive experimental results on synthetically degraded LR fundus images demonstrate that our method produces favorable results across multiple degradation models.
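To make the synthetic-degradation setting concrete, the sketch below implements the classical blur, downsample, and additive-noise degradation model commonly assumed in blind SR work. This is an illustrative assumption: the abstract does not specify the paper's exact degradation pipeline (e.g., kernel type, scale factors, or whether compression artifacts are modeled), and all function names here are hypothetical.

```python
# Minimal sketch of a classical SR degradation model:
#   LR = downsample(HR * blur_kernel, s) + noise
# (assumed for illustration; the paper's actual degradations may differ)
import numpy as np

def gaussian_kernel(size=7, sigma=1.5):
    """2-D isotropic Gaussian blur kernel, normalized to sum to 1."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return k / k.sum()

def degrade(hr, scale=4, sigma=1.5, noise_std=0.01, seed=0):
    """Blur the HR image, decimate by `scale`, then add Gaussian noise."""
    k = gaussian_kernel(sigma=sigma)
    pad = k.shape[0] // 2
    padded = np.pad(hr, pad, mode="reflect")
    # Naive 2-D convolution for clarity; use scipy.ndimage in practice.
    blurred = np.zeros_like(hr, dtype=float)
    for i in range(hr.shape[0]):
        for j in range(hr.shape[1]):
            blurred[i, j] = np.sum(padded[i:i + k.shape[0],
                                          j:j + k.shape[1]] * k)
    lr = blurred[::scale, ::scale]                   # s-fold decimation
    rng = np.random.default_rng(seed)
    lr = lr + rng.normal(0.0, noise_std, lr.shape)   # sensor noise
    return np.clip(lr, 0.0, 1.0)

hr = np.full((64, 64), 0.5)      # stand-in for a normalized fundus image
lr = degrade(hr, scale=4)
print(lr.shape)                  # (16, 16)
```

An SR network trained under this model learns to invert the blur and decimation jointly; varying `sigma`, `scale`, and `noise_std` yields the "multiple degradation models" regime described above.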