Medical image fusion technology is widely used in clinical practice, helping doctors better understand lesion regions by combining multiparametric medical images. This paper proposes an automated fusion method based on a U-Net. Through neural network learning, a weight map is generated from the relationship between image feature information and a multifocus training target. MRI image pairs of prostate cancer (axial T2-weighted images and ADC maps) are then fused using a strategy based on local similarity and Gaussian pyramid transformation. Experimental results show that the proposed method enhances the appearance of prostate cancer in terms of both visual quality and objective evaluation metrics.
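The final fusion step described above, blending two registered images under a learned weight map via Gaussian/Laplacian pyramid decomposition, can be sketched as follows. This is a minimal illustrative implementation in NumPy, not the authors' code: the pyramid depth, blur kernel, and the assumption that the weight map comes from the network as a per-pixel value in [0, 1] are all choices made here for demonstration.

```python
import numpy as np

def gaussian_blur(img, k=5, sigma=1.0):
    """Separable Gaussian blur with edge padding (illustrative kernel choice)."""
    ax = np.arange(k) - k // 2
    g = np.exp(-ax**2 / (2 * sigma**2))
    g /= g.sum()
    pad = k // 2
    out = np.pad(img, ((pad, pad), (0, 0)), mode="edge")
    out = np.stack([np.convolve(out[:, j], g, mode="valid")
                    for j in range(out.shape[1])], axis=1)
    out = np.pad(out, ((0, 0), (pad, pad)), mode="edge")
    out = np.stack([np.convolve(out[i, :], g, mode="valid")
                    for i in range(out.shape[0])], axis=0)
    return out

def downsample(img):
    """Blur, then keep every second pixel."""
    return gaussian_blur(img)[::2, ::2]

def upsample(img, shape):
    """Nearest-neighbor expand to `shape`, then smooth."""
    up = np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)[:shape[0], :shape[1]]
    return gaussian_blur(up)

def laplacian_pyramid(img, levels):
    """Band-pass residuals at each level; coarsest level is a Gaussian image."""
    pyr, cur = [], img
    for _ in range(levels - 1):
        down = downsample(cur)
        pyr.append(cur - upsample(down, cur.shape))
        cur = down
    pyr.append(cur)
    return pyr

def gaussian_pyramid(img, levels):
    pyr = [img]
    for _ in range(levels - 1):
        img = downsample(img)
        pyr.append(img)
    return pyr

def pyramid_fuse(a, b, w, levels=3):
    """Blend images a and b per level, weighted by the (smoothed) weight map w."""
    la, lb = laplacian_pyramid(a, levels), laplacian_pyramid(b, levels)
    gw = gaussian_pyramid(w, levels)
    blended = [wl * al + (1.0 - wl) * bl for al, bl, wl in zip(la, lb, gw)]
    # Collapse the pyramid from coarsest to finest.
    out = blended[-1]
    for lvl in blended[-2::-1]:
        out = upsample(out, lvl.shape) + lvl
    return out
```

Because Laplacian reconstruction is exact and the blend is linear in the inputs, a constant weight map of 0.5 reduces this to a plain average, which makes the sketch easy to sanity-check; in the paper's setting the weight map would instead come from the U-Net and vary spatially with the local similarity of the T2-weighted and ADC inputs.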