Emotion recognition, a challenging computational problem, finds interesting applications in diverse fields. Traditionally, feature-based machine-learning methods have been used for emotion recognition. However, these conventional shallow machine-learning methods often yield unsatisfactory results, as there is a tradeoff between feature dimensionality and classification accuracy. Moreover, extracting and selecting features from the spatial and frequency domains can be an additional burden. This work proposes a method that transforms EEG (electroencephalography) signals into topographic images containing both frequency and spatial information and uses a convolutional neural network (CNN) to classify emotion, since CNNs offer superior feature-extraction capability. In the proposed method, the topographic images are prepared from the relative power spectral density rather than the absolute power spectral density, which yields a remarkable improvement in classification accuracy. The proposed method is applied to the well-known SEED database and outperforms the current state-of-the-art.
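As an illustration of the kind of preprocessing the abstract describes, the sketch below computes per-channel relative power spectral density over standard EEG bands; it is not the authors' implementation, and the band boundaries, the per-channel normalisation, and the function name `relative_psd` are assumptions made for the example.

```python
# Minimal sketch (not the authors' code): per-channel relative PSD over
# standard EEG bands. Assumes `eeg` is a (n_channels, n_samples) array
# sampled at `fs` Hz; band limits and names here are illustrative.
import numpy as np
from scipy.signal import welch

BANDS = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 14),
         "beta": (14, 31), "gamma": (31, 50)}

def relative_psd(eeg, fs):
    # Welch PSD estimate for every channel at once
    freqs, psd = welch(eeg, fs=fs, nperseg=fs * 2, axis=-1)
    # Absolute power in each band, per channel -> (n_channels, n_bands)
    band_power = np.stack([
        psd[:, (freqs >= lo) & (freqs < hi)].sum(axis=-1)
        for lo, hi in BANDS.values()
    ], axis=-1)
    # Relative PSD: normalise each channel's band powers to sum to 1
    return band_power / band_power.sum(axis=-1, keepdims=True)

# Example: a 62-channel, 4-second segment at 200 Hz (SEED-like dimensions)
eeg = np.random.randn(62, 200 * 4)
rel = relative_psd(eeg, fs=200)  # values like these would feed the topographic maps
```

In such a pipeline, the relative band powers would then be interpolated onto a 2-D scalp layout to form the topographic images fed to the CNN; that interpolation step is omitted here.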