In this work, Greek Orthodox religious chants are compared using fuzzy entropy. Each recitation in a dataset of chant performances is segmented into overlapping time windows, and the fuzzy entropy of each window is computed in the frequency domain. We introduce a novel audio fingerprinting framework that compares the variations of the resulting fuzzy entropy vectors across the dataset, using the correlation coefficient and dynamic time warping as similarity measures. This makes it possible to match performances of the same chant with high probability. The proposed methodology thus provides a foundation for an audio fingerprinting method based on fuzzy entropy.
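The pipeline described above (overlapping windows, frequency-domain fuzzy entropy, then correlation- and DTW-based comparison) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the window size, hop, embedding dimension `m`, tolerance `r`, and the Gaussian similarity function inside the fuzzy entropy are illustrative assumptions.

```python
import numpy as np

def fuzzy_entropy(x, m=2, r=0.2):
    """Fuzzy entropy of a 1-D sequence.
    m: embedding dimension; r: tolerance as a fraction of the std.
    Uses a Gaussian similarity degree (an assumed choice)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    tol = r * np.std(x)

    def phi(dim):
        # Embed the sequence and remove each vector's local mean.
        vecs = np.array([x[i:i + dim] for i in range(n - dim)])
        vecs = vecs - vecs.mean(axis=1, keepdims=True)
        # Chebyshev distance between every pair of embedded vectors.
        d = np.max(np.abs(vecs[:, None, :] - vecs[None, :, :]), axis=2)
        sim = np.exp(-(d ** 2) / tol)  # fuzzy similarity degree
        np.fill_diagonal(sim, 0.0)     # exclude self-matches
        return sim.sum() / (len(vecs) * (len(vecs) - 1))

    return np.log(phi(m)) - np.log(phi(m + 1))

def entropy_vector(signal, win=256, hop=128, m=2, r=0.2):
    """Fuzzy entropy of the magnitude spectrum of each overlapping window."""
    out = []
    for start in range(0, len(signal) - win + 1, hop):
        frame = signal[start:start + win] * np.hanning(win)
        spectrum = np.abs(np.fft.rfft(frame))
        out.append(fuzzy_entropy(spectrum, m, r))
    return np.array(out)

def correlation(v1, v2):
    """Pearson correlation coefficient between two entropy vectors."""
    return np.corrcoef(v1, v2)[0, 1]

def dtw_distance(a, b):
    """Plain dynamic time warping with absolute-difference cost,
    for comparing entropy vectors of unequal length."""
    D = np.full((len(a) + 1, len(b) + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[-1, -1]
```

Under this sketch, two performances of the same chant would be expected to yield entropy vectors with a high correlation coefficient and a low DTW distance, while unrelated chants would not.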