Head-mounted displays are virtual reality devices that can be equipped with sensors and cameras that capture facial regions from which a user's heart rate can be measured. Heart rate is an essential vital sign that can be used to monitor users remotely in a variety of situations. No existing study predicts heart rate from these isolated facial regions alone, so an adaptation is required for beats-per-minute prediction. Likewise, no dataset contains only the eye and lower-face regions, necessitating the development of a simulation mechanism. This work aims to remotely estimate heart rate from the facial regions that the cameras of a head-mounted display can capture, using the state-of-the-art EVM-CNN and Meta-rPPG techniques. We developed a region-of-interest extractor that simulates a head-mounted display dataset using stabilization and video magnification techniques. We then combined a support vector machine with FaceMesh to locate the regions of interest, and adapted the photoplethysmography and beats-per-minute signal predictions to work with these techniques. We observed improvements of 188.88% for EVM-CNN and 55.93% for Meta-rPPG. In addition, both models were able to predict heart rate using only these facial regions as input. Moreover, the adapted Meta-rPPG outperformed the original work, while the EVM-CNN adaptation produced comparable results for the photoplethysmography signal.
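
To make the region-of-interest step concrete, the sketch below shows one plausible way to crop the eye and lower-face regions with MediaPipe FaceMesh and OpenCV and reduce them to a per-frame intensity trace. The landmark index sets, padding, and the green-channel mean as a raw photoplethysmography proxy are illustrative assumptions, not the paper's exact configuration.

```python
# A minimal sketch of ROI extraction for HMD-visible facial regions,
# assuming MediaPipe FaceMesh for landmarks and OpenCV for video I/O.
import cv2
import mediapipe as mp
import numpy as np

# Assumed FaceMesh landmark indices roughly delimiting the eye and
# lower-face regions a head-mounted display's cameras could see.
EYE_IDXS = [33, 133, 362, 263, 159, 386, 145, 374]
LOWER_FACE_IDXS = [61, 291, 152, 199, 17, 0]

def roi_bbox(landmarks, idxs, w, h, pad=10):
    """Bounding box (x0, y0, x1, y1) around a set of normalized landmarks."""
    xs = [int(landmarks[i].x * w) for i in idxs]
    ys = [int(landmarks[i].y * h) for i in idxs]
    return (max(min(xs) - pad, 0), max(min(ys) - pad, 0),
            min(max(xs) + pad, w), min(max(ys) + pad, h))

def extract_roi_signal(video_path):
    """Mean green-channel intensity per frame over the eye and lower-face
    ROIs, a crude stand-in for the rPPG signal fed to the models."""
    cap = cv2.VideoCapture(video_path)
    signal = []
    with mp.solutions.face_mesh.FaceMesh(max_num_faces=1) as face_mesh:
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            h, w = frame.shape[:2]
            results = face_mesh.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
            if not results.multi_face_landmarks:
                continue  # no face detected in this frame
            lm = results.multi_face_landmarks[0].landmark
            means = []
            for idxs in (EYE_IDXS, LOWER_FACE_IDXS):
                x0, y0, x1, y1 = roi_bbox(lm, idxs, w, h)
                means.append(frame[y0:y1, x0:x1, 1].mean())  # green channel
            signal.append(np.mean(means))
    cap.release()
    return np.array(signal)
```

In the full pipeline described above, the cropped regions would first pass through the stabilization and video magnification stages before the adapted EVM-CNN and Meta-rPPG models predict the beats-per-minute value.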