IEEE Access (Jan 2022)

On the Memory Cost of EMD Algorithm

  • Hsu-Wen Vincent Young,
  • Yu-Chuan Lin,
  • Yung-Hung Wang

DOI
https://doi.org/10.1109/ACCESS.2022.3218417
Journal volume & issue
Vol. 10
pp. 114242–114251

Abstract

Empirical mode decomposition (EMD) and its variants are adaptive algorithms that decompose a time series into a few oscillatory components called intrinsic mode functions (IMFs). They are powerful signal processing tools and have been successfully applied in many applications. Previous research shows that EMD is an efficient algorithm with computational complexity $O(n)$ for a given number of IMFs, where $n$ is the signal length, but its memory cost is as large as $(13+m_{imf})n$, where $m_{imf}$ is the number of IMFs. This large memory requirement hinders many applications of EMD. A physical or physiological oscillation (PO) mode often consists of a single IMF or the sum of several adjacent IMFs. Let $m_{out}$ denote the number of PO modes; by definition, $m_{out}\le m_{imf}$. In this paper, we propose a low-memory implementation of EMD and prove that its memory cost can be reduced to $(2+m_{out})n$ without increasing the computational complexity, while producing the same results. Finally, we discuss the optimized memory requirements for different noise-assisted EMD algorithms.
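
The abstract's central idea, accumulating each finished IMF directly into the buffer of the PO mode it belongs to instead of keeping all IMFs separately, can be sketched in a few lines. The Python sketch below is only an illustration of that bookkeeping and not the authors' implementation: the `fake_imf_stream` generator is a hypothetical stand-in for real EMD sifting, the `groups` argument is an assumed way of specifying which adjacent IMFs form each PO mode, and the printed memory figures simply restate the $(13+m_{imf})n$ and $(2+m_{out})n$ bounds quoted above.

```python
# Minimal sketch of the PO-mode memory idea (NOT the paper's EMD code).
# The "IMFs" here come from a toy band-splitting stand-in, not real sifting.
import numpy as np


def fake_imf_stream(signal, m_imf):
    """Yield m_imf toy 'IMFs' one at a time (hypothetical stand-in for EMD).

    A real EMD would produce each IMF by iterative sifting; here we just
    peel off progressively smoother components so the example stays short,
    runnable, and exactly reconstructs the input when the IMFs are summed.
    """
    residue = signal.astype(float).copy()
    for k in range(m_imf):
        w = 2 ** (k + 1) + 1                      # growing moving-average window
        smooth = np.convolve(residue, np.ones(w) / w, mode="same")
        imf = residue - smooth                    # crude "high-frequency" part
        residue = smooth
        # the last yielded component carries the final residue as well
        yield imf if k < m_imf - 1 else imf + residue


def decompose_into_po_modes(signal, groups):
    """Accumulate adjacent IMFs into PO-mode buffers as they are produced.

    `groups` lists how many consecutive IMFs form each PO mode, e.g. [2, 3]
    means IMF1+IMF2 -> mode 1 and IMF3+IMF4+IMF5 -> mode 2.  Roughly only the
    signal, the current IMF, and the m_out output buffers are alive at any
    time, which is the (2 + m_out) * n idea from the abstract.
    """
    n = len(signal)
    m_imf = sum(groups)
    modes = np.zeros((len(groups), n))            # m_out buffers of length n
    bounds = np.cumsum(groups)                    # last IMF index of each group
    for k, imf in enumerate(fake_imf_stream(signal, m_imf)):
        mode_idx = int(np.searchsorted(bounds, k + 1))
        modes[mode_idx] += imf                    # add the IMF in place, then discard it
    return modes


if __name__ == "__main__":
    n = 4096
    t = np.linspace(0.0, 1.0, n)
    x = np.sin(2 * np.pi * 50 * t) + np.sin(2 * np.pi * 5 * t) + t   # toy signal
    groups = [2, 3]                               # m_imf = 5 IMFs -> m_out = 2 PO modes
    modes = decompose_into_po_modes(x, groups)
    m_imf, m_out = sum(groups), len(groups)
    # These two lines just restate the bounds quoted in the abstract.
    print("floats alive, all-IMF storage  ~", (13 + m_imf) * n)
    print("floats alive, PO-mode storage  ~", (2 + m_out) * n)
    print("reconstruction error:", np.max(np.abs(modes.sum(axis=0) - x)))
```

Summing the PO-mode buffers recovers the input exactly in this toy setting, which mirrors the paper's claim that the low-memory organization gives the same results as storing every IMF.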

Keywords