Complex & Intelligent Systems (Feb 2025)
Incremental data modeling based on neural ordinary differential equations
Abstract
With the development of data acquisition technology, large amounts of time-series data can be collected. However, processing more data than necessary wastes resources, so it is important to determine the minimum data size required for training. In this paper, an incremental-learning framework for neural ordinary differential equations (neural ODEs) is discussed, which enhances learning ability and determines the minimum data size required in data modeling compared with standard neural ODEs. The framework continuously updates the neural ODE with newly added data while avoiding the addition of extra parameters; once a preset accuracy is reached, the minimum data size needed for training is determined. Furthermore, the minimum data size required for five classic models under various sampling rates is discussed. By incorporating new data, the framework improves accuracy without increasing the depth or width of the neural network, and the close integration of data generation and training significantly reduces the total time required. Theoretical analysis confirms convergence, while numerical results demonstrate that the framework offers superior predictive ability and reduced computation time compared with traditional neural differential equations.
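The incremental loop described above (add data in batches, warm-start the same parameters so no extra parameters are introduced, and stop once a preset accuracy is reached) can be sketched as follows. This is a minimal illustration, not the paper's method: a simple linear model fitted by gradient descent stands in for the neural ODE, and the function names `train` and `incremental_fit` are hypothetical.

```python
import numpy as np

def train(params, X, y, lr=0.1, epochs=200):
    # Gradient-descent fit of a linear stand-in model y ~ w*x + b
    # (a neural ODE solver + adjoint gradients would go here instead).
    w, b = params
    for _ in range(epochs):
        pred = w * X + b
        w -= lr * 2 * np.mean((pred - y) * X)
        b -= lr * 2 * np.mean(pred - y)
    return w, b

def incremental_fit(stream, tol=1e-4):
    # Incrementally grow the training set batch by batch, warm-starting
    # the SAME parameters each time (no extra parameters are added).
    params = (0.0, 0.0)
    X, y = np.empty(0), np.empty(0)
    for xb, yb in stream:
        X, y = np.concatenate([X, xb]), np.concatenate([y, yb])
        params = train(params, X, y)
        mse = np.mean((params[0] * X + params[1] - y) ** 2)
        if mse < tol:
            break  # preset accuracy reached: len(X) is the minimum size
    return params, len(X)
```

The stopping test is what yields the minimum data size: the loop reports how many samples were consumed before the preset tolerance was met, rather than training on the whole stream up front.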
Keywords