Jisuanji kexue yu tansuo (Jun 2023)
Transfer Learning Boosting for Weight Optimization Under Multi-source Domain Distribution
Abstract
The deep decision tree transfer learning boosting method (DTrBoost) can only fit training data drawn from a single source domain and a single target domain; it cannot accommodate samples from several source domains with different distributions. Moreover, DTrBoost learns source-domain data into the target-domain model synchronously, without weighting the transferred knowledge by its importance. In practice, data partitioned by one or more features of a dataset often follow inconsistent distributions, these distributions contribute unequally to the final model, and the knowledge-transfer weights should therefore differ. To address this problem, a transfer learning method with multi-source-domain weight optimization is proposed. The main idea is to compute the KL divergence from each differently distributed source-domain space to the target domain, and to use the ratios of these KL divergences to set the learning-weight proportion parameters for the source-domain samples, thereby optimizing the overall gradient function so that learning proceeds in the direction of steepest gradient descent. The gradient descent algorithm enables the model to converge quickly, preserving both learning speed and transfer-learning effectiveness. Experimental results show that the proposed algorithm adaptively achieves better average performance overall: the average classification error rate decreases by 0.013 across all adopted datasets, and by as much as 0.030 on the OCR dataset.
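The KL-ratio weighting idea described above can be sketched as follows. This is a minimal illustration, not the paper's exact formulation: the functions `kl_divergence` and `source_weights`, the discrete histogram representation of each domain, and the inverse-KL weighting rule (sources closer to the target receive larger weight) are all assumptions made for the example.

```python
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    # Discrete KL divergence D(p || q); eps guards against log(0).
    p = np.asarray(p, dtype=float) + eps
    q = np.asarray(q, dtype=float) + eps
    p, q = p / p.sum(), q / q.sum()
    return float(np.sum(p * np.log(p / q)))

def source_weights(source_dists, target_dist):
    # Assumed weighting rule: each source domain is weighted inversely
    # to its KL distance from the target, then weights are normalized,
    # so distributions closer to the target contribute more to transfer.
    kls = np.array([kl_divergence(s, target_dist) for s in source_dists])
    inv = 1.0 / (kls + 1e-12)
    return inv / inv.sum()

# Toy example: two source domains as class-frequency histograms.
target = [0.5, 0.3, 0.2]
sources = [
    [0.5, 0.3, 0.2],  # matches the target, should dominate
    [0.2, 0.3, 0.5],  # farther from the target, smaller weight
]
w = source_weights(sources, target)
```

In a full implementation these weights would scale each source domain's contribution to the boosting gradient; here they simply sum to one and rank the domains by distributional closeness.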
Keywords