Mathematics (Jul 2022)

Using Value-Based Potentials for Making Approximate Inference on Probabilistic Graphical Models

  • Pedro Bonilla-Nadal,
  • Andrés Cano,
  • Manuel Gómez-Olmedo,
  • Serafín Moral,
  • Ofelia Paula Retamero

DOI
https://doi.org/10.3390/math10142542
Journal volume & issue
Vol. 10, no. 14
p. 2542

Abstract

The computerization of many everyday tasks generates vast amounts of data, and this has led to the development of machine-learning methods capable of extracting useful information from the data so that it can be used in future decision-making processes. For a long time now, a number of fields, such as medicine (and all healthcare-related areas) and education, have been particularly interested in obtaining relevant information from this stored data. This interest has resulted in the need to deal with increasingly complex problems that involve many different variables with a high degree of interdependency. This produces models (in our case, probabilistic graphical models) that are difficult to handle and that require very efficient techniques for storing and using the information that quantifies the relationships between the problem variables. It has therefore been necessary to develop efficient structures, such as probability trees or value-based potentials, to represent this information. Even so, there are problems that must be treated using approximation, since this is the only way to obtain results, despite the corresponding loss of information. The aim of this article is to show how such approximation can be performed with value-based potentials. Our experimental work checks the behavior of this approximation technique on several Bayesian networks related to medical problems, and our experiments show that in some cases there are notable savings in memory space with limited information loss.
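
To give an intuition for the idea described above, the following minimal Python sketch groups the configurations of a potential by their probability value (the core of a value-based representation) and shows how merging nearby values reduces the number of distinct entries that must be stored. The function name, the rounding-based merging rule, and the toy potential are illustrative assumptions for this sketch only; they are not the exact data structure or approximation scheme used in the article.

    # Illustrative sketch of a value-driven potential and a simple
    # approximation step (NOT the paper's exact scheme).
    from collections import defaultdict
    from itertools import product


    def value_based_potential(table, tol=0.0):
        """Group configurations by their (possibly rounded) probability value.

        table: dict mapping a configuration tuple -> probability value.
        tol:   with tol > 0, values that round to the same representative are
               merged, trading precision for fewer stored entries.
        """
        grouped = defaultdict(list)
        for config, value in table.items():
            key = round(value / tol) * tol if tol > 0 else value
            grouped[key].append(config)
        return dict(grouped)


    if __name__ == "__main__":
        # Toy potential over two binary variables (X, Y).
        states = list(product([0, 1], repeat=2))
        probs = [0.301, 0.299, 0.2, 0.2]
        table = dict(zip(states, probs))

        exact = value_based_potential(table)             # 3 distinct values stored
        approx = value_based_potential(table, tol=0.01)  # 2 distinct values stored
        print(len(exact), len(approx))

In this toy example the approximation collapses 0.301 and 0.299 onto a single representative value, so fewer distinct values (and their index sets) need to be kept, at the cost of a small loss of precision; this mirrors, in spirit, the memory-versus-accuracy trade-off evaluated in the article.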

Keywords