Parameter learning from data with missing values

Parameter estimators for complete data

Most approaches to parameter learning assume that local distributions are independent of each other in order to estimate their parameters individually. In other words, the parameter estimates for any local distribution are computed independently of those for the other local distributions. As a result, estimating the parameters of a local distribution only requires the data observed for the variables involved in that specific local distribution (the node and its parents).

This allows bn.fit() to handle missing values transparently by only using locally complete observations for each local distribution. For instance, in the case of discrete Bayesian networks we have:

> dag = model2network("[A][C][F][B|A][D|A:C][E|B:F]")
> missing = matrix(FALSE, 4000, ncol(learning.test))
> missing[sample(length(missing), 1000)] = TRUE
> incomplete = learning.test[1:4000, ]
> incomplete[missing] = NA
> fitted = bn.fit(dag, incomplete)

To estimate the conditional probability table of B given A,

> fitted$B

  Parameters of node B (multinomial distribution)

Conditional probability table:
 
   A
B            a          b          c
  a 0.85501243 0.44665605 0.12171053
  b 0.02485501 0.22531847 0.09457237
  c 0.12013256 0.32802548 0.78371711

the relevant data can be summarized as:

> table(incomplete[, c("B", "A")], useNA = "always")
      A
B         a    b    c <NA>
  a    1032  561  148   85
  b      30  283  115   19
  c     145  412  953   65
  <NA>   55   43   50    4

and the maximum likelihood estimates of the conditional probabilities are computed from the counts arising from the observations in which both A and B are observed.

> prop.table(table(incomplete[, c("B", "A")], useNA = "no"), "A")
   A
B            a          b          c
  a 0.85501243 0.44665605 0.12171053
  b 0.02485501 0.22531847 0.09457237
  c 0.12013256 0.32802548 0.78371711
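
This equivalence can also be checked programmatically with a short sketch, using coef(), which extracts the conditional probability table of a discrete node:

# compare the fitted conditional probabilities of B with the proportions
# computed directly from the locally complete observations.
all.equal(as.vector(coef(fitted$B)),
          as.vector(prop.table(table(incomplete[, c("B", "A")], useNA = "no"), "A")))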

Bayesian estimates of the conditional probabilities are computed in the same way, but incorporate a prior distribution.
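
As an illustration of this point, the sketch below reproduces posterior estimates from the same counts under a flat prior; spreading the imaginary sample size iss evenly over the cells of the table is an assumption made here for illustration and may not match bn.fit(method = "bayes") exactly.

# a sketch of Bayesian estimation: add flat-prior pseudo-counts to the observed
# counts before normalizing within each parent configuration (margin 2 is A).
iss = 1
counts = table(incomplete[, c("B", "A")], useNA = "no")
prior = iss / length(counts)
prop.table(counts + prior, margin = 2)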

As a second example, in the case of Gaussian Bayesian networks we have:

> dagG = model2network("[A][B][E][G][C|A:B][D|B][F|A:D:E:G]")
> missing = matrix(FALSE, nrow(gaussian.test), ncol(gaussian.test))
> missing[sample(length(missing), 1000)] = TRUE
> incompleteG = gaussian.test
> incompleteG[missing] = NA
> fitted = bn.fit(dagG, incompleteG)

and the maximum likelihood estimates of the regression coefficients in the local distribution of C against A and B are identical to those produced by lm() when na.action = na.omit.

> fitted$C

  Parameters of node C (Gaussian distribution)

Conditional density: C | A + B
Coefficients:
(Intercept)            A            B  
   2.003179     1.994596     1.999359  
Standard deviation of the residuals: 0.5088955
> lm(C ~ A + B, data = incompleteG, na.action = na.omit)

Call:
lm(formula = C ~ A + B, data = incompleteG, na.action = na.omit)

Coefficients:
(Intercept)            A            B  
      2.003        1.995        1.999

The Expectation-Maximization (EM) algorithm

On the other hand, the Expectation-Maximization (EM) algorithm is explicitly designed to incorporate incomplete data into parameter estimation. In its hard-EM form, the two steps are implemented as follows (a rough sketch of the resulting loop is shown after the list):

  • missing data are imputed in the expectation step using the current parameter estimates;
  • parameters are estimated in the maximization step using the current completed data.
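
The sketch below writes this loop in terms of impute() and bn.fit(); the helper function hard.em.sketch() and its stopping rule are illustrative and do not reproduce the internal implementation exactly.

# a schematic hard-EM loop: impute the missing values with the current
# parameters, refit on the completed data, and stop when the log-likelihood
# stops improving by more than the threshold (per observation).
hard.em.sketch = function(dag, data, start, max.iter = 5, threshold = 0.001) {

  current = start
  previous.loglik = -Inf

  for (iter in seq_len(max.iter)) {

    completed = impute(current, data = data, method = "bayes-lw")  # expectation step
    current = bn.fit(dag, completed)                               # maximization step

    current.loglik = logLik(current, completed)
    if ((current.loglik - previous.loglik) / nrow(data) < threshold)
      break
    previous.loglik = current.loglik

  }

  current
}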

These two steps are reflected in the EM implementations for discrete (method = "hard-em"), Gaussian (method = "hard-em-g") and conditional Gaussian networks (method = "hard-em-cg"). Their optional arguments impute and fit control the methods used in the expectation and maximization steps:

> fitted = bn.fit(dag, incomplete, method = "hard-em", impute = "bayes-lw", fit = "bayes")
> fitted$B

  Parameters of node B (multinomial distribution)

Conditional probability table:
 
   A
B            a          b          c
  a 0.86152176 0.45410868 0.11969241
  b 0.02263426 0.22001325 0.09336342
  c 0.11584398 0.32587806 0.78694417

Optional arguments can be passed to either method using the impute.args and fit.args arguments; if none are provided, the default values from impute() and bn.fit() will be used.

> fitted = bn.fit(dag, incomplete, method = "hard-em",
+            impute = "bayes-lw", impute.args = list(n = 1000),
+            fit = "bayes", fit.args = list(iss = 2))
> fitted$B

  Parameters of node B (multinomial distribution)

Conditional probability table:
 
   A
B            a          b          c
  a 0.86042817 0.45407867 0.11881519
  b 0.02274628 0.22078675 0.09328327
  c 0.11682556 0.32513458 0.78790154

The number of iterations of the EM algorithm is controlled by the max.iter argument (defaulting to 5 iterations) and by the threshold argument (defaulting to a minimum increase in the relative likelihood of 0.001, after dividing by the sample size).

> flat.prior = bn.fit(dag, incomplete[1:5, ], method = "bayes", iss = 10^4)
> fitted = bn.fit(dag, incomplete, method = "hard-em", start = flat.prior, max.iter = 1, debug = TRUE)
* expectation-maximization iteration 1 .
  > the relative difference in the parameters is: 7.443652 .
  > the relative difference in log-likelihood is: 1 .
> fitted = bn.fit(dag, incomplete, method = "hard-em", start = flat.prior, threshold = 1e-10, debug = TRUE)
* expectation-maximization iteration 1 .
  > the relative difference in the parameters is: 7.328336 .
  > the relative difference in log-likelihood is: 1 .
* expectation-maximization iteration 2 .
  > the relative difference in the parameters is: 1.054906 .
  > the relative difference in log-likelihood is: 0.06426866 .
* expectation-maximization iteration 3 .
  > the relative difference in the parameters is: 0.06677718 .
  > the relative difference in log-likelihood is: 0.0003385997 .
  @ the difference in log-likelihood is smaller than the threshold, stopping.

As suggested in the book by Koller & Friedman, convergence should be assessed by computing the likelihood of a data set other than the one used to estimate the parameters, in order to avoid overfitting. This is possible with the newdata argument.

> fitted = bn.fit(dag, incomplete, method = "hard-em", newdata = learning.test[4001:5000, ])

Furthermore, Koller & Friedman suggest initializing the EM algorithm with different parameter values to avoid converging to a local maximum. The start argument can be used to pass a bn.fit object that will be used to perform the initial imputation and to compute the initial value of the log-likelihood. Note that this bn.fit object can encode a network with a different structure from that of the network whose parameters we are estimating.

> start.dag = pdag2dag(chow.liu(learning.test), ordering = names(learning.test))
> start.bn = bn.fit(start.dag, learning.test)
> fitted = bn.fit(dag, incomplete, method = "hard-em", start = start.bn)
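
One possible way of combining several starting points, sketched below, is to perturb the starting network by refitting it on bootstrap resamples and to keep the candidate with the largest log-likelihood on held-out data; both choices are assumptions made here for illustration rather than recommendations from the text.

# fit from several different starting networks and keep the estimate with the
# largest log-likelihood on held-out data.
holdout = learning.test[4001:5000, ]
candidates = lapply(1:5, function(i) {

  # perturb the starting point by refitting it on a bootstrap resample.
  boot = learning.test[sample(nrow(learning.test), replace = TRUE), ]
  bn.fit(dag, incomplete, method = "hard-em",
         start = bn.fit(start.dag, boot), newdata = holdout)

})
scores = sapply(candidates, logLik, data = holdout)
best = candidates[[which.max(scores)]]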

The start argument is also required to initialize EM when at least one of the variables in the data is latent, that is, when it takes value NA for all observations. By default, the starting network would be a bn.fit object estimated from the locally complete observations, but no such observations exist for the local distributions that involve a latent variable.
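
For instance, in the sketch below one of the variables is made latent by hand and EM is initialized from a network fitted on a separate, complete data set; both choices are assumptions made for illustration.

# make F latent: it takes value NA for all observations.
latent = incomplete
latent$F = factor(rep(NA, nrow(latent)), levels = levels(learning.test$F))

# initialize EM from a network fitted on complete data, since there are no
# locally complete observations involving F.
start.bn = bn.fit(dag, learning.test)
fitted = bn.fit(dag, latent, method = "hard-em", impute = "bayes-lw",
                start = start.bn)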

Last updated on Sat Feb 17 23:41:42 2024 with bnlearn 5.0-20240208 and R version 4.3.2 (2023-10-31).