Score-based structure learning from data with missing values
Score-based algorithms in the literature are typically defined in terms of a generic score function used to compare different network structures. However, most network scores assume that the data are complete.
The Structural Expectation-Maximization (Structural EM) algorithm
A possible approach to sidestep this limitation is the Structural EM algorithm by Nir Friedman (link), which scores candidate network structures on completed data by iterating over:
- an expectation (E) step, in which we complete the data by imputing the missing values from a fitted Bayesian network;
- a maximization (M) step, in which we learn a Bayesian network by maximizing a network score over the completed data (a rough sketch of this loop is shown below).
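To make the iteration concrete, here is a rough sketch of that loop written with bnlearn primitives (impute(), hc() and bn.fit()); it is only an illustration under simple assumptions (five iterations, hill-climbing, likelihood-weighting imputation, and a placeholder data frame incomplete.data with missing values), not a description of what structural.em() does internally.

# a rough sketch of the Structural EM loop; "incomplete.data" stands for
# a data frame containing missing values.
library(bnlearn)

# initial fit: an empty graph whose parameters are estimated from the
# locally complete observations, used to seed the first imputation.
current.fit = bn.fit(empty.graph(names(incomplete.data)), data = incomplete.data)

for (iter in seq(5)) {

  # E step: complete the data by imputing the missing values from the
  # currently fitted network.
  completed = impute(current.fit, data = incomplete.data, method = "bayes-lw")

  # M step: learn a new structure and refit its parameters by maximizing
  # a network score over the completed data.
  dag = hc(completed)
  current.fit = bn.fit(dag, data = completed)

}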
This algorithm is implemented in the structural.em() function in bnlearn
(documented here). The arguments of structural.em()
reflect its modular nature:
- maximize, the label of a score-based structure learning algorithm, and maximize.args, a list containing its arguments (other than the data);
- fit, the label of a parameter estimator in bn.fit() (documented here), and fit.args, a list containing its arguments (other than the data);
- impute, the label of an imputation method in impute() (documented here), and impute.args, a list containing its arguments (other than the data).
The number of iterations of the E and M steps is controlled by the max.iter argument, which defaults to 5.
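As an illustration of how these modular arguments fit together, the call below spells out the defaults explicitly (hill-climbing, maximum likelihood estimation and likelihood-weighting imputation, as described in the next section); incomplete.data stands for a data frame with missing values, such as the one constructed below, and the empty lists are just placeholders for the optional arguments.

# equivalent to structural.em(incomplete.data) with the defaults spelled out;
# "hc", "mle" and "bayes-lw" are the bnlearn labels for hill-climbing,
# maximum likelihood estimation and likelihood-weighting imputation.
dag = structural.em(incomplete.data,
        maximize = "hc", maximize.args = list(),
        fit = "mle", fit.args = list(),
        impute = "bayes-lw", impute.args = list(),
        max.iter = 5)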
With partially observed variables
Consider some simple MCAR data in which 100 values (2% of the 5000 observations) are missing for each variable.
> incomplete.data = learning.test
> for (col in seq(ncol(incomplete.data)))
+   incomplete.data[sample(nrow(incomplete.data), 100), col] = NA
With the default arguments, structural.em() uses hill-climbing as the structure learning algorithm,
maximum likelihood for estimating the parameters of the Bayesian network, and likelihood weighting to impute the
missing values.
> dag = structural.em(incomplete.data)
> dag
Bayesian network learned from Missing Data
model:
[A][C][F][B|A][D|A:C][E|B:F]
nodes: 6
arcs: 5
undirected arcs: 0
directed arcs: 5
average markov blanket size: 2.33
average neighbourhood size: 1.67
average branching factor: 0.83
learning algorithm: Structural EM
score-based method: Hill-Climbing
parameter learning method: Maximum Likelihood (disc.)
imputation method:
Posterior Expectation (Likelihood Weighting)
penalization coefficient: 4.258597
tests used in the learning procedure: 148
optimized: TRUE
We can change these defaults using the arguments listed above.
> dag = structural.em(incomplete.data,
+         maximize = "tabu", maximize.args = list(tabu = 50, max.tabu = 50),
+         fit = "bayes", fit.args = list(iss = 1),
+         impute = "exact", max.iter = 3)
> dag
Bayesian network learned from Missing Data
model:
[A][C][F][B|A][D|A:C][E|B:F]
nodes: 6
arcs: 5
undirected arcs: 0
directed arcs: 5
average markov blanket size: 2.33
average neighbourhood size: 1.67
average branching factor: 0.83
learning algorithm: Structural EM
score-based method: Tabu Search
parameter learning method: Bayesian Dirichlet
imputation method: Exact Inference
penalization coefficient: 4.258597
tests used in the learning procedure: 1600
optimized: TRUE
In particular, changing the default impute = "bayes-lw" to impute = "exact" may be
useful because the convergence of the Structural EM algorithm is not guaranteed if the imputation is performed using
approximate Monte Carlo inference. However, exact inference is usually much slower.
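A rough way to quantify that trade-off on a given data set is simply to time both variants; the sketch below reuses the incomplete.data from above, and the actual timings depend on the size of the data and on the complexity of the learned networks.

# compare the two imputation back-ends on the same incomplete data; exact
# inference usually takes noticeably longer than likelihood weighting.
system.time(structural.em(incomplete.data, impute = "bayes-lw"))
system.time(structural.em(incomplete.data, impute = "exact"))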
In addition, we can set the argument return.all to TRUE to have
structural.em() return its complete state at the last iteration: the network structure it has learned,
the completed data the structure was learned from, and the fitted Bayesian network used to perform the imputation.
> info = structural.em(incomplete.data, return.all = TRUE,
+          maximize = "tabu", maximize.args = list(tabu = 50, max.tabu = 50),
+          fit = "bayes", fit.args = list(iss = 1),
+          impute = "exact", max.iter = 3)
> names(info)
[1] "dag" "imputed" "fitted"
The network structure is the same as that returned when return.all = FALSE, which is the default.
> info$dag
Bayesian network learned from Missing Data
model:
[A][C][F][B|A][D|A:C][E|B:F]
nodes: 6
arcs: 5
undirected arcs: 0
directed arcs: 5
average markov blanket size: 2.33
average neighbourhood size: 1.67
average branching factor: 0.83
learning algorithm: Structural EM
score-based method: Tabu Search
parameter learning method: Bayesian Dirichlet
imputation method: Exact Inference
penalization coefficient: 4.258597
tests used in the learning procedure: 1600
optimized: TRUE
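We can check that programmatically: all.equal() applied to two bn objects compares their structures, so the comparison below (assuming the dag from the previous call is still in the workspace) should report that the two networks are identical.

# the structure returned with return.all = TRUE matches the one returned
# with the default return.all = FALSE.
all.equal(dag, info$dag)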
The completed data are stored in a data frame with the same structure as the original data.
> head(info$imputed)
  A B C D E F
1 b c b a b b
2 b a c a b b
3 a a a a a a
4 a a a a b b
5 a a b c a a
6 c c a c c a
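As a quick sanity check, the completed data should contain no remaining missing values and keep the dimensions of incomplete.data; for instance:

# no missing values should be left after imputation, and the data frame
# should have the same size as the original incomplete data.
anyNA(info$imputed)
dim(info$imputed)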
The fitted Bayesian network is a bn.fit object.
> info$fitted
Bayesian network parameters
Parameters of node A (multinomial distribution)
Conditional probability table:
a b c
0.3330001 0.3335999 0.3334000
Parameters of node B (multinomial distribution)
Conditional probability table:
A
B a b c
a 0.85635175 0.44721945 0.11761962
b 0.02468642 0.21884782 0.09242969
c 0.11896184 0.33393273 0.78995069
Parameters of node C (multinomial distribution)
Conditional probability table:
a b c
0.74451776 0.20502566 0.05045658
Parameters of node D (multinomial distribution)
Conditional probability table:
, , C = a
A
D a b c
a 0.80858892 0.09078470 0.10242053
b 0.08797008 0.81200547 0.10482031
c 0.10344100 0.09720983 0.79275916
, , C = b
A
D a b c
a 0.17002308 0.88870846 0.24245484
b 0.13320747 0.06733788 0.50903175
c 0.69676946 0.04395366 0.24851341
, , C = c
A
D a b c
a 0.42844562 0.32100457 0.13818027
b 0.21444298 0.39497717 0.44812925
c 0.35711140 0.28401826 0.41369048
Parameters of node E (multinomial distribution)
Conditional probability table:
, , F = a
B
E a b c
a 0.80625202 0.19679592 0.11093100
b 0.09479614 0.18041143 0.11287621
c 0.09895184 0.62279265 0.77619279
, , F = b
B
E a b c
a 0.38797502 0.30198128 0.23085927
b 0.50984599 0.40387546 0.52295823
c 0.10217899 0.29414326 0.24618250
Parameters of node F (multinomial distribution)
Conditional probability table:
a b
0.5071986 0.4928014
With completely unobserved (latent) variables
If the data contain a latent variable, one that we do not observe in any observation, the E step in the first iteration
fails because it cannot fit a Bayesian network to impute the missing values. (If all variables are at least partially
observed, structural.em() uses locally complete observations for this purpose; bn.fit() does
the same, as illustrated here.)
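For example, with the partially observed data from the previous section, a plain parameter-learning call along the lines of the sketch below works because each node's parameters can be estimated from its locally complete observations (the network structure net used here is just an arbitrary placeholder).

# parameter learning from partially observed data: each local distribution
# is estimated from the observations that are complete for that node.
net = model2network("[A][C][F][B|A][D|A:C][E|B:F]")
fitted = bn.fit(net, data = incomplete.data)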
> incomplete.data[, "A"] = factor(rep(NA, nrow(incomplete.data)), levels = levels(incomplete.data[, "A"])) > structural.em(incomplete.data)
## Warning in check.data(x, allow.levels = TRUE, allow.missing = TRUE,
## warn.if.no.missing = TRUE, : at least one variable in the data has no observed
## values.
## Error: the data contain latent variables, so the 'start' argument must be a 'bn.fit' object.
As the error message suggests, we can side-step this issue by providing a bn.fit object ourselves via
the start argument: it will be used to perform the initial imputation.
> start.dag = empty.graph(names(incomplete.data))
> cptA = matrix(c(0.3336, 0.3340, 0.3324), ncol = 3, dimnames = list(NULL, c("a", "b", "c")))
> cptB = matrix(c(0.4724, 0.1136, 0.4140), ncol = 3, dimnames = list(NULL, c("a", "b", "c")))
> cptC = matrix(c(0.7434, 0.2048, 0.0518), ncol = 3, dimnames = list(NULL, c("a", "b", "c")))
> cptD = matrix(c(0.351, 0.314, 0.335), ncol = 3, dimnames = list(NULL, c("a", "b", "c")))
> cptE = matrix(c(0.3882, 0.2986, 0.3132), ncol = 3, dimnames = list(NULL, c("a", "b", "c")))
> cptF = matrix(c(0.5018, 0.4982), ncol = 2, dimnames = list(NULL, c("a", "b")))
> start = custom.fit(start.dag, list(A = cptA, B = cptB, C = cptC, D = cptD, E = cptE, F = cptF))
> dag = structural.em(incomplete.data, start = start, max.iter = 3)
## Warning in check.data(x, allow.levels = TRUE, allow.missing = TRUE,
## warn.if.no.missing = TRUE, : at least one variable in the data has no observed
## values.
## Warning in check.data(x, allow.missing = TRUE): variable A in the data has
## levels that are not observed in the data.
> dag
Bayesian network learned from Missing Data
model:
[A][B][C][F][D|B:C][E|B:F]
nodes: 6
arcs: 4
undirected arcs: 0
directed arcs: 4
average markov blanket size: 2.00
average neighbourhood size: 1.33
average branching factor: 0.67
learning algorithm: Structural EM
score-based method: Hill-Climbing
parameter learning method: Maximum Likelihood (disc.)
imputation method:
Posterior Expectation (Likelihood Weighting)
penalization coefficient: 4.258597
tests used in the learning procedure: 83
optimized: TRUE
Unfortunately, the latent variable will almost certainly end up as an isolated node unless we connect it to at
least some nodes that are partially observed: the noisiness of Monte Carlo inference can easily overwhelm the
dependence relationships we encode in the network passed to the start argument.
> start.dag = model2network("[A][B|A][C][D][E][F]")
> cptA = matrix(c(0.3336, 0.3340, 0.3324), ncol = 3, dimnames = list(NULL, c("a", "b", "c")))
> cptB = matrix(c(0.856, 0.025, 0.118, 0.444, 0.221, 0.334, 0.114, 0.094, 0.790), nrow = 3, ncol = 3,
+          dimnames = list(B = c("a", "b", "c"), A = c("a", "b", "c")))
> start = custom.fit(start.dag, list(A = cptA, B = cptB, C = cptC, D = cptD, E = cptE, F = cptF))
> dag = structural.em(incomplete.data, start = start, max.iter = 3)
## Warning in check.data(x, allow.levels = TRUE, allow.missing = TRUE,
## warn.if.no.missing = TRUE, : at least one variable in the data has no observed
## values.
> dag
Bayesian network learned from Missing Data
model:
[A][C][F][B|A][D|B:C][E|B:F]
nodes: 6
arcs: 5
undirected arcs: 0
directed arcs: 5
average markov blanket size: 2.33
average neighbourhood size: 1.67
average branching factor: 0.83
learning algorithm: Structural EM
score-based method: Hill-Climbing
parameter learning method: Maximum Likelihood (disc.)
imputation method:
Posterior Expectation (Likelihood Weighting)
penalization coefficient: 4.258597
tests used in the learning procedure: 94
optimized: TRUE
Passing a whitelist to the structure learning algorithm is the simplest way to keep the latent variable connected to the rest of the network.
> start.dag = model2network("[A][B|A][C][D][E][F]")
> cptA = matrix(c(0.3336, 0.3340, 0.3324), ncol = 3, dimnames = list(NULL, c("a", "b", "c")))
> cptB = matrix(c(0.856, 0.025, 0.118, 0.444, 0.221, 0.334, 0.114, 0.094, 0.790), nrow = 3, ncol = 3,
+          dimnames = list(B = c("a", "b", "c"), A = c("a", "b", "c")))
> start = custom.fit(start.dag, list(A = cptA, B = cptB, C = cptC, D = cptD, E = cptE, F = cptF))
> dag = structural.em(incomplete.data,
+         maximize.args = list(whitelist = data.frame(from = "A", to = "B")),
+         start = start, max.iter = 3)
## Warning in check.data(x, allow.levels = TRUE, allow.missing = TRUE,
## warn.if.no.missing = TRUE, : at least one variable in the data has no observed
## values.
> dag
Bayesian network learned from Missing Data
model:
[A][C][F][B|A][D|A:C][E|A:F]
nodes: 6
arcs: 5
undirected arcs: 0
directed arcs: 5
average markov blanket size: 2.33
average neighbourhood size: 1.67
average branching factor: 0.83
learning algorithm: Structural EM
score-based method: Hill-Climbing
parameter learning method: Maximum Likelihood (disc.)
imputation method:
Posterior Expectation (Likelihood Weighting)
penalization coefficient: 4.258597
tests used in the learning procedure: 91
optimized: TRUE
Exact inference does not have this issue because it has no stochastic noise: the imputed values are deterministic given the observed values in each observation.
> dag = structural.em(incomplete.data, start = start, max.iter = 3, impute = "exact")
## Warning in check.data(x, allow.levels = TRUE, allow.missing = TRUE,
## warn.if.no.missing = TRUE, : at least one variable in the data has no observed
## values.
> dag
Bayesian network learned from Missing Data
model:
[A][C][F][B|A][D|A:C][E|A:F]
nodes: 6
arcs: 5
undirected arcs: 0
directed arcs: 5
average markov blanket size: 2.33
average neighbourhood size: 1.67
average branching factor: 0.83
learning algorithm: Structural EM
score-based method: Hill-Climbing
parameter learning method: Maximum Likelihood (disc.)
imputation method: Exact Inference
penalization coefficient: 4.258597
tests used in the learning procedure: 94
optimized: TRUE
The Node-Average Likelihood
Another approach is to use the node-average likelihood score, originally proposed by Nikolay Balov
(link) and later extended by Tjebbe Bodewes and Marco Scutari
(link). The key idea behind this score is that scoring local distributions
by computing penalized likelihood scores on locally complete data gives consistency and identifiability as long
as the penalty coefficient is larger than that of BIC. In practice, this means we can plug score = "pnal"
(discrete networks), score = "pnal-g" (Gaussian networks) or score = "pnal-cg" (conditional
Gaussian networks) into any score-based structure learning algorithm and use it without modification. The penalty
coefficient is controlled by the k argument, as in BIC and AIC.
> incomplete.data = learning.test
> for (col in seq(ncol(incomplete.data)))
+   incomplete.data[sample(nrow(incomplete.data), 100), col] = NA
> dag = hc(incomplete.data, score = "pnal", k = 10)
> dag
Bayesian network learned via Score-based methods
model:
[A][C][F][B|A][D|A:C][E|B:F]
nodes: 6
arcs: 5
undirected arcs: 0
directed arcs: 5
average markov blanket size: 2.33
average neighbourhood size: 1.67
average branching factor: 0.83
learning algorithm: Hill-Climbing
score:
Penalized Node-Average Likelihood (disc.)
penalization coefficient: 10
tests used in the learning procedure: 40
optimized: TRUE
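For reference, BIC's penalty coefficient is log(n)/2, which for the 5000 observations in learning.test is about 4.26 (the penalization coefficient reported in the earlier Structural EM outputs); the k = 10 used above is therefore comfortably larger, as required.

# BIC's penalty coefficient for these data: log(n) / 2, about 4.26; pnal
# needs a larger coefficient, and k = 10 satisfies that.
log(nrow(learning.test)) / 2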
The corresponding (unpenalized) log-likelihood scores score = "nal" (discrete networks),
score = "nal-g" (Gaussian networks) and score = "nal-cg" (conditional Gaussian networks)
will always overfit and learn complete graphs like their complete-data equivalents.
> dag = hc(incomplete.data, score = "nal")
> dag
Bayesian network learned via Score-based methods
model:
[A][F|A][E|A:F][B|A:E:F][D|A:B:E:F][C|A:B:D:E:F]
nodes: 6
arcs: 15
undirected arcs: 0
directed arcs: 15
average markov blanket size: 5.00
average neighbourhood size: 5.00
average branching factor: 2.50
learning algorithm: Hill-Climbing
score: Node-Average Likelihood (disc.)
tests used in the learning procedure: 120
optimized: TRUE
Fri Aug 1 22:53:22 2025 with bnlearn 5.1 and R version 4.5.0 (2025-04-11).