Summary method for class "causal_model".
Arguments

- object
  An object of class causal_model, produced using make_model() or update_model().

- include
  A character string specifying additional objects to include in the summary. Defaults to NULL. See Details for the full list of available values.

- ...
  Further arguments passed to or from other methods.

- x
  An object of class summary.causal_model, produced using summary.causal_model().

- what
  A character string specifying which object summaries to print. Defaults to NULL, which prints the causal statement, the specification of nodal types, and a summary of model restrictions. See Details for the full list of available values.
Value

Returns an object of class summary.causal_model that preserves the list structure of the causal_model class and adds the following objects:

- "parents": a list of the parents of all nodes in a model,
- "parameters": a vector of 'true' parameters,
- "parameter_names": a vector of names of parameters,
- "data_types": a list of all data types consistent with the model; for options see "?get_all_data_types",
- "prior_event_probabilities": a vector of prior data (event) probabilities given a parameter vector; for options see "?get_event_probabilities",
- "prior_hyperparameters": a vector of alpha values used to parameterize Dirichlet prior distributions; optionally provide node names to reduce output, e.g. "inspect(prior_hyperparameters, c('M', 'Y'))".
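As a minimal sketch (not run here, since it requires the CausalQueries package), the objects listed above can be accessed from the returned list directly; the `$` accessors shown are an assumption based on the preserved list structure:

```r
# Not run: requires the CausalQueries package.
library(CausalQueries)

model <- make_model("X -> Y")
s <- summary(model)

# The summary preserves the causal_model list structure and adds,
# for example, the parents list and the parameter names:
s$parents
s$parameter_names
```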
Details

In addition to the default objects included in `summary.causal_model`, users can request additional objects via the `include` argument. Note that these additional objects can be large for complex models and can increase computing time. The `include` argument can be a vector of any of the following additional objects:

- "parameter_matrix": a matrix mapping from parameters into causal types,
- "parameter_mapping": a matrix mapping from parameters into data types,
- "causal_types": a data frame listing causal types and the nodal types that produce them,
- "prior_distribution": a data frame of the parameter prior distribution,
- "ambiguities_matrix": a matrix mapping from causal types into data types,
- "type_prior": a matrix of type probabilities using priors.
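A short sketch of requesting optional objects via `include` (not run here; requires the CausalQueries package). The chosen values are from the list above:

```r
# Not run: requires the CausalQueries package.
library(CausalQueries)

model <- make_model("X -> Y")

# Explicitly request larger optional objects that the default
# summary omits, such as the parameter matrix and type prior:
s <- summary(model, include = c("parameter_matrix", "type_prior"))
```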
print.summary.causal_model reports the causal statement, the full specification of nodal types, and a summary of model restrictions. By specifying the `what` argument, users can instead print a custom summary of any set of the following objects contained in the `summary.causal_model`:

- "statement": a character string giving the causal statement,
- "nodes": a list containing the nodes in the model,
- "parents": a list of the parents of all nodes in a model,
- "parents_df": a data frame listing nodes, whether they are root nodes, and the number and names of their parents,
- "parameters": a vector of 'true' parameters,
- "parameters_df": a data frame containing parameter information,
- "parameter_names": a vector of names of parameters,
- "parameter_mapping": a matrix mapping from parameters into data types,
- "parameter_matrix": a matrix mapping from parameters into causal types,
- "causal_types": a data frame listing causal types and the nodal types that produce them,
- "nodal_types": a list of the nodal types of the model,
- "data_types": a list of all data types consistent with the model; for options see `"?get_all_data_types"`,
- "prior_hyperparameters": a vector of alpha values used to parameterize Dirichlet prior distributions; optionally provide node names to reduce output, e.g. `inspect(prior_hyperparameters, c('M', 'Y'))`,
- "prior_distribution": a data frame of the parameter prior distribution,
- "prior_event_probabilities": a vector of data (event) probabilities given a single (specified) parameter vector; for options see `"?get_event_probabilities"`,
- "ambiguities_matrix": a matrix mapping from causal types into data types,
- "type_prior": a matrix of type probabilities using priors,
- "type_distribution": a matrix of type probabilities using posteriors,
- "posterior_distribution": a data frame of the parameter posterior distribution,
- "posterior_event_probabilities": a sample of data (event) probabilities from the posterior,
- "data": a data frame with the data that was used to update the model,
- "stanfit": a `stanfit` object generated by Stan,
- "stan_summary": a `stanfit` summary with updated parameter names,
- "stan_objects": a list of Stan outputs that includes `stanfit`, `data`, and, if requested when updating the model, posterior `event_probabilities` and `type_distribution`.
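A sketch of a custom print via `what` (not run here; requires the CausalQueries package). The values passed are from the list above; posterior-based objects additionally require a model updated with update_model():

```r
# Not run: requires the CausalQueries package.
library(CausalQueries)

model <- make_model("X -> Y")

# Print only selected components of the summary instead of
# the default statement / nodal types / restrictions report:
print(summary(model), what = c("statement", "parents_df"))
```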
Examples
# \donttest{
model <-
make_model("X -> Y")
model |>
update_model(
keep_event_probabilities = TRUE,
keep_fit = TRUE,
data = make_data(model, n = 100)
) |>
summary()
#>
#> SAMPLING FOR MODEL 'simplexes' NOW (CHAIN 1).
#> Chain 1:
#> Chain 1: Gradient evaluation took 2e-05 seconds
#> Chain 1: 1000 transitions using 10 leapfrog steps per transition would take 0.2 seconds.
#> Chain 1: Adjust your expectations accordingly!
#> Chain 1:
#> Chain 1:
#> Chain 1: Iteration: 1 / 2000 [ 0%] (Warmup)
#> Chain 1: Iteration: 200 / 2000 [ 10%] (Warmup)
#> Chain 1: Iteration: 400 / 2000 [ 20%] (Warmup)
#> Chain 1: Iteration: 600 / 2000 [ 30%] (Warmup)
#> Chain 1: Iteration: 800 / 2000 [ 40%] (Warmup)
#> Chain 1: Iteration: 1000 / 2000 [ 50%] (Warmup)
#> Chain 1: Iteration: 1001 / 2000 [ 50%] (Sampling)
#> Chain 1: Iteration: 1200 / 2000 [ 60%] (Sampling)
#> Chain 1: Iteration: 1400 / 2000 [ 70%] (Sampling)
#> Chain 1: Iteration: 1600 / 2000 [ 80%] (Sampling)
#> Chain 1: Iteration: 1800 / 2000 [ 90%] (Sampling)
#> Chain 1: Iteration: 2000 / 2000 [100%] (Sampling)
#> Chain 1:
#> Chain 1: Elapsed Time: 0.283 seconds (Warm-up)
#> Chain 1: 0.265 seconds (Sampling)
#> Chain 1: 0.548 seconds (Total)
#> Chain 1:
#>
#> ... (sampling output for chains 2-4 omitted)
#>
#> Causal statement:
#> X -> Y
#>
#> Nodal types:
#> $X
#> 0 1
#>
#> node position display interpretation
#> 1 X NA X0 X = 0
#> 2 X NA X1 X = 1
#>
#> $Y
#> 00 10 01 11
#>
#> node position display interpretation
#> 1 Y 1 Y[*]* Y | X = 0
#> 2 Y 2 Y*[*] Y | X = 1
#>
#> Number of types by node:
#> X Y
#> 2 4
#>
#> Number of causal types: 8
#>
#> Model has been updated and contains a posterior distribution with
#> 4 chains, each with iter=2000; warmup=1000; thin=1;
#> Use inspect(model, 'stan_summary') to inspect stan summary
#>
#> Note: To pose causal queries of this model use query_model()
#>
# }
# \donttest{
model <-
make_model("X -> Y") |>
update_model(
keep_event_probabilities = TRUE,
keep_fit = TRUE,
data = make_data(model, n = 100)
)
#>
#> SAMPLING FOR MODEL 'simplexes' NOW (CHAIN 1).
#> Chain 1:
#> Chain 1: Gradient evaluation took 2.3e-05 seconds
#> Chain 1: 1000 transitions using 10 leapfrog steps per transition would take 0.23 seconds.
#> Chain 1: Adjust your expectations accordingly!
#> Chain 1:
#> Chain 1:
#> Chain 1: Iteration: 1 / 2000 [ 0%] (Warmup)
#> Chain 1: Iteration: 200 / 2000 [ 10%] (Warmup)
#> Chain 1: Iteration: 400 / 2000 [ 20%] (Warmup)
#> Chain 1: Iteration: 600 / 2000 [ 30%] (Warmup)
#> Chain 1: Iteration: 800 / 2000 [ 40%] (Warmup)
#> Chain 1: Iteration: 1000 / 2000 [ 50%] (Warmup)
#> Chain 1: Iteration: 1001 / 2000 [ 50%] (Sampling)
#> Chain 1: Iteration: 1200 / 2000 [ 60%] (Sampling)
#> Chain 1: Iteration: 1400 / 2000 [ 70%] (Sampling)
#> Chain 1: Iteration: 1600 / 2000 [ 80%] (Sampling)
#> Chain 1: Iteration: 1800 / 2000 [ 90%] (Sampling)
#> Chain 1: Iteration: 2000 / 2000 [100%] (Sampling)
#> Chain 1:
#> Chain 1: Elapsed Time: 0.244 seconds (Warm-up)
#> Chain 1: 0.272 seconds (Sampling)
#> Chain 1: 0.516 seconds (Total)
#> Chain 1:
#>
#> ... (sampling output for chains 2-4 omitted)
print(summary(model), what = "type_distribution")
#>
#> type_distribution
#> Posterior draws of causal types (transformed parameters):
#>
#> Distributions matrix dimensions are
#> 4000 rows (draws) by 8 cols (causal types)
#>
#> mean sd
#> X0.Y00 0.13 0.05
#> X1.Y00 0.21 0.08
#> X0.Y10 0.07 0.05
#> X1.Y10 0.11 0.07
#> X0.Y01 0.11 0.05
#> X1.Y01 0.17 0.08
#> X0.Y11 0.07 0.05
#> X1.Y11 0.12 0.07
print(summary(model), what = "posterior_distribution")
#>
#> posterior_distribution
#> Summary statistics of model parameters posterior distributions:
#>
#> Distributions matrix dimensions are
#> 4000 rows (draws) by 6 cols (parameters)
#>
#> mean sd
#> X.0 0.39 0.05
#> X.1 0.61 0.05
#> Y.00 0.34 0.13
#> Y.10 0.19 0.12
#> Y.01 0.29 0.13
#> Y.11 0.19 0.12
print(summary(model), what = "posterior_event_probabilities")
#>
#> posterior_event_probabilities
#> Posterior draws of event probabilities (transformed parameters):
#>
#> Distributions matrix dimensions are
#> 4000 rows (draws) by 4 cols (events)
#>
#> mean sd
#> X0Y0 0.24 0.04
#> X1Y0 0.32 0.04
#> X0Y1 0.15 0.03
#> X1Y1 0.29 0.04
print(summary(model), what = "data_types")
#>
#> data_types (Data types):
#> Data frame of all possible data (events) given the model:
#>
#> event X Y
#> X0Y0 X0Y0 0 0
#> X1Y0 X1Y0 1 0
#> X0Y1 X0Y1 0 1
#> X1Y1 X1Y1 1 1
#> Y0 Y0 NA 0
#> Y1 Y1 NA 1
#> X0 X0 0 NA
#> X1 X1 1 NA
#> None None NA NA
print(summary(model), what = "ambiguities_matrix")
#> Warning: Model summary does not contain ambiguities_matrix; to include this object use summary with 'include = 'ambiguities_matrix''
print(summary(model), what = "prior_hyperparameters")
#>
#> prior_hyperparameters
#> Alpha parameter values used for Dirichlet prior distributions:
#>
#> X.0 X.1 Y.00 Y.10 Y.01 Y.11
#> 1 1 1 1 1 1
print(summary(model), what = c("statement", "nodes"))
#>
#> Causal statement:
#> X -> Y
#>
#> Nodes:
#> X, Y
print(summary(model), what = "parameters_df")
#>
#> parameters_df
#> Mapping of model parameters to nodal types:
#>
#> param_names: name of parameter
#> node: name of endogeneous node associated
#> with the parameter
#> gen: partial causal ordering of the
#> parameter's node
#> param_set: parameter groupings forming a simplex
#> given: if model has confounding gives
#> conditioning nodal type
#> param_value: parameter values
#> priors: hyperparameters of the prior
#> Dirichlet distribution
#>
#> param_names node gen param_set nodal_type given param_value priors
#> 1 X.0 X 1 X 0 0.50 1
#> 2 X.1 X 1 X 1 0.50 1
#> 3 Y.00 Y 2 Y 00 0.25 1
#> 4 Y.10 Y 2 Y 10 0.25 1
#> 5 Y.01 Y 2 Y 01 0.25 1
#> 6 Y.11 Y 2 Y 11 0.25 1
print(summary(model), what = "data")
#>
#> Data used to update the model:
#>
#> data
#> Data frame dimensions are
#> 100 rows by 2 cols
#>
#>
#> snippet (use grab() to access full 100 x 2 object):
#>
#> X Y
#> 1 0 0
#> 2 0 0
#> 3 0 0
#> 4 0 0
#> 5 0 0
#> 6 0 0
#> 7 0 0
#> 8 0 0
#> 9 0 0
#> 10 0 0
print(summary(model), what = "stanfit")
#>
#> stanfit
#> Stan model summary:
#> Inference for Stan model: simplexes.
#> 4 chains, each with iter=2000; warmup=1000; thin=1;
#> post-warmup draws per chain=1000, total post-warmup draws=4000.
#>
#> mean se_mean sd 2.5% 25% 50% 75% 97.5% n_eff Rhat
#> lambdas[1] 0.39 0.00 0.05 0.30 0.36 0.39 0.42 0.49 1941 1
#> lambdas[2] 0.61 0.00 0.05 0.51 0.58 0.61 0.64 0.70 1941 1
#> lambdas[3] 0.34 0.00 0.13 0.08 0.24 0.35 0.44 0.56 826 1
#> lambdas[4] 0.19 0.00 0.12 0.01 0.09 0.18 0.28 0.42 827 1
#> lambdas[5] 0.29 0.00 0.13 0.04 0.19 0.29 0.38 0.51 842 1
#> lambdas[6] 0.19 0.00 0.12 0.01 0.09 0.19 0.28 0.41 811 1
#> w[1] 0.24 0.00 0.04 0.17 0.22 0.24 0.27 0.33 2250 1
#> w[2] 0.32 0.00 0.04 0.23 0.29 0.32 0.35 0.41 3010 1
#> w[3] 0.15 0.00 0.03 0.09 0.12 0.14 0.17 0.22 2673 1
#> w[4] 0.29 0.00 0.04 0.21 0.26 0.29 0.32 0.38 2897 1
#> types[1] 0.13 0.00 0.05 0.03 0.09 0.13 0.17 0.23 817 1
#> types[2] 0.21 0.00 0.08 0.05 0.15 0.21 0.27 0.35 897 1
#> types[3] 0.07 0.00 0.05 0.00 0.03 0.07 0.11 0.17 870 1
#> types[4] 0.11 0.00 0.07 0.00 0.05 0.11 0.17 0.27 827 1
#> types[5] 0.11 0.00 0.05 0.02 0.07 0.11 0.15 0.21 931 1
#> types[6] 0.17 0.00 0.08 0.03 0.11 0.18 0.23 0.32 830 1
#> types[7] 0.07 0.00 0.05 0.00 0.03 0.07 0.11 0.17 806 1
#> types[8] 0.12 0.00 0.07 0.00 0.05 0.11 0.17 0.26 843 1
#> lp__ -14.55 0.05 1.68 -18.80 -15.40 -14.14 -13.32 -12.44 1002 1
#>
#> Samples were drawn using NUTS(diag_e) at Mon Nov 4 18:00:32 2024.
#> For each parameter, n_eff is a crude measure of effective sample size,
#> and Rhat is the potential scale reduction factor on split chains (at
#> convergence, Rhat=1).
# }