Causal models for qualitative and mixed methods inference
Exercise 1: Causality
1 Part 1: Interpreting types
(20 mins)
Look at the “Types handout”.
- Make sure that you can interpret what the types \(M.xx\) and \(Y.xxxx\) mean.
- Select 4 of the combined types (numbered 1 - 64) and describe:
- Does \(X\) affect \(Y\) for this type?
- Does \(X\) have a direct effect, or an indirect effect via \(M\)? Or both? Or neither? Or sometimes one and sometimes the other?
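A minimal sketch of how these types can be listed in CausalQueries, assuming the model from the handout in which \(X\) affects \(Y\) both directly and via \(M\):

```r
library(CausalQueries)

# X affects Y directly and indirectly via M
model <- make_model("X -> M -> Y <- X")

# List the nodal types: M has 4 types (M.xx) and Y, with two parents,
# has 16 (Y.xxxx), giving the 4 x 16 = 64 combined types in the handout
inspect(model, "nodal_types")
```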
2 Part 2: Make your own model in CausalQueries
2.1 Define the model
(30 mins)
Select an outcome variable of interest.
Decide on 2–3 causal variables that might influence the outcome (causes of primary theoretical interest, moderators, additional causes in the literature, etc.).
Draw your model by hand – before you do anything in CausalQueries.
Consider also including:
- a key mediating variable that should be represented in your model
- a key moderating variable that should be represented in your model
- any indications of unmeasured confounding between nodes in your model
Be prepared to talk the class through the choices you made in building the model. What assumptions did you intend to build in or leave out, and why, given background knowledge of the domain?
2.2 In code
(10 mins)
Write your model as a causal statement (of the form `X -> Y`).
In CausalQueries, create a model that connects the causal variables to the outcome variable.
Now plot the model (plot_model(model)) and save your code in a .qmd file.
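As a sketch of what such a code chunk might look like, assuming two causes and a mediator (the variable names `X1`, `X2`, `M`, `Y` are placeholders; substitute your own):

```r
library(CausalQueries)

# Hypothetical model: X1 works directly and through mediator M;
# X2 is an additional direct cause of Y
model <- make_model("X1 -> M -> Y; X1 -> Y; X2 -> Y")

# Plot the implied DAG
plot_model(model)
```

Saving this chunk in a `.qmd` file lets you re-render the model as you refine it.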
2.3 Refine the model
Inspect the collection of implied types via `my_model |> inspect("parameters_df")`.
- Select one type for each node (e.g. a type for \(X\), for \(M\), and for \(Y\)) and describe what would actually happen in that case.
- Are there restrictions you would want to impose on any types? What would they be? Which types might you want to rule out?
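If you do want to rule out types, a sketch using `set_restrictions` on a simple `X -> Y` model (the monotonicity assumption here is illustrative, not a recommendation):

```r
library(CausalQueries)

model <- make_model("X -> Y")

# Rule out "adverse" types: cases in which Y would be lower
# under X = 1 than under X = 0 (a monotonicity restriction)
model <- set_restrictions(model, decreasing("X", "Y"))

# The same restriction can be written as a causal statement:
# model <- set_restrictions(model, "Y[X=1] < Y[X=0]")

# Confirm which nodal types remain
inspect(model, "nodal_types")
```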
2.4 Plot the model
- Plot your model using `plot_model`.
- Assess whether there are more intuitive placements for the nodes and adjust using `x_coord` and `y_coord`. Make sure to give coordinates in the order of the model nodes (check via `model$nodes`).
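A sketch of adjusting node placement, assuming the three-node model from Part 1 (the particular coordinates are arbitrary):

```r
library(CausalQueries)

model <- make_model("X -> M -> Y <- X")

# Check node order first: coordinates must be given in this order
model$nodes

# Place X and Y on one level, with M raised between them
plot_model(model, x_coord = c(1, 2, 3), y_coord = c(1, 2, 1))
```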
2.5 Pointers
Keep in mind
- Missing arrows make the strongest statements.
- Are there direct effects you are unintentionally excluding?
- Is there potential confounding that you are excluding?
- You are representing background beliefs about a domain, not just a specific argument.
- There may be causal connections outside a given argument that need representing (e.g., potential confounding, direct effects).
- You do not need all the complexity.
- Possible causes can be left out as long as they do not affect more than one node included in the graph.
- Mediating steps can be left out or collapsed.
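Since the pointers above mention potential confounding, a sketch of how unmeasured confounding is written in CausalQueries, using a bidirected edge:

```r
library(CausalQueries)

# An unobserved common cause of X and Y is represented
# by the bidirected edge X <-> Y
model <- make_model("X -> Y; X <-> Y")

plot_model(model)
```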