Testing variables for confounding

#1
Dear members,
I would appreciate your advice regarding testing variables for possible confounding.
I have 12 candidate independent variables on ordinal and nominal levels. My dependent variable is functional dependence (mRS≥3). Before I perform the regression analyses, I would like to test whether there are possible confounders among these 12 variables.

My question is HOW TO test whether these 12 variables are possible confounders in the relationship "cognitive functions" -------------> functional dependence (mRS≥3).

Thank you very much for your answer and your time.
 

Miner

TS Contributor
#2
Run a correlation matrix on the independent variables. A correlation coefficient of 0 is ideal, indicating an orthogonal relationship between variables. A coefficient of 1 would indicate perfect confounding between the variables, and coefficients in between indicate partial confounding.
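For what it's worth, that correlation matrix can be sketched in a few lines of plain Python using Spearman rank correlation, which is appropriate for ordinal variables. This is only a sketch under my own naming, not a substitute for your statistics package:

```python
import math

def ranks(xs):
    # rank the values, assigning average ranks to ties (needed for Spearman)
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average of the tied rank positions
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def pearson(a, b):
    # Pearson correlation; assumes neither column is constant
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = math.sqrt(sum((x - ma) ** 2 for x in a))
    sb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (sa * sb)

def spearman_matrix(columns):
    # Spearman = Pearson correlation of the ranks; returns a dict keyed by variable pairs
    ranked = {name: ranks(vals) for name, vals in columns.items()}
    names = list(columns)
    return {(i, j): pearson(ranked[i], ranked[j]) for i in names for j in names}
```

In practice you would use the equivalent one-liner in your software (e.g. a Spearman option on a correlation procedure), but the sketch shows what is being computed.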
 
#3
I have already run the correlations to check for multicollinearity.
I am trying to follow Kleinbaum and Klein, Logistic Regression (Statistics for Biology and Health), 2010, Chapter 6.
They recommend the following steps before building the regression models:

  • Identify possible variables from the literature
  • Check for possible confounders
  • Check for multicollinearity
  • Influential observations – highly recommended; this partly means checking for outliers and seeing whether the results change because of them. It is also important to check whether the estimated regression coefficients change when these observations are removed compared with when they are retained in the data.
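As a toy illustration of that last step, here is a leave-one-out check in plain Python for a simple one-predictor linear regression: refit the model with each observation deleted and see how much the slope moves. For a real logistic model you would use your software's dfbeta or Cook's distance diagnostics instead; the function names here are my own:

```python
def slope_intercept(xs, ys):
    # ordinary least squares fit of y = a + b*x
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    return b, my - b * mx

def leave_one_out_changes(xs, ys):
    # refit with each observation deleted; a large slope change flags an influential point
    full_b, _ = slope_intercept(xs, ys)
    changes = []
    for i in range(len(xs)):
        b, _ = slope_intercept(xs[:i] + xs[i + 1:], ys[:i] + ys[i + 1:])
        changes.append(abs(b - full_b))
    return full_b, changes
```

Running this on data with one extreme point would show that point producing by far the largest slope change, which is exactly what the "do the coefficients change when the person is removed?" check is looking for.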
Unfortunately I am stuck on the confounder part.

Or maybe I'm making a mistake here?
 

Miner

TS Contributor
#4
I'm coming at confounding from an experimental design perspective where confounding occurs as you fractionate factorial designs. Since the authors are distinguishing between confounding and multicollinearity, they may mean something different by that term such as confounding with a variable that is not part of the experiment. Do they define it in the text?
 

hlsmith

Not a robit
#5
The traditional way to examine for confounders was to add and remove a variable in the model and see whether it changed the absolute relationship between the IV of interest and the DV by more than 10%. Many people frown on this approach now, since scenarios exist where confounding is present but the procedure does not pick it up (e.g., violations of the faithfulness assumption, or when a variable is not a confounder but an effect of the DV and IV variable(s)). The current approach is to use your understanding of the background study context between the variables and, if reasonable suspicion exists, control for the variable (the potential confounder) in the model. It should be noted, though, that results from that model are then only generalizable to comparable models adjusting for those covariates, so the results are model dependent (conditional), which most people don't acknowledge or simply forget about.
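A minimal sketch of that 10% change-in-estimate rule, using ordinary least squares instead of logistic regression so it stays self-contained: the adjusted slope is obtained by residualizing both the IV and the DV on the covariate (the Frisch–Waugh trick). The function names and the exact placement of the 0.10 threshold are my own assumptions:

```python
def ols_slope(xs, ys):
    # slope of the least-squares line of ys on xs
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)

def residuals(xs, ys):
    # residuals of ys after regressing out xs (with intercept)
    b = ols_slope(xs, ys)
    a = sum(ys) / len(ys) - b * sum(xs) / len(xs)
    return [y - (a + b * x) for x, y in zip(xs, ys)]

def ten_percent_rule(iv, dv, cov, threshold=0.10):
    # crude slope of DV on IV vs. the slope adjusted for the covariate;
    # flag the covariate as a potential confounder if the estimate moves by > threshold
    crude = ols_slope(iv, dv)
    adjusted = ols_slope(residuals(cov, iv), residuals(cov, dv))  # Frisch-Waugh
    return crude, adjusted, abs(adjusted - crude) / abs(crude) > threshold
```

On data where the covariate drives both IV and DV, the crude and adjusted slopes diverge sharply and the rule flags the covariate; as noted above, the rule can still miss real confounding, so it should supplement, not replace, subject-matter reasoning.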