Gelman suggests putting predictors on a roughly similar scale by subtracting the mean and dividing by two standard deviations, so that a one-unit change in a scaled continuous predictor is comparable to the 0/1 change in an untransformed binary predictor. I have not seen this done much in practice and wanted to ask opinions on its advantages. He suggests not applying it to predictors with only two levels, at least when there are many predictors, and I know some object to doing this at all.
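For concreteness, here is a minimal sketch of that scaling in Python with NumPy; the function name `gelman_scale` is my own label, not anything from Gelman's paper, and it uses the sample standard deviation:

```python
import numpy as np

def gelman_scale(x):
    """Center a predictor and divide by two sample standard deviations.

    After scaling, the variable has mean 0 and standard deviation 0.5,
    so its coefficient is on roughly the same scale as that of an
    untransformed 0/1 binary predictor.
    """
    x = np.asarray(x, dtype=float)
    return (x - x.mean()) / (2 * x.std(ddof=1))

# Example: a continuous predictor
ages = np.array([23.0, 35.0, 41.0, 52.0, 60.0, 29.0])
scaled = gelman_scale(ages)
```

Binary predictors would simply be left as 0/1 under this convention, which is the point of dividing by two standard deviations rather than one.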