I was about to cite the same passage from Allison's "Logistic Regression Using SAS" about the problems with the Hosmer and Lemeshow test. Also, I've heard that imputing the mean for missing data is poor practice, and that imputing values predicted from the data you do have can be better.
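To make that last point concrete, here is a toy sketch (not from the original post; the data and seed are made up) comparing plain mean imputation with regression-based imputation, where the missing values are predicted from an observed covariate:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical toy data: x fully observed, y missing for every 5th row
x = rng.normal(size=100)
y = 2.0 * x + rng.normal(scale=0.5, size=100)
missing = np.zeros(100, dtype=bool)
missing[::5] = True

# Mean imputation ignores the relationship between x and y
y_mean_imp = y.copy()
y_mean_imp[missing] = y[~missing].mean()

# Regression imputation: fit y ~ x on the observed rows, predict the rest
A = np.column_stack([np.ones((~missing).sum()), x[~missing]])
beta, *_ = np.linalg.lstsq(A, y[~missing], rcond=None)
y_reg_imp = y.copy()
y_reg_imp[missing] = beta[0] + beta[1] * x[missing]

# Since this is simulated, we can compare both against the true values
print(np.abs(y_mean_imp[missing] - y[missing]).mean())  # larger error
print(np.abs(y_reg_imp[missing] - y[missing]).mean())   # smaller error
```

In real data you can't check against the truth like this, but the mechanism is the same: mean imputation flattens every missing value to one number, while regression-based imputation uses what the other variables tell you.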

I'm not sure this will help (I'm learning about logistic regression myself), but maybe try changing your sample from 1% events and 99% non-events to a more even split. That is, keep all 3,500 cases that have the event, then pick a random 3,500 cases from the remaining 346,500. According to "Logistic Regression Using SAS", the intercept will be off, but the slope coefficients will still be unbiased estimates of the slopes in the full population.
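A quick sketch of that subsampling step (the array sizes here just mimic the numbers in the question; the labels are a stand-in for your real outcome variable):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the full data: 350,000 rows, 3,500 of them events
n_total = 350_000
y = np.zeros(n_total, dtype=int)
y[:3_500] = 1

event_idx = np.flatnonzero(y == 1)
nonevent_idx = np.flatnonzero(y == 0)

# Keep every event; draw an equal-sized random sample of non-events
sampled_nonevents = rng.choice(nonevent_idx, size=event_idx.size, replace=False)
keep = np.concatenate([event_idx, sampled_nonevents])

y_sub = y[keep]
print(y_sub.size, y_sub.mean())  # 7000 rows, exactly 50% events
```

If you later need calibrated probabilities rather than just the slopes, the fitted intercept can be adjusted back toward the population event rate (the slopes need no adjustment), but for ranking cases by risk the subsampled model is already usable.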

Also, I didn't fully understand the sensitivity question. Sensitivity below 100% means the model did not predict all of the events correctly at a given cutpoint. Say your cutpoint is a 50% probability: if the event actually happens every single time your model predicts a 50% or greater chance of it happening, you'd have 100% sensitivity. If your cutpoint is very low, say zero, then you'll get 100% sensitivity even if your model is terrible (in which case your specificity would be near zero, since you didn't predict any non-events). I think you wrote that your cutpoint was zero and you still didn't get 100% sensitivity. If that's what happened, I don't see how it's possible. Maybe the problem is clear enough now for someone else to answer...?
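Here's a tiny worked example of what I mean (the `sens_spec` helper and the toy labels/probabilities are mine, not from your model): at a cutpoint of 0, every case is classified as an event, so sensitivity is forced to 100% and specificity to 0%.

```python
import numpy as np

def sens_spec(y_true, p_hat, cutpoint):
    """Sensitivity and specificity when predicting the event
    whenever the model's probability is >= cutpoint."""
    pred = p_hat >= cutpoint
    actual = y_true.astype(bool)
    sens = (pred & actual).sum() / actual.sum()      # events caught
    spec = (~pred & ~actual).sum() / (~actual).sum() # non-events caught
    return sens, spec

y = np.array([1, 1, 0, 0, 0])          # 2 events, 3 non-events
p = np.array([0.9, 0.4, 0.6, 0.2, 0.1])  # model's predicted probabilities

print(sens_spec(y, p, 0.5))  # sensitivity 0.5, specificity ~0.67
print(sens_spec(y, p, 0.0))  # cutpoint 0: sensitivity 1.0, specificity 0.0
```

With a cutpoint of zero, `p_hat >= 0` is true for every case, so no event can be missed. That's why sensitivity below 100% at that cutpoint would be a contradiction.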