Regression and Monte Carlo Randomisation

#1
Hi all,
I'm currently working with data consisting of 33 groups of 3 people, with roughly 100,000 datapoints per group. There are two variables within each group, call them A and B, where A and B are measurements taken from different people in the group. I perform a regression analysis on these and find that B significantly, and quite strongly, predicts A. This is fine.

I then randomise the pairings of the groups, so that the A and B variables are measured from two different groups, with the hypothesis that B should no longer significantly predict A in the regression. This is where the trouble comes in. Each time I run my analysis (it's custom-written software), the randomisation is different. When I perform the regressions (they're binary logistic), the effect size is smaller but sometimes still significant, and the direction of the effect often changes from run to run.
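
To make the re-pairing concrete, here's a minimal sketch in Python of how one such randomised pairing might be drawn. The NumPy usage and the function name `random_pairing` are just for illustration, not our actual software:

```python
import numpy as np

rng = np.random.default_rng()

def random_pairing(groups):
    """Draw a random re-pairing of group labels in which no group is
    matched with itself, so each group's A is paired with a B measured
    in a different group. (Illustrative sketch only.)"""
    groups = np.asarray(groups)
    perm = rng.permutation(groups)
    while np.any(perm == groups):   # redraw if any group kept its own B
        perm = rng.permutation(groups)
    return dict(zip(groups, perm))  # group -> partner group supplying B
```

(With 33 groups, a random permutation leaves no group paired with itself about 37% of the time, so the redraw loop is cheap.)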

We're thinking this might be because of the sheer volume of data, and I've been told that the correct way to deal with it is a Monte Carlo randomisation method. Could I check that the following is the correct procedure (with a question at the end)?

Repeat at least 1000 times (say, n times):
perform the randomisation
ensure the resulting randomised pairing isn't one we've seen before
perform the regression
note whether the variable is a significant predictor
if it is not significant, add 1 to a counter

Then divide the counter by n to determine whether the variable really is significant.
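
In Python, the whole procedure might look something like the sketch below. To be clear about what's assumed: `statsmodels` stands in for our custom fitting code, and the data layout (dicts of per-group arrays) is invented for illustration:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng()

def monte_carlo_run(A_by_group, B_by_group, n=1000, alpha=0.05):
    """A_by_group / B_by_group: dicts mapping each group id to its array of
    A (binary outcome) and B (predictor) values -- an assumed data layout.
    Returns the proportion of randomised runs in which B is NOT significant."""
    groups = np.array(list(A_by_group))
    seen = set()
    counter = 0
    for _ in range(n):
        # perform the randomisation: re-pair the groups so that no group
        # keeps its own B, and skip any pairing we've already used
        while True:
            perm = rng.permutation(groups)
            key = tuple(perm)
            if key not in seen and not np.any(perm == groups):
                seen.add(key)
                break
        # pair group g's A values with its partner group's B values,
        # trimmed to a common length since group sizes are only roughly equal
        A_parts, B_parts = [], []
        for g, partner in zip(groups, perm):
            m = min(len(A_by_group[g]), len(B_by_group[partner]))
            A_parts.append(A_by_group[g][:m])
            B_parts.append(B_by_group[partner][:m])
        A, B = np.concatenate(A_parts), np.concatenate(B_parts)
        # binary logistic regression of A on B
        fit = sm.Logit(A, sm.add_constant(B)).fit(disp=0)
        if fit.pvalues[1] >= alpha:   # B not a significant predictor this run
            counter += 1
    return counter / n
```

(The number of possible re-pairings of 33 groups is astronomically large, so the haven't-seen-it-before check essentially never loops.)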

My question is: should I be looking at significance when deciding whether to add 1 to the counter, or should I instead be comparing the Exp(B) value of each randomised regression to that of the original, non-randomised regression?

Many Thanks,
Stuart