Dealing with high Pareto k values

I'm using the new R package 'ubms', which fits occupancy and species distribution models in a Bayesian framework via Stan. My current 'best' model based on LOOIC is the following:

fit <- stan_occuRN(~v.dens ~v.type + NDVI + lat + village + mainroad + (1|site),
                   data = UFO3, chains = 3, iter = 500)

The covariates are transformed to improve normality and z-standardised. The problem I am having is that including (1|site) as a random effect in this model, or in any other model fitted to the same data, produces a warning that the Pareto k values are too high.
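For reference, the transformation and standardisation were done along these lines (the log transform and the `.z` suffix here are illustrative; the actual transforms used for each covariate may differ):

```r
# Illustrative sketch only: transform a skewed covariate, then z-standardise.
# scale() centres to mean 0 and scales to SD 1; as.numeric() drops the matrix class.
UFO3$NDVI.z <- as.numeric(scale(log(UFO3$NDVI + 1)))
UFO3$lat.z  <- as.numeric(scale(UFO3$lat))
```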

Models without the random effect have acceptable Pareto k values, but including it is important to account for the lack of spatial independence between the sites. From what I have read, including a random effect can cause high k values because there isn't enough information per observation.
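The warning comes from the LOO cross-validation diagnostics that ubms computes. A sketch of how I inspect which observations are problematic, assuming `fit` is the fitted stan_occuRN model above:

```r
library(loo)

# loo() on a ubms fit returns a loo object carrying Pareto k diagnostics
fit_loo <- loo(fit)
print(fit_loo)

# Count of observations in each Pareto k range
# (k <= 0.5 good, 0.5 < k <= 0.7 ok, k > 0.7 problematic)
pareto_k_table(fit_loo)

# Indices of the observations with k above 0.7
pareto_k_ids(fit_loo, threshold = 0.7)
```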

My questions are:
  1. How concerned should I be about the high Pareto k values, and what can I do to improve them?
  2. Was transforming the covariates the right thing to do? They were very skewed before transformation.
  3. The error margins for the estimates are very wide; is this related to transforming the covariates? Is there anything I can do to reduce them?