I have a practical problem in handling an empirical dataset for which a normal distribution cannot be assumed.

Let's say my dataset of values, each with its own standard deviation, has a distribution that fails tests of equivalence (such as a chi-square test). I wish to express the values as a range. It is common practice to express a range from minimum to maximum by simply taking the lowest and highest data points, ignoring the error on each datum (e.g. 1000±100 to 2000±50 would be expressed as the range 1000-2000). That seems inappropriate to me, but so does a range that folds in the standard deviations (e.g. 1000±100 to 2000±50 would be expressed as the range 900-2050). Instead, shouldn't I consider multiple data points when determining the minimum and maximum of the dataset, and use the subsamples that yield the lowest and highest mean values while still passing my test of equivalence?
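To make the two conventional options concrete, here is a small sketch in plain Python using hypothetical numbers (only the two endpoints match my example above; the interior values are made up):

```python
# Hypothetical dataset: values with individual standard deviations.
values = [1000, 1450, 1700, 2000]
sigmas = [100, 80, 60, 50]

# Option 1: min/max of the values, ignoring the errors.
naive_range = (min(values), max(values))   # -> (1000, 2000)

# Option 2: widen each endpoint by its own standard deviation.
lows = [v - s for v, s in zip(values, sigmas)]
highs = [v + s for v, s in zip(values, sigmas)]
sigma_range = (min(lows), max(highs))      # -> (900, 2050)

print(naive_range, sigma_range)
```

The third option I am asking about would instead search over subsamples of the data, testing each candidate subsample for equivalence and reporting the means of the acceptable subsamples with the lowest and highest means.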

Apologies if the problem is not expressed clearly; I can provide a worked example in Excel if required. Any advice would be greatly appreciated.

Thanks to all!