I think we should step back and explain a little theory here, because newbie doesn't seem to be getting exactly what went on in the mean example.

You should be aware that the mean is simply the sum of the data points divided by the number of points in the data set. Right?

With data from a frequency table, we don't have the exact values. Instead, we know only the total count, which is the sum of the frequencies over all the bins (= 212 here). To approximate the values, we take the *midpoint* of each bin and let it stand in for every value in that bin. For instance, a bin of 30 to 40 with frequency 10 might actually contain the values 30, 30, 31, 34, 36, 38, 40, 39, 35, 35. We don't know the details. Instead, we simply take the midpoint, 35, and in our ignorance use it to represent *all* the values in the bin. Thus, that class contributes 35 times 10 = 350 to the sum of the data points.
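To see how good that stand-in is, here's a quick sketch. The ten "actual" values are made up for illustration (the same ones from the 30-to-40 bin example above), since the real values are exactly what a frequency table hides:

```python
# Hypothetical raw values for a 30-40 bin with frequency 10
actual = [30, 30, 31, 34, 36, 38, 40, 39, 35, 35]
true_sum = sum(actual)          # the sum we can't actually see

# Our approximation: midpoint times frequency
midpoint, frequency = 35, 10
approx_sum = midpoint * frequency

print(true_sum, approx_sum)     # 348 vs 350: close, not exact
```

The two sums differ slightly, which is exactly the price we pay for not knowing the individual values.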

In the example you provided, and the way Dragan showed how to do it, we summed \(frequency_j \times midpoint_j\) over the j different classes and divided by the total count. Thus, we were simply calculating the mean as always. So how do we calculate the standard deviation? You'll need to create a stand-in data set: for each class, write down its midpoint repeated frequency-many times, giving a list as long as the total count. That list is your input to the usual standard deviation formula. I'm sure your calculator has some way of doing that work for you, but this is ultimately just arithmetic you should be able to do yourself, without any tools.
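The whole procedure can be sketched in a few lines. The (midpoint, frequency) pairs below are hypothetical, not the original poster's table; I only made the frequencies sum to 212 to match the count mentioned above:

```python
import statistics

# Hypothetical frequency table as (midpoint, frequency) pairs;
# frequencies chosen so the total count is 212
table = [(15, 40), (25, 60), (35, 70), (45, 42)]

# Expand each class into `frequency` copies of its midpoint
data = [m for m, f in table for _ in range(f)]
assert len(data) == 212

# Mean and standard deviation of the stand-in data set
mean = sum(data) / len(data)
sd = statistics.pstdev(data)   # population SD; use stdev() for sample SD

print(mean, sd)
```

Whether you want `pstdev` or `stdev` depends on whether the 212 observations are the whole population or a sample; check which convention your course uses.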