Estimating standard error by formula vs. computing it by simulation

Dear All,

I am reading 'Statistics' by Freedman et al. I have been working through the chapters but got stuck on the standard error (SE) topic in Chapter 17. My question is about the solved box-model example for calculating the standard error in that chapter.

What I understood from the book:
The book gives a direct formula on page 291 for the standard error of the sum of the draws:
SE of the sum = sqrt(number of draws) x (SD of box) ---> Eqn. 1
But from theory, I understand that
SE of a statistic = standard deviation of that statistic over many different samples ---> Eqn. 2
To be specific,
SE of the "sum" = standard deviation of the "sums" of many different samples drawn from the same box ---> Eqn. 3

If Eqn. 1 is used, the calculation is straightforward, as shown in the worked example on page 292: the SE comes out to 10.
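To make sure I am reading Eqn. 1 correctly, here is a minimal check of that arithmetic (my own sketch, using the box {0, 2, 3, 4, 6} and 25 draws as in the example):

```cpp
#include <cmath>
#include <cstdio>

// Check the arithmetic behind Eqn. 1 for the example box:
// the box {0, 2, 3, 4, 6} has mean 3 and SD 2, so with 25 draws
// the SE of the sum should be sqrt(25) * 2 = 10.
int main() {
    const double box[] = {0, 2, 3, 4, 6};
    const int n = 5;
    const int draws = 25;

    double mean = 0;
    for (int i = 0; i < n; ++i) mean += box[i];
    mean /= n;

    double var = 0;
    for (int i = 0; i < n; ++i) var += (box[i] - mean) * (box[i] - mean);
    var /= n;                              // SD of the box (population SD, as in the book)
    double sd_box = std::sqrt(var);

    double se = std::sqrt(static_cast<double>(draws)) * sd_box;  // Eqn. 1
    std::printf("SD of box = %.4f, SE of sum = %.4f\n", sd_box, se);
    return 0;
}
```

This prints SD of box = 2 and SE of sum = 10, matching the book's example.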

What I did:
I recreated the box model to generate box sums and computed the standard error with a C++ program. I used the same box {0, 2, 3, 4, 6} and drew 25 numbers at random from it (with replacement), recording their sum. I repeated this 100 times, so I ended up with 100 sums (each one the sum of 25 random draws from the box). The standard deviation of these 100 sums should then match the value obtained from Eqn. 1 (which is 10). Unfortunately, it does not: the computed SE and the formula SE are different.
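For reference, here is a simplified sketch of what my program does (a reconstruction for this post, not my exact code; I have assumed draws with replacement):

```cpp
#include <cmath>
#include <cstdio>
#include <random>
#include <vector>

// Draw 25 tickets at random (with replacement) from the box {0, 2, 3, 4, 6},
// record the sum, repeat 100 times, then take the SD of those 100 sums.
int main() {
    const std::vector<double> box = {0, 2, 3, 4, 6};
    const int draws_per_sum = 25;
    const int repetitions = 100;

    std::mt19937 rng(std::random_device{}());
    std::uniform_int_distribution<std::size_t> pick(0, box.size() - 1);

    std::vector<double> sums;
    sums.reserve(repetitions);
    for (int r = 0; r < repetitions; ++r) {
        double sum = 0;
        for (int d = 0; d < draws_per_sum; ++d) sum += box[pick(rng)];
        sums.push_back(sum);
    }

    // SD of the 100 sums -- the quantity I am comparing against the
    // formula value of 10 from Eqn. 1.
    double mean = 0;
    for (double s : sums) mean += s;
    mean /= sums.size();

    double var = 0;
    for (double s : sums) var += (s - mean) * (s - mean);
    var /= sums.size();

    std::printf("mean of sums = %.2f, SD of sums = %.2f\n", mean, std::sqrt(var));
    return 0;
}
```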

Main issue:
The standard error I get from the computer program and the one from the formula do not match. Can anyone shed some light on this? I might be missing something.

I hope I have made myself clear. Could you please help with this, or point me to suitable resources?