Probability of N requests executing on a server at the same time


I would like to find a layman's approach to estimate the probability of N requests executing on a server at the same time. In my model, there are 50 users making 3 requests each (150 total), randomly distributed across a 540-minute work day. Each request spends 1 minute executing on the server. Individual users are limited to making 1 request at a time.

I've estimated the average level of concurrency to be about 0.28 (150 request-minutes of work spread over a 540-minute day), reflecting the fact that the server will not be executing any requests most of the time (something like 3.6 minutes between request arrivals on average). What I'd like to know, however, is the probability that any 2, 3, or 4 requests will be executing simultaneously.
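To make the question concrete, here is a rough back-of-the-envelope sketch I've been playing with. It assumes request start times are independent and uniform over the day (so it ignores the 1-request-per-user limit), in which case the number of requests in service at a random instant is approximately Poisson with mean equal to the average concurrency:

```python
import math

# Assumption: 150 one-minute requests with independent, uniformly
# distributed start times over a 540-minute day. Under that assumption
# the number executing at a random instant is roughly Poisson with
# mean lam = (total request-minutes) / (minutes in the day).
lam = 150 * 1.0 / 540  # average concurrency, about 0.28

def p_concurrent(k: int) -> float:
    """Approximate P(exactly k requests executing at a random instant)."""
    return math.exp(-lam) * lam ** k / math.factorial(k)

for k in range(5):
    print(k, round(p_concurrent(k), 4))
```

Under this approximation, P(0 requests executing) comes out near e^(-0.28), i.e. roughly three quarters of the time the server is idle, and the probabilities for 2, 3, 4 simultaneous requests fall off quickly. I'm not sure this approximation is valid, which is partly what I'm asking.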

I've looked at books on queuing theory and capacity planning, but haven't found anything that frames the question in this way. The following forum discussion on Oracle asks the same type of question, but the answer is not clear to me (short of writing a simulator):
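Regarding the simulator option: I did sketch a quick Monte Carlo check under the same simplifying assumptions as above (independent uniform start times, no per-user serialization), mainly to sanity-check any closed-form answer someone might suggest:

```python
import random

def simulate(n_requests=150, day=540.0, service=1.0, trials=3000, seed=42):
    """Monte Carlo sketch: drop n_requests service-minute intervals
    uniformly into the day, then sample the concurrency observed at a
    random instant in each trial. Returns {k: estimated P(k concurrent)}.
    Assumptions: independent start times, no 1-request-per-user limit."""
    rng = random.Random(seed)
    counts = {}
    for _ in range(trials):
        starts = [rng.uniform(0, day - service) for _ in range(n_requests)]
        t = rng.uniform(0, day)  # random observation instant
        k = sum(1 for s in starts if s <= t < s + service)
        counts[k] = counts.get(k, 0) + 1
    return {k: v / trials for k, v in sorted(counts.items())}

print(simulate())
```

In my runs the simulated distribution looks close to a Poisson with mean 150/540, but I'd prefer an analytical argument (or a pointer to one) over trusting the simulation.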

Please let me know if you have any resources for me to look at. If you want to give examples, feel free to simplify the problem or make assumptions (but please make sure to explain what they were).

Thanks in advance,