Exponential failure rate at high failure count

Hi all,

I'm trying to evaluate Monte Carlo simulations under the assumption of an exponential failure rate model.
Given:
N_TEST: number of tests,
N_FAIL: number of failed tests,
T_TEST: time to gather passed/failed results,
CL: confidence level,
I can calculate two types of parameters:

Nominal parameters:
Unavailability at T_TEST: Qnom = N_FAIL / N_TEST
Failure rate: LAMBDAnom = -log(1 - Qnom) / T_TEST
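
For reference, a minimal MATLAB/Octave sketch of the nominal estimates (the value of N_FAIL here is just an example):

    % Nominal (point) estimates, names as defined above
    N_TEST = 1000;                        % number of tests
    N_FAIL = 100;                         % number of failed tests (example value)
    T_TEST = 1000;                        % time to gather passed/failed results
    Qnom      = N_FAIL / N_TEST;          % unavailability at T_TEST
    LAMBDAnom = -log(1 - Qnom) / T_TEST;  % implied exponential failure rate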

Confidence parameters:
Failure rate: LAMBDAconf = chi2inv(CL, 2*N_FAIL + 2) / (2 * N_TEST * T_TEST)
Unavailability at T_TEST: Qconf = 1 - exp(-T_TEST * LAMBDAconf)
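
And the corresponding confidence calculation, continuing the sketch above (chi2inv needs the Statistics Toolbox in MATLAB, or the statistics package in Octave):

    % One-sided upper confidence bound via the chi-square quantile
    CL = 0.99;                                                       % confidence level
    LAMBDAconf = chi2inv(CL, 2*N_FAIL + 2) / (2 * N_TEST * T_TEST);  % failure rate bound
    Qconf      = 1 - exp(-T_TEST * LAMBDAconf);                      % unavailability bound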

For small values of N_FAIL this works fine, but as N_FAIL grows,
LAMBDAnom > LAMBDAconf and Qnom > Qconf.
Example: N_TEST = 1000, T_TEST = 1000, CL = 0.99, N_FAIL > ~260.
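
A small sweep that reproduces this crossover (the search range 250:270 is just a guess around the value I observed):

    % Find the smallest N_FAIL where the nominal rate overtakes the bound
    N_TEST = 1000; T_TEST = 1000; CL = 0.99;
    for N_FAIL = 250:270
        LAMBDAnom  = -log(1 - N_FAIL/N_TEST) / T_TEST;
        LAMBDAconf = chi2inv(CL, 2*N_FAIL + 2) / (2 * N_TEST * T_TEST);
        if LAMBDAnom > LAMBDAconf
            fprintf('crossover at N_FAIL = %d\n', N_FAIL);
            break
        end
    end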

This seems wrong to me: applying a confidence level should make the results
pessimistic, i.e. Qconf should be greater than Qnom, and likewise for LAMBDA.

Therefore my questions:
- Is my approach fundamentally wrong?
- Does the confidence approach with chi2inv work for large values of N_FAIL,
  - in general?
  - numerically? (I know this function isn't easy to implement.)
- Are there known workarounds?


Thanks in advance

M.Schäfer