If these regularity conditions are satisfied

http://en.wikipedia.org/wiki/Maximum_likelihood#Asymptotic_normality
then it follows that the MLE \( \hat{\theta} \) is asymptotically multivariate normal:

\( \sqrt{n}(\hat{\theta} - \theta_0) \xrightarrow{d} \mathcal{N}_p(0, I^{-1}) \)

where \( p \) is the dimension of \( \theta \), \( \theta_0 \) is the true parameter, and \( I = I(\theta_0) \) is the Fisher information matrix for a single observation, evaluated at the true parameter.
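As a quick sanity check (not part of the original derivation), here is a small simulation for a one-parameter example: exponential data with rate \( \lambda_0 \), where the MLE is \( \hat{\lambda} = 1/\bar{x} \) and \( I(\lambda) = 1/\lambda^2 \). The specific numbers (rate 2, \( n = 200 \)) are illustrative choices, not from the answer above.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulation check of asymptotic normality for a one-parameter example:
# X ~ Exponential(rate = lam0).  The MLE is lam_hat = 1 / mean(X), and the
# Fisher information per observation is I(lam) = 1 / lam**2, so the
# asymptotic variance of sqrt(n) * (lam_hat - lam0) should be
# I(lam0)^{-1} = lam0**2.
lam0 = 2.0      # true rate (illustrative)
n = 200         # sample size per replication
reps = 5000     # Monte Carlo replications

lam_hat = 1.0 / rng.exponential(scale=1.0 / lam0, size=(reps, n)).mean(axis=1)
scaled = np.sqrt(n) * (lam_hat - lam0)

print(scaled.var())    # should be close to I(lam0)^{-1} = lam0**2 = 4
print(scaled.mean())   # should be close to 0
```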

Now we invoke another well-known result:

(see, e.g., Theorem 7 in
http://www2.econ.iastate.edu/classes/econ671/hallam/documents/QUAD_NORM.pdf)

\( [\sqrt{n}(\hat{\theta} - \theta_0)]^T (I^{-1})^{-1} [\sqrt{n}(\hat{\theta} - \theta_0)] = n(\hat{\theta} - \theta_0)^T I (\hat{\theta} - \theta_0) \xrightarrow{d} \chi^2(p) \)
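The quadratic-form fact being invoked can be checked numerically: if \( Z \sim \mathcal{N}_p(0, \Sigma) \), then \( Z^T \Sigma^{-1} Z \sim \chi^2(p) \) exactly. A minimal sketch, with an arbitrary positive-definite \( \Sigma \) standing in for \( I^{-1} \):

```python
import numpy as np

rng = np.random.default_rng(1)

# If Z ~ N_p(0, Sigma), then Z^T Sigma^{-1} Z ~ chi^2(p).
# Here Sigma plays the role of I^{-1} in the derivation above.
p = 3
A = rng.normal(size=(p, p))
sigma = A @ A.T + p * np.eye(p)      # an arbitrary positive-definite covariance
sigma_inv = np.linalg.inv(sigma)

z = rng.multivariate_normal(np.zeros(p), sigma, size=100_000)
q = np.einsum("ij,jk,ik->i", z, sigma_inv, z)   # Z^T Sigma^{-1} Z for each draw

print(q.mean())   # a chi^2(p) variable has mean p, here 3
print(q.var())    # and variance 2p, here 6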

Given a consistent estimator \( \hat{I} \) of the information matrix (for example, \( I(\hat{\theta}) \) or the observed information), the set

\( \{\theta_0: n(\hat{\theta} - \theta_0)^T \hat{I} (\hat{\theta} - \theta_0) \leq \chi^2_{1 - \alpha}(p) \} \)

forms an asymptotic \( 1 - \alpha \) confidence region for \( \theta_0 \), where \( \chi^2_{1 - \alpha}(p) \) denotes the \( 1 - \alpha \) quantile of the \( \chi^2(p) \) distribution. Geometrically, the region is an ellipsoid in \( \mathbb{R}^p \) centered at \( \hat{\theta} \).
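To make this concrete, here is a sketch for a two-parameter case: \( \theta = (\mu, \sigma^2) \) of a normal sample, using the plug-in estimate \( \hat{I} = I(\hat{\theta}) \) as the consistent estimator. The function name, true values, and sample sizes are all illustrative assumptions, not part of the answer above.

```python
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(2)

# Asymptotic confidence ellipse for theta = (mu, sigma^2) of a normal sample.
# The per-observation Fisher information is diag(1/sigma^2, 1/(2*sigma^4));
# we plug in the MLE to get a consistent estimate I_hat.
mu0, var0 = 1.0, 4.0    # true parameters (for the coverage check below)
n, alpha = 500, 0.05

def in_region(x, theta0):
    """Is theta0 inside the 1 - alpha asymptotic confidence ellipse?"""
    mu_hat = x.mean()
    var_hat = x.var()                 # MLE of sigma^2 (divides by n)
    I_hat = np.diag([1.0 / var_hat, 1.0 / (2.0 * var_hat**2)])
    d = np.array([mu_hat, var_hat]) - theta0
    stat = n * d @ I_hat @ d          # n (theta_hat - theta0)^T I_hat (theta_hat - theta0)
    return stat <= chi2.ppf(1.0 - alpha, df=2)

# Monte Carlo coverage: theta0 should fall inside the region ~95% of the time.
hits = [in_region(rng.normal(mu0, np.sqrt(var0), size=n), np.array([mu0, var0]))
        for _ in range(2000)]
print(np.mean(hits))   # should be near 1 - alpha = 0.95
```

Note that the region is defined implicitly by a membership test; to draw the ellipse itself you would take the contour \( n(\hat{\theta} - \theta_0)^T \hat{I} (\hat{\theta} - \theta_0) = \chi^2_{1-\alpha}(p) \).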