Associate Professor, University of New South Wales

Saddlepoint approximations to densities and tail-area probabilities of certain statistics turn
out to be surprisingly accurate down to very small sample sizes. Although asymptotic in
spirit (with respect to the sample size, for example), they sometimes give accurate
approximations even for a sample size of one. Moreover, the approximation error is
relative rather than absolute, so these approximations perform very well in the tails,
where competing methods usually fail. These are the decisive advantages of saddlepoint
approximations in statistics. I have applied saddlepoint methods to derive the joint density of the slope and
intercept in the classical Linear Structural Relationship model. A simulation study supports their good
performance for sample sizes as small as 5 or 10.

Similar in spirit to the saddlepoint method is the
**Wiener germ** approximation, which I have applied to approximating the non-central chi-square
distribution and its quantiles. Here the approximation is asymptotic with
respect to the degrees of freedom of the distribution. It turns out to be very accurate even down
to one degree of freedom and performs better than any other approximation of
the non-central chi-square that has been suggested in key reference works such as
Johnson and Kotz's volume on continuous univariate distributions.
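The relative-error property of saddlepoint approximations can be illustrated with a small self-contained sketch (a textbook example of my own choosing, not the structural-relationship model above): for the sum of n unit exponentials, the cumulant generating function K(t) = -n log(1 - t) is available in closed form, so the saddlepoint density can be compared with the exact Gamma(n, 1) density.

```python
import math

def saddlepoint_density(x, n):
    """Saddlepoint density approximation for the sum of n Exp(1) variables.

    K(t) = -n*log(1-t), so the saddlepoint equation K'(t) = n/(1-t) = x
    gives t_hat = 1 - n/x, with K''(t_hat) = x**2 / n.
    """
    t_hat = 1.0 - n / x                 # solution of K'(t) = x
    K = -n * math.log(1.0 - t_hat)      # K evaluated at the saddlepoint
    K2 = x * x / n                      # K'' evaluated at the saddlepoint
    return math.exp(K - t_hat * x) / math.sqrt(2.0 * math.pi * K2)

def exact_gamma_density(x, n):
    """Exact Gamma(shape=n, scale=1) density: x^(n-1) e^(-x) / Gamma(n)."""
    return x ** (n - 1) * math.exp(-x) / math.gamma(n)

# The ratio saddlepoint/exact is constant in x: the error is relative,
# not absolute, even far out in the tail.
for x in (0.5, 1.0, 5.0, 20.0):
    print(x, saddlepoint_density(x, 1) / exact_gamma_density(x, 1))
```

Even with n = 1, the ratio equals e/√(2π) ≈ 1.084 at every x, so renormalizing removes the error entirely; this uniformity of the relative error is precisely what makes the method reliable in the tails.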

When a saddlepoint approximation is "inverted", it can be used for quantile evaluation. This is an interesting alternative to the
standard **Cornish-Fisher** method for
quantile evaluation. We have suggested an explicit approximation to the
inverse of the saddlepoint approximation for the purpose of quantile
evaluation and have demonstrated numerically its superior performance in
comparison to the Cornish-Fisher method.
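For context, the Cornish-Fisher baseline can be sketched as follows (a generic illustration using the usual second-order formula, not the explicit saddlepoint inversion proposed in our work): it corrects the normal quantile z_p with terms involving the skewness and excess kurtosis of the standardized statistic.

```python
import math
from scipy.stats import norm, gamma

def cornish_fisher_quantile(p, skew, ekurt):
    """Second-order Cornish-Fisher approximation to the p-quantile of a
    standardized (mean 0, variance 1) random variable."""
    z = norm.ppf(p)
    return (z
            + (z**2 - 1) * skew / 6
            + (z**3 - 3*z) * ekurt / 24
            - (2*z**3 - 5*z) * skew**2 / 36)

# Example: standardized mean of n = 10 unit exponentials, whose exact
# distribution is a standardized Gamma(n, 1).
n, p = 10, 0.95
skew, ekurt = 2 / math.sqrt(n), 6 / n          # moments of the standardized mean
cf = cornish_fisher_quantile(p, skew, ekurt)
exact = (gamma.ppf(p, a=n) - n) / math.sqrt(n)  # exact standardized quantile
print(cf, exact, norm.ppf(p))
```

Here the Cornish-Fisher value is much closer to the exact quantile than the plain normal quantile z_p; the saddlepoint inversion improves on this baseline further still, particularly in the extreme tails.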

Higher-order **Edgeworth expansions** deliver better approximations to the limiting distribution of a
statistic than the first-order approximation given by the
normal distribution. Using these higher-order expansions can be beneficial,
especially when the sample size is small to moderate. Although the expansions
are well known in some standard situations, they can be non-trivial to derive
for more complicated estimators such as the kernel estimator of the p-th
quantile. We have studied the Edgeworth expansion of the kernel estimator of
the p-th quantile and have demonstrated, analytically and numerically, the size
and order of the improvement achieved over the normal
approximation.
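The flavour of such an expansion is easiest to see in the classical case of a sample mean (an illustrative textbook example, not the kernel quantile estimator itself): the one-term Edgeworth expansion adds a skewness correction of order n^(-1/2) to the normal CDF.

```python
import math
from scipy.stats import norm, gamma

def edgeworth_cdf(x, n, skew):
    """One-term Edgeworth expansion for the CDF of the standardized mean
    of n i.i.d. variables with the given skewness:

        P(S_n <= x) ~ Phi(x) - phi(x) * skew * (x**2 - 1) / (6 * sqrt(n))
    """
    return norm.cdf(x) - norm.pdf(x) * skew * (x**2 - 1) / (6 * math.sqrt(n))

# Standardized mean of n = 5 unit exponentials (skewness 2); the sum is
# exactly Gamma(n, 1), so the exact CDF is available for comparison.
n, x = 5, 2.0
exact = gamma.cdf(n + x * math.sqrt(n), a=n)
print(edgeworth_cdf(x, n, skew=2.0), norm.cdf(x), exact)
```

At this small sample size the skewness term already removes most of the normal approximation's error, which is the kind of improvement quantified, in a far less tractable setting, for the kernel quantile estimator.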