Asymptotic Expansions

Posted by Beetle B. on Thu 25 September 2014

A sequence of functions \(g_{k}(N)\) is called an asymptotic scale if \(g_{k+1}(N)=o(g_{k}(N))\) as \(N\to\infty\).

\(f(N)\sim c_{0}g_{0}(N)+c_{1}g_{1}(N)+\dots\) is called an asymptotic expansion of \(f\).

Theorem

Assume that a rational generating function \(f(z)/g(z)\), with \(f(z)\) and \(g(z)\) relatively prime and \(g(0)\ne 0\), has a unique pole \(1/\beta\) of smallest modulus, and that the multiplicity of that pole is \(\nu\). Then

\begin{equation*} \left[z^{n}\right]\frac{f(z)}{g(z)}\sim C\beta^{n}n^{\nu-1} \end{equation*}

where

\begin{equation*} C=\nu\frac{\left(-\beta\right)^{\nu}f(1/\beta)}{g^{(\nu)}(1/\beta)} \end{equation*}

A simple example:

\begin{equation*} a_{n}=5a_{n-1}-6a_{n-2},a_{0}=0,a_{1}=1 \end{equation*}

After making the recurrence valid for all \(n\) (using Kronecker delta functions), and then converting to generating functions, we have:

\begin{equation*} A(z)=5zA(z)-6z^{2}A(z)+z \quad\Longrightarrow\quad A(z)=\frac{z}{1-5z+6z^{2}}=\frac{z}{(1-3z)(1-2z)} \end{equation*}

The pole of smallest modulus is \(1/3\), so \(\beta=3\) and \(\nu=1\). With \(f(z)=z\) and \(g(z)=1-5z+6z^{2}\), we have \(g'(1/3)=-1\), so \(C=1\cdot(-3)\cdot(1/3)/(-1)=1\) and \(a_{n}\sim 3^{n}\).
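A quick numerical check (a minimal Python sketch):

```python
# Iterate a_n = 5a_{n-1} - 6a_{n-2} with a_0 = 0, a_1 = 1 and
# compare a_n against the predicted asymptotic 3^n.
a, b = 0, 1
for n in range(2, 31):
    a, b = b, 5 * b - 6 * a
print(b / 3**30)  # -> 0.99999..., consistent with a_n ~ 3^n
```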

Well Known Expansions

\begin{equation*} e^{x}=1+x+\frac{x^{2}}{2!}+\frac{x^{3}}{3!} + O(x^{4}) \end{equation*}
\begin{equation*} \ln(1+x)=x-\frac{x^{2}}{2}+\frac{x^{3}}{3} + O(x^{4}) \end{equation*}
\begin{equation*} \left(1+x\right)^{k}=1+kx+\dbinom{k}{2}x^{2}+\dbinom{k}{3}x^{3}+O(x^{4}) \end{equation*}
\begin{equation*} \frac{1}{1-x}=1+x+x^{2}+x^{3}+O(x^{4}) \end{equation*}
\begin{equation*} H_{N}=\ln N+\gamma+\frac{1}{2N}-\frac{1}{12N^{2}}+O\left(\frac{1}{N^{4}}\right) \end{equation*}
\begin{equation*} \ln N!=N\ln N-N+\ln\sqrt{2\pi N}+O\left(\frac{1}{N}\right) \end{equation*}

The expansions in \(x\) are more accurate as \(x\) approaches 0. Normally we're interested in behavior as \(N\) approaches \(\infty\), so we can simply substitute \(x=1/N\).

Techniques

Simplification

Simply discard unneeded terms. If the expansion already carries an \(O(1/N)\) error term, then all terms of order \(1/N^{2}\) and smaller can be absorbed into it.
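For example, to accuracy \(O(1/N)\):

\begin{equation*} H_{N}=\ln N+\gamma+\frac{1}{2N}-\frac{1}{12N^{2}}+O\left(\frac{1}{N^{4}}\right)=\ln N+\gamma+O\left(\frac{1}{N}\right) \end{equation*}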

Substitution

In a known expansion, replace \(x\) with something appropriate.
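For example, substituting \(x=1/N\) in the exponential series gives

\begin{equation*} e^{1/N}=1+\frac{1}{N}+\frac{1}{2N^{2}}+O\left(\frac{1}{N^{3}}\right) \end{equation*}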

Factoring

Estimate the leading term, factor it out, and expand the rest:

\begin{equation*} \ln(N^{2}+N)=\ln(N^{2}(1+1/N))=2\ln N+\ln(1+1/N) \end{equation*}

Now you can expand the last term…
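Using the \(\ln(1+x)\) expansion with \(x=1/N\):

\begin{equation*} \ln(N^{2}+N)=2\ln N+\frac{1}{N}-\frac{1}{2N^{2}}+O\left(\frac{1}{N^{3}}\right) \end{equation*}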

Multiplication

Do term by term multiplication to some desired accuracy, then combine and simplify.

\begin{equation*} \left(H_{N}\right)^{2}=\left(\ln N+\gamma+O\left(\frac{1}{N}\right)\right)\left(\ln N+\gamma+O\left(\frac{1}{N}\right)\right) \end{equation*}
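Carrying out the multiplication and absorbing the small terms:

\begin{equation*} \left(H_{N}\right)^{2}=\ln^{2}N+2\gamma\ln N+\gamma^{2}+O\left(\frac{\ln N}{N}\right) \end{equation*}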

Division

Expand numerator and denominator, factor the leading term out of the denominator, expand \(1/(1-x)\), then multiply.

\begin{equation*} \frac{H_{N}}{\ln\left(N+1\right)}=\frac{\ln N+\gamma+O\left(\frac{1}{N}\right)}{\ln N+O\left(\frac{1}{N}\right)} \end{equation*}
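Here \(\ln(N+1)=\ln N+O(1/N)\). Factoring \(\ln N\) out of the denominator and expanding \(1/(1-x)\):

\begin{equation*} \frac{H_{N}}{\ln(N+1)}=1+\frac{\gamma}{\ln N}+O\left(\frac{1}{N\ln N}\right) \end{equation*}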

Composition

\begin{equation*} e^{H_{N}} \end{equation*}

Expand \(H_{N}\) and then apply the Taylor Series.
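Since \(H_{N}=\ln N+\gamma+O(1/N)\),

\begin{equation*} e^{H_{N}}=e^{\gamma}Ne^{O(1/N)}=e^{\gamma}N\left(1+O\left(\frac{1}{N}\right)\right) \end{equation*}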

Exp/Log Trick

Sometimes it's convenient to rewrite a function as follows:

\begin{equation*} f(x)=\exp(\ln(f(x))) \end{equation*}

Then expand the \(\ln\) function, and then use Taylor on the exponential.
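A standard illustration:

\begin{equation*} \left(1+\frac{1}{N}\right)^{N}=\exp\left(N\ln\left(1+\frac{1}{N}\right)\right)=\exp\left(1-\frac{1}{2N}+O\left(\frac{1}{N^{2}}\right)\right)=e\left(1-\frac{1}{2N}+O\left(\frac{1}{N^{2}}\right)\right) \end{equation*}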

Asymptotics of Finite Sums

Sometimes we need an approximation for \(\sum_{k=0}^{N}f(k)\) for large \(N\).

Bounding the Tail

If the terms are rapidly decreasing, extend the sum to infinity:

\begin{equation*} N!\sum_{k=0}^{N}\frac{\left(-1\right)^{k}}{k!}=N!e^{-1}-R_{N} \end{equation*}

where

\begin{equation*} R_{N}=N!\sum_{k>N}\frac{\left(-1\right)^{k}}{k!} \end{equation*}

By looking at the first few terms of \(R_{N}\), we can immediately see that it is \(O(1/N)\).
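Indeed, since the series alternates with terms decreasing in magnitude, the remainder is bounded by its first term:

\begin{equation*} \left|R_{N}\right|\le\frac{N!}{(N+1)!}=\frac{1}{N+1}=O\left(\frac{1}{N}\right) \end{equation*}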

Using The Tail

If the terms are rapidly increasing, the last term may dominate.

\begin{equation*} \sum_{k=0}^{N}k!=N!\left(1+\frac{1}{N}+\sum_{k=0}^{N-2}\frac{k!}{N!}\right) \end{equation*}

By looking at the largest terms, we can see that the remaining summation is \(O(1/N^{2})\).
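Indeed, since \(\sum_{k=0}^{m}k!\le 2\,m!\) for \(m\ge 1\), the sum is bounded by twice its largest term:

\begin{equation*} \sum_{k=0}^{N-2}\frac{k!}{N!}\le\frac{2(N-2)!}{N!}=\frac{2}{N(N-1)}=O\left(\frac{1}{N^{2}}\right) \end{equation*}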

Approximating with an Integral

Just convert the sum to an integral. Try to find some justification, though…
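For example, approximating the harmonic sum:

\begin{equation*} H_{N}=\sum_{k=1}^{N}\frac{1}{k}\approx\int_{1}^{N}\frac{dx}{x}=\ln N \end{equation*}

which matches the leading term of the expansion above.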

Euler-Maclaurin Summation

Let \(f\) be defined on \([1,\infty)\), and suppose its derivatives exist and are absolutely integrable. Then:

\begin{equation*} \sum_{k=1}^{N}f(k)=\int_{1}^{N}f(x)dx+\frac{1}{2}f(N)+C_{f}+\frac{1}{12}f'(N)-\frac{1}{720}f'''(N)+\dots \end{equation*}

The series on the right generally diverges; it is only an asymptotic series, so take care with how many terms you take.
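Taking \(f(x)=1/x\), so that \(\int_{1}^{N}f(x)dx=\ln N\), \(f(N)/2=1/(2N)\), and \(f'(N)/12=-1/(12N^{2})\), recovers the harmonic number expansion above with \(C_{f}=\gamma\):

\begin{equation*} H_{N}=\ln N+\gamma+\frac{1}{2N}-\frac{1}{12N^{2}}+\dots \end{equation*}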

Bivariate Asymptotics

The function may have two variables (usually \(N\) and \(k\)). How you expand may depend on their relative sizes.

Some well known ones:

Normal

\begin{equation*} \dbinom{2N}{N-k}2^{-2N}=\frac{e^{-k^{2}/N}}{\sqrt{\pi N}}+O\left(\frac{1}{N^{3/2}}\right) \end{equation*}

This holds uniformly, for all \(k\).

\begin{equation*} \dbinom{2N}{N-k}2^{-2N}=\frac{e^{-k^{2}/N}}{\sqrt{\pi N}}\left(1+O\left(\frac{1}{N}\right)+O\left(\frac{k^{4}}{N^{3}}\right)\right) \end{equation*}

This holds when \(k\) is near the “center”.

Poisson

\begin{equation*} \dbinom{N}{k}\left(\frac{\lambda}{N}\right)^{k}\left(1-\frac{\lambda}{N}\right)^{N-k}=\frac{\lambda^{k}e^{-\lambda}}{k!}+o(1) \end{equation*}

This holds for all (or most) \(k\).

\begin{equation*} \dbinom{N}{k}\left(\frac{\lambda}{N}\right)^{k}\left(1-\frac{\lambda}{N}\right)^{N-k}=\frac{\lambda^{k}e^{-\lambda}}{k!}\left(1+O\left(\frac{1}{N}\right)+O\left(\frac{k}{N}\right)\right) \end{equation*}

This holds when \(k\) is near the “center”.

Q

\begin{equation*} \frac{N!}{(N-k)!N^{k}}=e^{-k^{2}/(2N)}+O\left(\frac{1}{\sqrt{N}}\right) \end{equation*}

This holds for all (or most) \(k\).

\begin{equation*} \frac{N!}{(N-k)!N^{k}}=e^{-k^{2}/(2N)}\left(1+O\left(\frac{k}{N}\right)+O\left(\frac{k^{3}}{N^{2}}\right)\right) \end{equation*}

This holds when \(k\) is near the “center”.
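A quick numerical sanity check of the \(Q\) approximation (a minimal Python sketch; the choice \(N=1000\) is arbitrary):

```python
from math import exp

# Compare N!/((N-k)! N^k) -- computed as a product to avoid huge
# factorials -- against the estimate e^{-k^2/(2N)} for N = 1000.
N = 1000
for k in [1, 10, 31, 100]:
    exact = 1.0
    for i in range(k):
        exact *= (N - i) / N  # accumulate N(N-1)...(N-k+1)/N^k
    approx = exp(-k * k / (2 * N))
    print(f"k={k:4d}  exact={exact:.6f}  approx={approx:.6f}")
```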

Estimating Sums with Bivariate Estimates

If you have a sum of a bivariate function, you may need to use different estimates for different portions of the sum.

Laplace Method

Laplace's method approximates the summand with a continuous function in the region(s) where the summand is largest, bounds the rest of the range by the tail of that function, and then integrates the function.

It takes some skill to pick the range you define as the tail: you need a cutoff beyond which you're fairly sure the summand is bounded above by your continuous function.
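For example, using the normal approximation above and replacing the sum by an integral:

\begin{equation*} \sum_{k}\dbinom{2N}{N-k}2^{-2N}\approx\int_{-\infty}^{\infty}\frac{e^{-x^{2}/N}}{\sqrt{\pi N}}dx=1 \end{equation*}

which matches the exact value of the sum.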