Computing Sums More Accurately

Posted by Beetle B. on Thu 23 May 2019

Reordering the Operands, and a Bit More

General ideas: sort all your summands in ascending order of magnitude. A more elaborate variant: sort them, and each time you compute a partial sum, insert it into its appropriate place in the sorted list.

Most algorithms use this scheme (see the sketch after this list):

  • Remove two numbers from your list, compute \(o(x+y)\), and re-insert the result into the list.
  • Repeat until only one number is left: that is your result.
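Here is a quick sketch (mine, not the book's) of this scheme with one particular strategy: always pick the two remaining numbers of smallest magnitude, kept in a heap:

    import heapq

    def smallest_first_sum(xs):
        # One instance of the general scheme: repeatedly add the two
        # entries of smallest magnitude and reinsert the partial sum,
        # which tends to keep the |T_i| small. Assumes a non-empty list.
        heap = [(abs(x), x) for x in xs]
        heapq.heapify(heap)
        while len(heap) > 1:
            _, x = heapq.heappop(heap)
            _, y = heapq.heappop(heap)
            t = x + y  # o(x + y): the hardware rounds this addition
            heapq.heappush(heap, (abs(t), t))
        return heap[0][1]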

At most you will have \(n-1\) additions. The book shows that if \(T_{i}\) is the result of the \(i\)-th addition (note: not necessarily a partial sum) and \(s\) is the computed result, then:

\begin{equation*} \left|s-\sum_{i=1}^{n}x_{i}\right|\le \mathbf{u}\sum_{i=1}^{n-1}|T_{i}| \end{equation*}

So the strategies usually involve trying to minimize \(|T_{i}|\).

Keep in mind that the above is a bound. For example, if all your \(x_{i}\) are of the same sign, ordering the inputs in increasing order minimizes the bound above. But that doesn’t equate to minimizing the error.

If there is a lot of cancellation (due to alternating signs), then sorting in descending order usually gives better results.
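To see the same-sign case concretely, here is a toy example of mine (binary64): a single large term followed by a million tiny ones.

    def naive_sum(xs):
        # Plain left-to-right recursive summation.
        s = 0.0
        for x in xs:
            s += x
        return s

    xs = [1.0] + [1e-16] * 1_000_000

    print(naive_sum(sorted(xs, key=abs)))                # ~1.0000000001, close to the true sum
    print(naive_sum(sorted(xs, key=abs, reverse=True)))  # 1.0: each 1e-16 is absorbed by the 1.0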

Compensated Sums

Kahan’s Algorithm for summation:

  • Let \(s=x_{1}\)
  • Let \(c=0\)
  • For i = 2 to n:
    • \(y=o(x_{i}+c)\)
    • \((s,c)\) = Fast2Sum(\(s,y\))

Although we used Fast2Sum, note that the conditions for using it are not always satisfied here. Nevertheless, \(c\) tends to be very close to the error.
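In Python, the whole thing (with Fast2Sum inlined) is just:

    def kahan_sum(xs):
        # Kahan's compensated summation; assumes a non-empty list.
        s = xs[0]
        c = 0.0
        for x in xs[1:]:
            y = x + c        # y = o(x_i + c)
            t = s + y        # Fast2Sum, first step
            c = y - (t - s)  # Fast2Sum error term: the part of y that s + y lost
            s = t
        return s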

The final result satisfies:

\begin{equation*} \left|s-\sum_{i=1}^{n}x_{i}\right|\le(2\mathbf{u}+O(n\mathbf{u}^{2}))\sum_{i=1}^{n}|x_{i}| \end{equation*}

Note that if you have a lot of cancellation, this algorithm may not do well. A counterexample (radix 2, precision \(p\)):

  • \(x_{1}=2^{p+1}\)
  • \(x_{2}=2^{p+1}-2\)
  • \(x_{3}=-(2^{p}-1)\)
  • \(x_{4}=-(2^{p}-1)\)
  • \(x_{5}=-(2^{p}-1)\)
  • \(x_{6}=-(2^{p}-1)\)

The exact sum should be 2, but Kahan’s algorithm gives 3.
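You can check this in binary64 (\(p=53\)) with the kahan_sum sketch above, using math.fsum as a correctly rounded reference:

    import math

    p = 53  # binary64 precision
    xs = [2.0 ** (p + 1), 2.0 ** (p + 1) - 2] + [-(2.0 ** p - 1)] * 4

    print(math.fsum(xs))  # 2.0, the exact sum
    print(kahan_sum(xs))  # 3.0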

To deal with this, you can use Priest’s algorithm, which I did not bother to copy. The error bound on that is:

If using round to nearest, sorting the inputs in descending order of magnitude, and \(n\le\beta^{p-3}\), then

\begin{equation*} \left|s-\sum x_{i}\right|\le2\mathbf{u}\left|\sum x_{i}\right| \end{equation*}

Obviously, this algorithm is slower as you need to sort the inputs.

In Kahan’s algorithm, \(c\) often has much smaller magnitude than the \(x_{i}\), and adding them together leads to loss of information. Priest’s algorithm accounts for this, which is why it is called doubly compensated summation.

To compensate even more, I think other algorithms keep accumulating the \(c\) terms (usually of comparable magnitude) and add their sum back in at the end. Below is the cascaded summation algorithm:

  • \(s=x_{1}\)
  • \(e=0\)
  • for i = 2 to n:
    • \((s,e_{i})\) = 2Sum(\(s,x_{i}\))
    • \(e=o(e+e_{i})\)
  • Return \(o(s+e)\)
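A Python sketch, with 2Sum written out as Knuth's branch-free version (which, unlike Fast2Sum, needs no assumption on the operands' relative magnitudes):

    def two_sum(a, b):
        # Knuth's 2Sum: s = o(a + b), t = the exact rounding error.
        s = a + b
        bv = s - a              # "virtual" b
        av = s - bv             # "virtual" a
        return s, (a - av) + (b - bv)

    def cascaded_sum(xs):
        # Cascaded summation; assumes a non-empty list.
        s = xs[0]
        e = 0.0
        for x in xs[1:]:
            s, ei = two_sum(s, x)
            e += ei             # e = o(e + e_i)
        return s + e            # report the accumulated errors at the end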

The bound on this is:

\begin{equation*} \left|s-\sum x_{i}\right|\le \mathbf{u}\left|\sum x_{i}\right|+\gamma_{n-1}^{2}\sum\left|x_{i}\right| \end{equation*}

This requires \(n\mathbf{u}<1\) (here \(\gamma_{k}=k\mathbf{u}/(1-k\mathbf{u})\)).

The book goes on to discuss the K-fold algorithm for summing vectors. I didn’t bother noting the error bound on that.

To get a sense of errors, here is the result of one experiment:

Method            Error (ulps)
Increasing Order  18.9
Decreasing Order  16.9
Kahan’s           6.9
Priest            0.094
Cascaded          0.094

It’s been shown that, under certain conditions, an RN-addition algorithm without branching cannot be guaranteed to always return a correctly rounded result when adding 3 or more floating-point numbers.

Summation Algorithms That Somehow Imitate Fixed-Point Arithmetic

This section has a survey of methods that involve splitting the terms into multiples of powers of 2. I did not read the details.

On the Sum of Three Floating Point Numbers

This section is about accurately computing the sum of three floating-point numbers using a triple-double intermediate format. I did not read the details.