The **target format** is the format of the result. The **target
precision** is the precision of the target format.

When evaluating polynomials, it may not be enough to represent the coefficients in the target format (e.g. if you want a single-precision result, you may need to store your coefficients in double precision).
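As a small illustration (my own, in Python, not from the book), we can emulate a binary32 (single-precision) target while keeping the coefficients in binary64. The cubic Taylor coefficients for \(\exp\) and the rounding helper below are illustrative choices:

```python
import struct

def to_f32(x):
    """Round a Python float (binary64) to the nearest binary32 value."""
    return struct.unpack('f', struct.pack('f', x))[0]

# Degree-3 Taylor coefficients of exp at 0 (illustrative, not a minimax fit).
coeffs = [1.0, 1.0, 0.5, 1.0 / 6.0]

def horner(cs, x):
    """Evaluate the polynomial by Horner's rule."""
    r = 0.0
    for c in reversed(cs):
        r = r * x + c
    return r

x = 0.5
# Keep coefficients and arithmetic in double, round only the final result:
single_result = to_f32(horner(coeffs, x))
# Versus rounding the coefficients to single precision up front:
rounded_first = to_f32(horner([to_f32(c) for c in coeffs], x))
```

Note that `to_f32(1.0 / 6.0)` already differs from `1.0 / 6.0`: the coefficient itself cannot be stored exactly in the target format, which is the point the book is making.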

I won’t give the algorithms here, but the book has algorithms for arithmetic on numbers represented by several words. Use Dekker’s or Bailey’s algorithm for adding two double-word numbers; the latter is more accurate. If \(|x_{l}|\le2^{-p}|x_{h}|\), where \(x_{h}+x_{l}\) is the double-word number and \(p\) is the precision, then the relative error of the accurate addition is bounded by:

\[\frac{3\cdot2^{-2p}}{1-4\cdot2^{-p}}\]

as long as \(p\ge3\).
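A sketch of the building blocks in Python (whose floats are binary64). This follows the general structure of the accurate double-word addition built from error-free transformations; the names are mine, not the book's:

```python
def two_sum(a, b):
    """Knuth's 2Sum: s = RN(a + b) plus the exact error e, so a + b = s + e."""
    s = a + b
    a_approx = s - b
    b_approx = s - a_approx
    return s, (a - a_approx) + (b - b_approx)

def fast_two_sum(a, b):
    """Dekker's Fast2Sum; requires |a| >= |b|."""
    s = a + b
    return s, b - (s - a)

def dw_plus_dw(xh, xl, yh, yl):
    """Add two double-word numbers (xh, xl) and (yh, yl)."""
    sh, sl = two_sum(xh, yh)      # high parts, with exact error
    th, tl = two_sum(xl, yl)      # low parts, with exact error
    c = sl + th
    vh, vl = fast_two_sum(sh, c)
    w = tl + vl
    return fast_two_sum(vh, w)
```

For example, `dw_plus_dw(1.0, 2**-60, 1.0, 2**-60)` returns `(2.0, 2**-59)`, keeping the low-order bits that a plain `+` on the high words would discard.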

Bailey also has an algorithm for multiplying a double-word number by a floating-point number. Under the same condition \(|x_{l}|\le2^{-p}|x_{h}|\), the relative error is bounded by \(2\cdot2^{-2p}\).
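A sketch of this multiplication in Python. Without a fused multiply-add, the exact product error comes from Dekker's product with Veltkamp splitting; the structure mirrors the double-word-times-float algorithms, but the names and details here are my own:

```python
def fast_two_sum(a, b):
    """Dekker's Fast2Sum; requires |a| >= |b|."""
    s = a + b
    return s, b - (s - a)

def split(a):
    """Veltkamp split of a binary64 float into two halves of <= 27 bits."""
    c = 134217729.0 * a          # 2**27 + 1
    hi = c - (c - a)
    return hi, a - hi

def two_prod(a, b):
    """Dekker's product: p = RN(a * b) plus the exact error e, a * b = p + e."""
    p = a * b
    ah, al = split(a)
    bh, bl = split(b)
    e = ((ah * bh - p) + ah * bl + al * bh) + al * bl
    return p, e

def dw_times_fp(xh, xl, y):
    """Multiply the double-word number (xh, xl) by the float y."""
    ch, cl1 = two_prod(xh, y)    # exact product of the high word
    cl2 = xl * y                 # low-word contribution (rounded)
    th, tl1 = fast_two_sum(ch, cl2)
    return fast_two_sum(th, tl1 + cl1)
```

For example, `dw_times_fp(1.0, 2**-70, 2.0)` returns `(2.0, 2**-69)`.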

If you want to do arithmetic on triple-word numbers, use Lauter’s algorithms. The book has some theorems on triple-word operations.

The book gives an example of computing \(\cos\) using a polynomial approximation, and then computing a bound on the error within some interval. It’s a nice example, but the author points out that doing this analysis by hand in general would be extremely tedious, especially since the polynomials can have degree 10 or more. In practice, much of the work of calculating such error bounds is delegated to the tool Gappa.
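A crude, empirical version of this exercise in Python. The degree-6 Taylor coefficients below are my own illustrative choice, not the book's polynomial, and sampling only *suggests* a bound on the approximation error; a tool like Gappa proves a bound that also covers the rounding error and holds for every representable input:

```python
import math

# Degree-6 Taylor coefficients of cos at 0 (illustrative, not minimax).
coeffs = [1.0, 0.0, -0.5, 0.0, 1.0 / 24.0, 0.0, -1.0 / 720.0]

def poly(x):
    """Evaluate the approximation polynomial by Horner's rule."""
    r = 0.0
    for c in reversed(coeffs):
        r = r * x + c
    return r

# Sample the error against math.cos over [-0.1, 0.1].
a, b, n = -0.1, 0.1, 10001
worst = 0.0
for i in range(n):
    x = a + (b - a) * i / (n - 1)
    worst = max(worst, abs(poly(x) - math.cos(x)))
```

On this small interval the sampled error stays below about \(10^{-12}\), dominated by the truncated \(x^{8}/8!\) term.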

The book has Maple programs to calculate round-to-nearest for 32-bit and 64-bit numbers, as well as programs to compute the ulp (unit in the last place).
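The book's programs are in Maple; Python's standard library offers a rough analogue via `math.ulp` and `math.nextafter` (both available since Python 3.9). Note that definitions of ulp differ in the literature, e.g. at powers of two; the hand-rolled version below is one common choice, not necessarily the book's definition:

```python
import math

def ulp(x):
    """One common definition: the gap between |x| and the next float up."""
    x = abs(x)
    return math.nextafter(x, math.inf) - x
```

With this definition `ulp(1.0)` agrees with `math.ulp(1.0)`, which is \(2^{-52}\) for binary64.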