Suppose we want to compute the radix \(\beta\) of a floating-point system. The code below will do it for you - it works assuming the compiler/interpreter does not optimize away or reorder the floating-point operations (e.g. by constant folding or by keeping intermediates in extended precision).

```python
A = 1.0
B = 1.0
while (A + 1.0) - A == 1.0:
    A *= 2.0
B = 1.0
while (A + B) - A != B:
    B += 1.0
print(B)
```

Now why does this work?

Let \(A_{i}\) be the value of `A` after the \(i\)-th iteration. If
\(2^{i}\le\beta^{p}-1\), then \(A_{i}=2^{i}\) exactly, and
\(A_{i}+1\le\beta^{p}\). Since every integer up to \(\beta^{p}\) is
exactly representable, \(o(A_{i}+1)=A_{i}+1\), and the subtraction
\((A_{i}+1)-A_{i}=1\) is exact as well. So as long as this condition
holds, the loop condition remains true.
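As a concrete check (an illustration for one common system, not part of the argument above): in IEEE 754 double precision we have \(\beta=2\) and \(p=53\), so every integer up to \(2^{53}-1\) is exactly representable and the loop condition holds, for instance, at \(A=2^{52}\):

```python
# IEEE 754 double precision: beta = 2, p = 53.
# A = 2**52 <= 2**53 - 1, so A + 1 is exactly representable and the
# subtraction is exact: the loop condition (A + 1) - A == 1 holds.
A = 2.0**52
print((A + 1.0) - A == 1.0)  # True
```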

Now let \(j\) be the first iteration where
\(2^{j}\ge\beta^{p}\). Then \(A_{j}=o(2^{j})\). Since \(j\) is the
first such iteration, \(2^{j-1}<\beta^{p}\), and because
\(\beta\ge2\) we get \(2^{j}<2\beta^{p}\le\beta^{p+1}\), so
\(\beta^{p}\le A_{j}<\beta^{p+1}\). The *successor* of \(A_{j}\) is
therefore \(A_{j}+\beta\) (the exponent is now \(1\), so the ulp is
now \(\beta\)). Hence \(o(A_{j}+1)\) is either \(A_{j}\) or
\(A_{j}+\beta\), depending on the rounding mode. Either way,
\((A_{j}+1)-A_{j}\) is \(0\) or \(\beta\), neither of which is \(1\)
(recall \(\beta\ge2\)), so the condition becomes false and we exit
the first loop.
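For IEEE 754 doubles (an assumed concrete instance, \(\beta=2\), \(p=53\)) this exit happens at \(A_{j}=2^{53}\): under the default round-to-nearest, ties-to-even mode, \(2^{53}+1\) rounds back down to \(2^{53}\), so the difference is \(0\) rather than \(1\):

```python
# First iteration with 2**j >= 2**53 gives A = 2**53 exactly.
# A + 1 is halfway between A and A + 2; ties-to-even rounds it back to A,
# so (A + 1) - A is 0.0 and the first loop's condition fails.
A = 2.0**53
print((A + 1.0) - A)  # 0.0, not 1.0
```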

For the second loop, we keep incrementing `B`. For \(0<B<\beta\), \(o(A_{j}+B)\) is again either \(A_{j}\) or \(A_{j}+\beta\), so \((A_{j}+B)-A_{j}\) is \(0\) or \(\beta\), neither of which equals \(B\). Once \(B=\beta\), the sum \(A_{j}+\beta\) is exactly representable (it is the first successor of \(A_{j}\)), the subtraction is exact, and the condition becomes an equality, so we print \(\beta\).
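Wrapping the original snippet in a function (a restatement for convenience, not new logic) lets us run the whole procedure end to end; on any platform with IEEE 754 doubles it returns \(2.0\):

```python
def radix() -> float:
    """Return the radix of the platform's float type (the snippet above)."""
    A = 1.0
    # Grow A by doubling until A + 1 is no longer exactly representable.
    while (A + 1.0) - A == 1.0:
        A *= 2.0
    # Find the gap between A and its successor: that gap is the radix.
    B = 1.0
    while (A + B) - A != B:
        B += 1.0
    return B

print(radix())  # 2.0 on IEEE 754 hardware
```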