There are multiple definitions of the unit in the last place. I think most of them agree when \(x\) is not near a boundary point (a power of the radix).

Here is the original definition:

The **unit in the last place** is a function of \(x\) that gives the
gap between the two floating point numbers closest to \(x\) (even
when \(x\) itself is a floating point number).

I will not go over all the definitions presented in the handbook. The one they settle on is:

If \(|x|\in[\beta^{e},\beta^{e+1})\), then \(\textrm{ulp}(x)=\beta^{\max(e,e_{\textit{min}})-p+1}\).
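In binary64 terms (\(\beta=2\), \(p=53\), \(e_{\textit{min}}=-1022\)) this definition can be sketched in Python; the helper name and default parameters here are my own choices, not the handbook's.

```python
import math

def ulp(x: float, p: int = 53, e_min: int = -1022) -> float:
    """Sketch of the handbook's ulp for beta = 2:
    2**(max(e, e_min) - p + 1), where 2**e <= |x| < 2**(e+1)."""
    if x == 0.0:
        return 2.0 ** (e_min - p + 1)    # gap between subnormals
    _, exp = math.frexp(abs(x))          # abs(x) = m * 2**exp, m in [0.5, 1)
    e = exp - 1                          # hence 2**e <= |x| < 2**(e+1)
    return 2.0 ** (max(e, e_min) - p + 1)

# Agrees with the standard library's math.ulp (Python 3.9+):
assert ulp(1.0) == math.ulp(1.0) == 2.0 ** -52
assert ulp(0.1) == math.ulp(0.1)
assert ulp(5e-324) == math.ulp(5e-324)   # smallest subnormal
```

Using `math.frexp` instead of `floor(log2(x))` avoids logarithm rounding trouble right at the binade boundaries.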

Some properties of this definition:

Let \(x\) be a real number and \(X\) a floating point number, and for this section assume \(|x|\) does not exceed the largest representable floating point number.

For binary systems, \(|X-x|<\frac{1}{2}\textrm{ulp}(x)\implies
X=\textrm{RN}(x)\). Note that this is *not* true for all bases! As an
example, assume \(\beta>2\), and let \(x\) be greater than 1,
but closer to 1 than to the next floating point number. Let
\(X=1-\beta^{-p}\), the floating point number just beneath 1. The
inequality holds, because \(\textrm{ulp}(x)=\beta\cdot\textrm{ulp}(X)\), and
\(\beta>2\) leaves room for \(|X-x|\approx\beta^{-p}\) to stay under
\(\frac{1}{2}\textrm{ulp}(x)=\frac{\beta}{2}\beta^{-p}\). But \(\textrm{RN}(x)=1\ne X\).

The reason it does work for \(\beta=2\) is not given in the book, and at first I did not formally prove it. My hand-wavy explanation is that the ulp increases by a factor of 2 when you cross a binade boundary, and this factor of 2 exactly cancels the \(\frac{1}{2}\).

OK - I just did this formally for \(\beta=2\). If \(x=1\), and \(X=1-2^{-p}\) (the largest floating point value less than 1), then \(|x-X|=2^{-p}=\frac{1}{2}\textrm{ulp}(x)\). But the inequality explicitly disallows equality. Thus, for the inequality to hold, \(X\) must be \(1\). I take it this will extend to the other boundaries of binades.
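This boundary computation can be checked numerically in binary64 (\(p=53\)); `math.nextafter` needs Python 3.9+.

```python
import math

x = 1.0
X = math.nextafter(1.0, 0.0)    # largest float below 1, i.e. 1 - 2**-53
assert X == 1.0 - 2.0 ** -53

# |x - X| is exactly half an ulp of x, so the strict inequality
# |X - x| < 0.5 * ulp(x) rules this X out; only X = 1 = RN(x) satisfies it.
assert abs(x - X) == 0.5 * math.ulp(x)
```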

Also note that we cannot replace \(\textrm{ulp}(x)\) with \(\textrm{ulp}(X)\) in the above implication when \(X\) is an integer power of \(\beta\); this failure occurs in every base.

The counterexample is when \(x\) is just above \(1-\beta^{-p}\) (the floating point number just before 1) and \(X=1\). Now \(\textrm{ulp}(1)=\beta^{1-p}\), and it is easy to show that \(|1-x|<\frac{1}{2}\textrm{ulp}(1)\). But \(\textrm{RN}(x)=1-\beta^{-p}\ne1\).
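This counterexample can be checked in a toy radix-10, precision-3 system using exact rational arithmetic; all the concrete choices here (\(\beta=10\), \(p=3\), the sample \(x\)) are mine.

```python
from fractions import Fraction

beta, p = 10, 3
X = Fraction(1)                        # candidate float, a power of beta
pred = 1 - Fraction(1, beta**p)        # 0.999, the float just below 1
x = pred + Fraction(1, beta**(p + 1))  # 0.9991, a real just above 0.999
ulp_X = Fraction(1, beta**(p - 1))     # ulp(1)      = 10**-2
ulp_x = Fraction(1, beta**p)           # ulp(0.9991) = 10**-3

assert abs(X - x) < ulp_X / 2       # inequality with ulp(X) holds...
assert abs(pred - x) < abs(X - x)   # ...but x is closer to 0.999, so RN(x) != 1
assert not abs(X - x) < ulp_x / 2   # with ulp(x) the inequality correctly fails
```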

The reverse implication holds for any base:

\[X=\textrm{RN}(x)\implies|X-x|\le\frac{1}{2}\textrm{ulp}(x)\]

and

\[X=\textrm{RN}(x)\implies|X-x|\le\frac{1}{2}\textrm{ulp}(X)\]

The following is also true:

\[X\in\{\textrm{RD}(x),\textrm{RU}(x)\}\implies|X-x|<\textrm{ulp}(x)\]

The converse is not always true. The counterexample is \(X=1-\beta^{-p}\) (the largest floating point number less than 1). Let \(x\) be slightly above 1. Then we have \(\textrm{ulp}(x)=\beta^{1-p}\). We also have:

\[|X-x|<\textrm{ulp}(x)\]

But \(\textrm{RD}(x)=1\) and \(X<1\), so \(X\) is neither \(\textrm{RD}(x)\) nor \(\textrm{RU}(x)\).
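A binary64 (\(p=53\)) check of this counterexample, using exact `Fraction` arithmetic because the real \(x\) is deliberately not a float; the concrete choice of \(x\) is mine.

```python
from fractions import Fraction

p = 53
X = 1 - Fraction(1, 2**p)        # largest float below 1
x = 1 + Fraction(1, 2**(p + 1))  # a real slightly above 1, so RD(x) = 1
ulp_x = Fraction(1, 2**(p - 1))  # ulp(x) = 2**(1-p) since 1 <= x < 2

assert abs(X - x) < ulp_x        # |X - x| < ulp(x) holds...
assert X < 1                     # ...yet X is neither RD(x) = 1 nor RU(x) > 1
```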

We also have:

\[X\in\{\textrm{RD}(x),\textrm{RU}(x)\}\implies|X-x|<\textrm{ulp}(X)\]

And again, the converse is not true.

The quantity \(\textrm{ulp}(1)=\beta^{1-p}\) is called the **machine
epsilon**.
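In binary64 (\(\beta=2\), \(p=53\)) this gives \(\textrm{ulp}(1)=2^{-52}\), which is exactly what the standard library reports as the machine epsilon:

```python
import math
import sys

# ulp(1) = 2**(1-53) = 2**-52 is the binary64 machine epsilon.
assert math.ulp(1.0) == 2.0 ** -52 == sys.float_info.epsilon
```

Note that some texts instead call \(\frac{1}{2}\textrm{ulp}(1)\) the machine epsilon, so it is worth checking which convention a given source uses.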

The **unit in the first place** is defined as 0 if \(x=0\) and
\(\beta^{\lfloor\log_{\beta}|x|\rfloor}\) for all other \(x\).
It is the largest power of \(\beta\) that is not larger than
\(|x|\).

I think of it as the value \(x\) would have if its significand were \(M=100\dots00\), with the exponent unchanged.
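A binary64 sketch of ufp; the helper name is my own, and `math.frexp` again supplies the exponent without logarithm rounding worries.

```python
import math

def ufp(x: float) -> float:
    """Unit in the first place: largest power of 2 not exceeding |x|."""
    if x == 0.0:
        return 0.0
    _, exp = math.frexp(abs(x))   # abs(x) = m * 2**exp with m in [0.5, 1)
    return 2.0 ** (exp - 1)       # 2**(exp-1) <= |x| < 2**exp

assert ufp(1.0) == 1.0
assert ufp(10.0) == 8.0
assert ufp(0.1) == 0.0625        # 2**-4 <= 0.1 < 2**-3
```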

When \(x\) is in the normal range, the relationship between \(\textrm{ufp}\) and \(\textrm{ulp}\) is:

\[\textrm{ulp}(x)=\beta^{1-p}\,\textrm{ufp}(x)\]
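This relationship can be spot-checked in binary64 (\(\beta=2\), \(p=53\)), where it reads \(\textrm{ulp}(x)=2^{-52}\,\textrm{ufp}(x)\) for normal \(x\); the sample values below are arbitrary.

```python
import math

# Verify ulp(x) = 2**(1-p) * ufp(x) on a few normal binary64 values,
# computing ufp inline via frexp.
for x in (1.0, 3.14159, 0.1, 1e300, 6.25e-200):
    _, exp = math.frexp(abs(x))
    ufp_x = 2.0 ** (exp - 1)           # largest power of 2 <= |x|
    assert math.ulp(x) == 2.0 ** -52 * ufp_x
```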