Integers in computers typically have a fixed size, for example 32 or 64 bits. Since there is no such limit on the size of integers in mathematics, the question arises: how do computer integers relate to mathematical theory?

The main problem with having a finite number of bits is that we have only a finite number of integers, which means that if you keep adding 1, the sequence 1, 2, 3, 4, … must at some point start to repeat. Still we’d like to define addition and multiplication so that all the familiar rules hold: we expect that for every a, b, c we have a+(b+c)=(a+b)+c, a*(b*c)=(a*b)*c, a*(b+c)=a*b+a*c, and so on. Mathematical structures that satisfy these rules are called rings.
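To see the wrap-around concretely, here’s a minimal sketch in Python of a hypothetical 4-bit machine, where only 2^4 = 16 distinct values exist, so repeated increments must revisit earlier values:

```python
# A hypothetical 4-bit machine: only 2**4 = 16 distinct values exist.
N = 2 ** 4

def succ(x):
    # Adding 1 wraps around modulo N once we run out of values.
    return (x + 1) % N

seq = [0]
for _ in range(20):
    seq.append(succ(seq[-1]))

# seq runs 0, 1, ..., 15 and then starts over: seq[16] is 0 again.
```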

A useful way to define a finite ring of N elements is to take the integers modulo N. Formally this works by defining an equivalence relation on the integers, where *a* and *b* are equivalent when their difference is a multiple of N:

*a* ~ *b* if and only if *a* – *b* ≡ 0 (mod N).

For example, for N = 12, 7 ~ 19 because 7-19 = -12 ≡ 0 (mod 12).
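The relation is a one-liner in Python (the helper name is just illustrative):

```python
def equivalent(a, b, n):
    """a ~ b iff a - b is a multiple of n."""
    return (a - b) % n == 0

# 7 ~ 19 (mod 12) because 7 - 19 = -12, a multiple of 12.
```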

It turns out that this equivalence partitions ℤ into N subsets: those that are multiples of *N*, those that are a multiple of *N* plus 1, and so on. These subsets, also called *congruence classes*, will form the elements of the ring we’re defining. Now we have to define + and * on them in a meaningful way.

Let’s denote the congruence class of *a* by [*a*]. We have:

[*a*] = {*b* : *a* ~ *b*} = {*a* + *k*N : *k* ∈ ℤ}. Note that [*a*] = [*a* + *k*N] for every integer *k*.

For example, for N = 12, [-5] = [7] = [19] = {…, -5, 7, 19, 31, …}.
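We can list a few members of a congruence class by plugging successive values of *k* into *a* + *k*N; a sketch (the function name is made up for illustration):

```python
def class_members(a, n, ks):
    """Members a + k*n of the congruence class [a] for the given values of k."""
    return [a + k * n for k in ks]

# [7] modulo 12 contains ..., -5, 7, 19, 31, ...
# and [-5] is the very same class.
```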

It turns out we can define addition and multiplication on these classes based on the arithmetic of integers, and everything checks out:

[*a*] + [*b*] = [*a* + *b*], and [*a*]*[*b*] = [*ab*]. Since [*a*]=[*a*+N], these operations “wrap around” after N elements.

For example, for N = 12, [7]+[6] = [13] = [1]. [5]*[8] = [40] = [4].
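Computationally, working with a class means working with any one of its members and reducing modulo N at the end; a sketch using canonical representatives in {0, …, N−1}:

```python
def add_mod(a, b, n):
    # [a] + [b] = [a + b]; reduce to the representative in {0, ..., n-1}.
    return (a + b) % n

def mul_mod(a, b, n):
    # [a] * [b] = [a * b], reduced the same way.
    return (a * b) % n

# With n = 12: [7] + [6] = [1], and [5] * [8] = [4].
```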

To represent classes we can pick one number from each class: [*a*] is represented by some chosen member *a*. To encode classes on a computer we pick *a* from the set {0, 1, …, N-1}, stored in whichever way is appropriate for the machine. This gives us the **unsigned integers**. For example with N = 2^{32} we have [2948326455] + [1346640853] = [4294967308] = [12], so a computer working with 32-bit integers calculates 2948326455 + 1346640853 = 12. I’ll be using N = 2^{32} in all of the following examples.
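We can check this example by modeling 32-bit unsigned addition with `%` in Python:

```python
N = 2 ** 32

def u32_add(a, b):
    # 32-bit unsigned addition: reduce to the representative in {0, ..., N-1}.
    return (a + b) % N

# [2948326455] + [1346640853] = [4294967308] = [12]
```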

But what about **signed integers**? Since -[*a*] = [-*a*] = [N-*a*], negative integers close to 0 will be stored as large positive numbers. For example we have -[1] = [4294967295], so a computer would store -1 as 4294967295.
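The identity -[*a*] = [N-*a*] is again easy to verify in Python (the helper name is illustrative):

```python
N = 2 ** 32

def neg(a):
    # -[a] = [N - a]: the unsigned bit pattern that represents -a.
    return (N - a) % N

# neg(1) is 4294967295, and adding 1 back to it wraps to 0,
# confirming that it really behaves like -1.
```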

Because the arithmetic is defined on congruence classes rather than on individual integers, we see immediately that the range of integers we represent on a computer can be chosen arbitrarily. Commonly we choose to work with the numbers [0, N-1] or [-N/2, N/2-1], which gives us the unsigned and signed integers, respectively, but we might as well choose any other range that suits our needs, like [-100, N-101], as long as it contains exactly one representative of each of the N classes. As far as the ring operations of addition, subtraction and multiplication are concerned it really doesn’t matter. (Operations not generally defined on rings, like division or order, are a different matter.)
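Picking the representative of [*a*] that falls in an arbitrary range [lo, lo+N-1] is a small computation; a sketch (the function is hypothetical, not a standard API):

```python
def representative(a, n, lo):
    """The unique member of [a] modulo n lying in [lo, lo + n - 1]."""
    return (a - lo) % n + lo

N = 2 ** 32
# The same class viewed through different ranges:
#   unsigned range [0, N-1]:        -1 is stored as 4294967295
#   signed range [-N/2, N/2-1]:     4294967295 reads back as -1
#   the unusual range [-100, N-101] also contains -1 itself.
```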

The benefit of this theoretical approach is that we can immediately apply well-known results from mathematics. For example, every integer in {1, 2, …, N-1} has a multiplicative inverse modulo N provided it has no common factors with N. For our computers with N = 2^{m} this means exact division by an odd integer can be performed as multiplication by another. For example, to divide by 3 you can multiply by -1431655765: -1431655765*15 = 5. (The trick gives the true quotient only when the division is exact, i.e. when the dividend is a multiple of the divisor.)
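We can verify the inverse with Python’s built-in modular inverse, `pow(x, -1, n)` (available since Python 3.8):

```python
N = 2 ** 32

# The multiplicative inverse of 3 modulo 2**32.
inv3 = pow(3, -1, N)

# -1431655765 names the same congruence class as inv3,
# and multiplying by it divides exact multiples of 3.
```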

Notice that we haven’t made any assumptions about how computers actually store numbers as bits, or about how signed integers could be encoded. We only assumed that the number of bits available is finite and that addition and multiplication should be defined in a sane way. In upcoming posts I’ll discuss how this approach links to the common 2’s complement representation of signed integers, and how the theory behind arbitrarily sized integers (bignums) could be explained.
