Lately we’ve discussed how integers work in computers when their size is limited to a fixed number of bits. What if there is no such limitation and we can use as many bits as we want?

We’d like to approach this case by looking at what happens when we add more and more bits to the binary representation. For example, with 2 bits, -1 = 11_{2} because 1+11_{2} = 100_{2}, and 100_{2} ≡ 0 (mod 2²). When you move to 3 bits and beyond, a pattern emerges:

- m=3: -1 = 111 because 111 + 1 = 1000 ≡ 0 (mod 2³)
- m=4: -1 = 1111 because 1111 + 1 = 10000 ≡ 0 (mod 2⁴)
- m=5: -1 = 11111 because 11111 + 1 = 100000 ≡ 0 (mod 2⁵)
- m=6: -1 = 111111 because 111111 + 1 = 1000000 ≡ 0 (mod 2⁶)
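The pattern is easy to check mechanically. A small Python sketch (not from the original post) confirming that the m-bit all-ones string behaves as -1 for every width m:

```python
# The m-bit all-ones pattern acts as -1 because adding 1 to it
# gives 2**m, which is congruent to 0 modulo 2**m.
for m in range(2, 7):
    all_ones = 2**m - 1            # e.g. m = 3 gives 0b111
    assert (all_ones + 1) % 2**m == 0
    print(f"m={m}: -1 = {all_ones:b}")
```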

So, if we have an infinite number of bits, would -1 be …111111_{2}? Can a number that extends infinitely to the left, *an infinitely large number*, have any meaning?

It turns out that, given the right definitions, it can.

### Introduction

Recall that m-bit integers are integers modulo 2^{m}. This means that two m-bit integers are equal if their difference is a multiple of 2^{m}; for example, -1 equals 2^{m}-1. When we use m+1 bits this changes: the numbers are equal if their difference is a multiple of 2^{m+1}. To make the jump from m bits to m+1 bits we define a new concept of distance that makes two numbers “close” if their difference is divisible by a high power of two: the greater the power, the closer they are. Specifically, if the highest power of two dividing the difference is 2^{k}, we make the distance 2^{-k}. This distance makes m-bit numbers approximations to (m+1)-bit numbers: 1111_{2} is a good approximation to -1 because their distance is 2^{-4} = 1/16. Adding one bit, 11111_{2} is even better: its distance to -1 is 2^{-5} = 1/32, twice as close.

It turns out that this way of defining a distance between two numbers, though it sounds strange at first, fulfills all the usual requirements we ask of distance functions (called metrics in mathematics). Having a metric makes it meaningful to talk about an extension of the integers where numbers like …111111_{2} arise as limits of sequences such as (1_{2}, 11_{2}, 111_{2}, 1111_{2}, …)^{(1)}. We call this extended number set the **2-adic integers**^{(2)}.
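As a concrete illustration of this metric, here is a Python sketch (the function name `dist2` is my own) of the 2-adic distance, showing that each extra 1-bit halves the distance to -1:

```python
def dist2(a, b):
    """2-adic distance between integers: 2**(-k), where 2**k is the
    largest power of two dividing a - b; distance 0 when a == b."""
    d = abs(a - b)
    if d == 0:
        return 0.0
    k = (d & -d).bit_length() - 1   # exponent of the largest power of 2 dividing d
    return 2.0 ** -k

print(dist2(0b1111, -1))    # 0.0625  (= 1/16)
print(dist2(0b11111, -1))   # 0.03125 (= 1/32)
```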

### Arithmetic

Arithmetic with these “infinite” numbers works pretty much like you would expect. You add and multiply finite integers just like before. To add infinite integers you use the exact same methods. Since there is an infinite number of digits you can never finish, but you can calculate the result to any number of digits you like. If this seems objectionable, consider a more familiar case: calculating 1/3 in decimal gives 0.333333…; you never finish, but you can compute as many digits as you like.

For reference, calculating the sum of two 2-adic integers is simple (digit sums greater than 1 are resolved by carrying, just as in ordinary binary addition):

\[
\sum_{k=0}^\infty a_k 2^k + \sum_{k=0}^\infty b_k 2^k = \sum_{k=0}^\infty (a_k+b_k) 2^k
\]
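To make the “calculate as many digits as you like” idea concrete, here is a Python sketch (the digit-stream representation, a 2-adic integer as a function returning its k-th binary digit, is my own device, not from the post) of digit-by-digit addition with carrying:

```python
def add_2adic(a, b, n):
    """First n digits (least significant first) of the sum of two 2-adic
    integers given as digit callables a(k), b(k). Carries are propagated
    just as in ordinary binary addition."""
    digits, carry = [], 0
    for k in range(n):
        s = a(k) + b(k) + carry
        digits.append(s & 1)
        carry = s >> 1
    return digits

minus_one = lambda k: 1                  # ...111111
one = lambda k: 1 if k == 0 else 0       # ...000001
print(add_2adic(minus_one, one, 8))      # [0, 0, 0, 0, 0, 0, 0, 0]: -1 + 1 = 0
```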

Calculating the product is slightly more complicated since you have to collect the powers of two:

\[
\left(\sum_{i=0}^\infty a_i 2^i \right)\left(\sum_{i=0}^\infty b_i 2^i\right) = \sum_{k=0}^\infty \left(\sum_{i+j=k} a_i b_j \right) 2^k
\]
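The same digit-stream idea works for the product; a sketch under the same assumed representation, where the inner sum collects the terms with i+j = k:

```python
def mul_2adic(a, b, n):
    """First n digits of the product of two 2-adic integers given as
    digit callables; the inner sum collects a_i * b_j with i + j == k,
    and excess is carried to higher digits."""
    digits, carry = [], 0
    for k in range(n):
        s = carry + sum(a(i) * b(k - i) for i in range(k + 1))
        digits.append(s & 1)
        carry = s >> 1
    return digits

third = lambda k: 1 if k == 0 or k % 2 == 1 else 0   # ...0101011, i.e. "1/3"
three = lambda k: 1 if k in (0, 1) else 0            # 11
print(mul_2adic(third, three, 8))   # [1, 0, 0, 0, 0, 0, 0, 0]: the product is 1
```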

### Example

To demonstrate, let’s calculate the sum of …111101_{2} and 3 = 11_{2}, and the product of …0101011_{2} and 3:

```
  ...111111111111111111111101
+ ...000000000000000000000011
= ...000000000000000000000000 = 0

  ...10101010101010101011
× ...00000000000000000011
= ...10101010101010101011
+ ...01010101010101010110
= ...00000000000000000001 = 1
```

It turns out that the sum is 0, so …111101_{2} = -3. Also, the product is 1, so in a sense …0101011_{2} = 1/3.
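Since truncating a 2-adic integer to n bits is just working modulo 2^{n}, both results can be checked with ordinary Python integers (this uses the three-argument `pow` with exponent -1, available since Python 3.8, to produce the truncation of 1/3):

```python
n = 24
x = 2**n - 3                 # the n-bit truncation of ...111101
assert (x + 3) % 2**n == 0   # so x represents -3
print(bin(x))                # 0b111111111111111111111101

y = pow(3, -1, 2**n)         # the n-bit truncation of ...0101011
assert (y * 3) % 2**n == 1   # so y represents 1/3
print(bin(y))                # 0b101010101010101010101011
```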

### Representation of negative integers

In the example we found a way to represent -3 using binary digits. Let’s see if we can generalize this.

Suppose x is a positive integer and we want to find a representation for -x. One way to approach this problem is using the identity …111111_{2} = -1 from our introduction: -x = -1-(x-1) = (…111111_{2}-x)+1. The subtraction is easy to calculate because all the bits in …111111_{2} are 1: there is never a need to borrow.

In fact, you can see …111111-x as the bitwise complement operation “~x” which returns x with all of its bits flipped: where x has a 1, …111111-x has 1-1 = 0, and where x has a 0, …111111-x has 1-0 = 1. Therefore the 2’s complement rule extends to integers in general: again we have -x = ~x+1.
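Incidentally, Python’s unbounded integers already follow this convention for bitwise operators: a negative number behaves as if it carried infinitely many leading 1-bits, so the rule can be checked directly:

```python
# Python treats a negative int as an infinite 2's complement bit string,
# so the rule -x == ~x + 1 holds with no fixed bit width at all.
for x in (1, 3, 1234, 2**100 + 7):
    assert ~x + 1 == -x

# Masking exposes the low bits of the infinite representation:
print(bin(-3 & 0xFF))   # 0b11111101: the low bits of ...111101
```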

For example, 1234 = 10011010010_{2}, so -1234 is …111101100101110_{2}:

```
-1234 = ...111111111111111
      - ...000010011010010
      + 1
      = ...111101100101101 + 1
      = ...111101100101110

To verify:
  ...111101100101110
+        10011010010
= ...000000000000000 = 0
```

### Practical applications

Suppose you are creating a program that requires arbitrarily large integers, and for whatever reason you don’t want to use an existing “bigint” implementation, such as `BigInteger` in Java or .NET. Like the people who design computer hardware, you have to decide how to represent signed integers. I feel that the 2-adic representation is optimal for this purpose because it extends the well-known 2’s complement method in a natural way, and because approaching a problem from a larger perspective often lets you use more powerful tools, even if those tools make no sense in the original problem domain.

This is a common pattern in mathematics: to solve a problem you may have to express it in a more general setting (for example, generalized Fourier transforms), and then come full circle when you apply the results in the original setting.

One example of this is the calculation of multiplicative inverses. Remember that in m-bit integers every odd number has a multiplicative inverse: for each odd integer x there is an integer x^{-1} that fulfills x^{-1}x = 1. Working with m bits, that is, in integers modulo 2^{m}, the inverse can be calculated using the extended Euclidean algorithm; but if we consider the larger setting of 2-adic integers, a result called Hensel’s Lemma implies the inverse can be calculated using Newton’s method, much faster than with the Euclidean algorithm.
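A sketch of this Newton iteration in Python (the function name is my own; each step doubles the number of correct low-order bits, which is exactly the Hensel lifting at work):

```python
def inv_mod_pow2(x, m):
    """Inverse of odd x modulo 2**m via Newton's iteration y <- y*(2 - x*y).
    If x*y ≡ 1 (mod 2**k), then the updated y satisfies x*y ≡ 1 (mod 2**2k),
    so only about log2(m) iterations are needed."""
    assert x % 2 == 1, "only odd numbers are invertible modulo 2**m"
    y = 1                                  # correct to 1 bit: x*1 ≡ 1 (mod 2)
    bits = 1
    while bits < m:
        bits *= 2
        y = y * (2 - x * y) % 2**bits      # now correct to `bits` bits
    return y % 2**m

m = 64
y = inv_mod_pow2(1234567, m)
assert (y * 1234567) % 2**m == 1
```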

#### Footnotes

1. This is the same process we use to define the real numbers using sequences of rational numbers, except that in this case the numbers we construct extend infinitely to the left rather than to the right.
2. You can do this derivation in any base p, giving the *p-adic integers*. For everything to work correctly, p has to be a prime number.
