# Real Numbers - Why? Why not computable numbers?

Do we have any mathematicians in the house? I've been wondering for a while why math usually centers on the real numbers instead of the computable numbers, that is, the set of numbers that can actually be computed to arbitrary finite precision in finite time. Note that they necessarily include pi, e, sqrt(2), and every number you could ever compute. If you've seen it, it's computable.

What do we lose, beyond Cantor's argument, by restricting math to computable numbers? Since binary files, and therefore algorithms, are countable, the computable numbers are countable too, unlike the reals.

Bonus points if you can name a real, non-computable number. (My partner replied with "a number obtained by randomly sampling decimal places ad infinitum", a reply as cheeky as the question.) Also bonus points for naming further niceness properties we would gain by restricting to the computables.

I've read the Wikipedia article on computable numbers and a bit beyond.

I'm not a mathematician, but I would guess this has something to do with why constructive math isn't used by default. You lose proof techniques like proof by contradiction, and the meaning of proofs becomes slightly different: you prove that an entity can be constructed rather than merely that it exists.

It is also not as practical as you might think. Proving that an algorithm exists to compute a number doesn't mean that computing it is at all practical. It might take more digits than there are atoms in the universe. So you're limiting your proof techniques without gaining much.

By the way, arbitrary-precision math libraries do exist on computers. For example, the Go language has one in its standard library (math/big). But they are considerably slower, so they are only used when necessary.
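To illustrate the idea (using Python's standard `decimal` module here rather than Go's, purely as a convenient sketch), you can dial the precision up as far as you like and compute more digits on demand:

```python
from decimal import Decimal, getcontext

# Request 50 significant digits; any finite precision works,
# it just costs more time and memory.
getcontext().prec = 50
root2 = Decimal(2).sqrt()
print(root2)
```

This is exactly the "arbitrary, finite precision in finite time" notion from the original question, realized in software.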

The point of constructive proofs in my view is to provide structure to work from, and build intuition about concepts that are not intuitive in other forms. It is trivial to construct the reals using a technique similar to metric spaces, as mentioned elsewhere in this discussion. The reason reals are used is because once you get to the rationals it's natural to ask what

`x*x = 2`

is.

But why reals and not computables, then? sqrt(2) is perfectly computable; I don't need to invoke the real numbers for that.

From the rest of the discussion, the reason seems to be "because once you get to computables, it's 'natural' to ask what the supremum of a Specker sequence is", but until yesterday that question had never occurred to me.

Maybe it's just the default? If the usual axioms don't rule out non-computable numbers, then you'd need some extra rule forbidding them.

To limit ourselves to computable numbers, it seems like we would need to prove that the number we're considering is computable in each specific case. It would be part of proving a number exists. Seems tedious?

If you see my reply, it's not so tedious, but not much is gained. It might be interesting to see how far you can get with computable numbers, but ~~I think they're equivalent to the reals from a measure theory perspective~~ it seems this was wrong.

I think that constructing the reals is a general consequence of asking how you might approximate the solutions of

`x*x = 2`

Consider the sequence whose nth term is an approximation of `x` accurate to `n` digits. This is a Cauchy sequence with limiting value `√2`. If you construct all Cauchy sequences on the rationals and take equivalence classes of sequences whose limiting values are equal, you get the reals. Some of those Cauchy sequences are certainly equivalent to computing some number, where the nth term in the sequence is the nth step of a computation of a more accurate answer. It doesn't feel very natural to work in a set of numbers generated from a subset of all the Cauchy sequences on the rationals, unless some property of those numbers (in this case computability) is useful. In fact, it's trivial to construct all the computable numbers by doing what I just suggested, but I don't see what is gained.

Edit: I should point out that these are not really Cauchy sequences, but only because we're still working in the rationals and haven't yet defined the real numbers, so we can't use the standard definition of a metric space. If you make a special version of metric spaces over the rationals, it's enough to get you to constructing the reals in this way.
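A minimal sketch of such a sequence in Python, using `fractions.Fraction` so every term stays inside the rationals (Newton's method is used here as one convenient choice of approximation scheme):

```python
from fractions import Fraction

def sqrt2_iterate(n):
    """nth Newton iterate for x*x = 2; every term is an exact rational."""
    x = Fraction(2)
    for _ in range(n):
        x = (x + 2 / x) / 2  # Newton step: x -> (x + 2/x) / 2
    return x
```

The iterates 2, 3/2, 17/12, 577/408, ... form a Cauchy sequence of rationals whose limit, √2, is not itself rational, which is exactly the gap the construction above fills.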

I'm curious if there is any interesting relation between these sequences of successive approximations and the table maker's dilemma or is that just about computing digits?

I guess if you cared not just about computability but also about algorithmic complexity, you might have trouble with numbers defined by sequences that converge very slowly? It seems like another case of confusing a mathematical notion like computability with the practical issues of calculation. In some sense we don't really "know" a number if we can't calculate a reasonable approximation of it, but that seems to have little to do with what you can prove about it via symbol manipulation.

And algorithmic complexity, although useful, is not quite about practical calculation either. Constant terms matter!

Since this is an equivalence class on the limiting values of infinite sequences, you most likely will not be able to recover that kind of information; the mapping is not bijective. The most you gain is knowing that there is some notion of "the result of this proof is computable."

The question should be more: why work in the computable numbers? What do you gain?

For the most part computable numbers are fine. They form a field, so they are closed under addition and multiplication (and division by nonzero elements).

You do lose some real analysis tools. For instance, the infimum and supremum of certain computable sequences can be shown to solve the halting problem, so the computable numbers are not closed under taking infima and suprema.

All real numbers are computable. A real number is defined as the limit of a sequence of rational numbers (there's more to it than that, but that's the gist of it), so you can just follow that sequence until you have as many digits as you need.

That's not quite what that means.

Just because you have a Cauchy sequence of rational numbers doesn't mean you have a practical algorithmic way to determine when you have accurately produced the nth digit of its limit.

And conversely, almost all infinite sequences of digits *cannot* be produced by a machine, i.e. by computing. You can get that result from a simple counting argument.

I can walk through the construction of a Specker sequence if anyone is interested, but it will take time to type out some background.

From glancing at the Wikipedia article on Specker sequences, the idea seems to be mutilating the halting problem to construct an uncomputable limit of a computable sequence. The nth digit converges to 4 only if the nth Turing machine applied to input 1 (is that what {n}(n) means?) halts in finitely many steps. So to compute the limit is to solve the halting problem; ergo, an uncomputable number.
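A toy sketch of the shape of that construction in Python. Here each "machine" is replaced by a known halting time (or None for never halting), which is exactly the information a real Specker sequence does not hand you, so this is purely an illustration, not the real thing:

```python
from fractions import Fraction

# Hypothetical halting behaviour: machine i halts after halt_time[i] steps,
# None meaning it never halts. In the genuine construction these would be
# actual Turing machines, and this table would be uncomputable.
halt_time = [3, None, 1, None, 7]

def specker_term(n):
    """nth term: sum of 2**-i over machines i that have halted within n steps."""
    return sum(Fraction(1, 2**i)
               for i, t in enumerate(halt_time)
               if t is not None and t <= n)
```

Each term is a computable rational, and the sequence is nondecreasing and bounded, so it converges; but knowing the limit to sufficient precision would tell you exactly which machines halt.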

Boy is it a useless number though.

Well, the halting problem is the very question of computability, so I wouldn't expect a concrete example to be constructed any other way.

True, but it feels *weird*. In order to compute the next (nth) digit of a number (an operation that needs to finish in finite time for the number to be computable), I essentially ask "does this Turing machine halt given this input?", no sequences needed. That doesn't obfuscate the actual crux of the matter as much, imo.

So, an uncomputable number is just a different manifestation of the halting problem?

I would say you can use the halting problem to construct an uncomputable number. Every component is computable, but the result isn't, similar to how you drop out of the rational numbers when you solve x*x = 2.

That doesn't mean there can't be uncomputable numbers that don't require the halting problem. Or maybe there are numbers we don't know to be computable. "The lowest natural number > 5 that is part of a cycle in the Collatz conjecture" might be a perfectly alright number, or maybe it isn't, because it doesn't exist.

Would you believe that it's been proven that there are far more *uncomputable* numbers than computable numbers?

The basic idea is that computer programs (or even just all possible strings over some character set) are *countable*, that is, they can be mapped to the integers. There are infinitely many of them, but it's *countably* infinite. The real numbers can't be counted in this way; there are different kinds of infinity, and theirs is a larger one.

What are all these other numbers that have no associated computer program? Basically, they are patternless, incompressible, random strings of digits. You can't write a finite-length computer program to generate an infinite stream of truly random digits.

It's not really a problem in practice, just fun to think about.

Even more strictly, I wonder if the rationals are sufficient, because that's all one can compute in finite time or with finite energy/negentropy.

EDIT: Found a very interesting discussion on stackexchange https://cstheory.stackexchange.com/questions/16512/is-it-possible-to-test-if-a-computable-number-is-rational-or-integer/16515

With only rationals you can't even solve trivial polynomials like x^2 - 2 = 0

Which is, uh, pretty bad

Who cares, since we can never do better (only approximate solutions) in practice anyway? :-)

It comes down to a question of whether the aim is to describe a platonic world or to model physical phenomena. They are distinctly different, hence the surprise at "the unreasonable effectiveness of mathematics in the natural sciences": https://www.dartmouth.edu/~matc/MathDrama/reading/Wigner.html

In practical contexts, one never really needs the square root of (exactly) 2. One only needs to find the side length of a plot of land (approximately flat and square) whose area is (approximately) 2 sq. units, which itself is often an indirect proxy measure for something else. We never really care about the exact “square root of two”.

There are plenty of uses. For one thing, sqrt(2) isn't an approximation; it's exact. Even if we don't have a finite decimal representation... that doesn't matter? It doesn't make it an approximation.

And if you just go, "well, let's just say sqrt(2) = 1.41 and substitute that in", that, well, breaks everything. In fact, I'm pretty sure that would break every single field axiom. It's not very cool when sqrt(2)^2 != 2
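To make that breakage concrete (exact rational arithmetic in Python, so no floating-point noise is involved):

```python
from fractions import Fraction

approx = Fraction(141, 100)   # pretend sqrt(2) = 1.41
error = approx * approx - 2   # the "identity" sqrt(2)^2 = 2 now fails
```

The square comes out to 19881/10000, off by exactly 119/10000, and every identity built on sqrt(2)^2 = 2 inherits that error.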

In computing terms, rational numbers are basically being lazy about division and putting it off until the end, while floating point does approximate division eagerly. You could go further and be lazy about other calculations too, defining a number as any algorithm that returns a lazy stream of digits. But this is very slow, and worse, there is no limit on how much deferred calculation might be required to produce even the first digit. It's easier to reason about performance when common mathematical operations are O(1), as they are when numbers have a finite representation.
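One common way to phrase that lazy representation (a sketch under simplifying assumptions, not a production design): a number is a function from a precision request n to a rational within 2^-n of the true value, and arithmetic composes these functions, deferring all actual calculation:

```python
from fractions import Fraction

# A "number" is a function: n -> rational approximation within 2**-n.
def const(q):
    q = Fraction(q)
    return lambda n: q

def add(x, y):
    # To be within 2**-n of the sum, query each addend within 2**-(n+1),
    # since the two errors can add up.
    return lambda n: x(n + 1) + y(n + 1)
```

Note that deciding whether two such numbers are equal is undecidable in general, which is one concrete cost of this representation.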

To add to this thought, it may be interesting to consider an expression the way a CAS would: as a tree of functions with inputs. Evaluation is then a post-order traversal of the tree.
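A minimal sketch of that idea in Python, where a node is either a plain number or a hypothetical `(op, left, right)` triple:

```python
import operator

OPS = {"+": operator.add, "*": operator.mul}

def evaluate(node):
    """Post-order traversal: evaluate both children, then apply the operator."""
    if isinstance(node, tuple):
        op, left, right = node
        return OPS[op](evaluate(left), evaluate(right))
    return node  # leaf: a plain number

tree = ("+", 1, ("*", 2, 3))  # the expression 1 + 2*3
```

Deferring evaluation of the tree is the CAS analogue of the lazy digit streams discussed above: the structure is exact, and you only pay for calculation when you traverse it.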