
## Ackermann function



In computability theory, the **Ackermann function**, named after Wilhelm Ackermann, is one of the simplest and earliest-discovered examples of a total computable function that is not primitive recursive. All primitive recursive functions are total and computable, but the Ackermann function illustrates that not all total computable functions are primitive recursive.

After Ackermann's publication of his function (which had three non-negative integer arguments), many authors modified it to suit various purposes, so that today "the Ackermann function" may refer to any of numerous variants of the original function. One common version is the two-argument **Ackermann–Péter function** developed by Rózsa Péter and Raphael Robinson. Its value grows very rapidly; for example, $A(4, 2)$ results in $2^{65536} - 3$, an integer of 19,729 decimal digits.^{[3]}

### History

In the late 1920s, the mathematicians Gabriel Sudan and Wilhelm Ackermann, students of David Hilbert, were studying the foundations of computation. Both Sudan and Ackermann are credited with discovering total computable functions (termed simply "recursive" in some references) that are not primitive recursive. Sudan published the lesser-known Sudan function, then shortly afterwards and independently, in 1928, Ackermann published his function $\varphi$ (the Greek letter *phi*). Ackermann's three-argument function, $\varphi(m, n, p)$, is defined such that for $p = 0, 1, 2$, it reproduces the basic operations of addition, multiplication, and exponentiation as

$$\varphi(m, n, 0) = m + n, \qquad \varphi(m, n, 1) = m \cdot n, \qquad \varphi(m, n, 2) = m^n,$$

and for *p* > 2 it extends these basic operations in a way that can be compared to the hyperoperations:

$$\varphi(m, n, 3) = m[4](n + 1), \qquad \varphi(m, n, p) \gtrapprox m[p + 1](n + 1) \quad \text{for } p > 3.$$

(Aside from its historic role as a total-computable-but-not-primitive-recursive function, Ackermann's original function is seen to extend the basic arithmetic operations beyond exponentiation, although not as seamlessly as do variants of Ackermann's function that are specifically designed for that purpose, such as Goodstein's hyperoperation sequence.)

In *On the Infinite*, David Hilbert hypothesized that the Ackermann function was not primitive recursive, but it was Ackermann, Hilbert's personal secretary and former student, who actually proved the hypothesis in his paper *On Hilbert's Construction of the Real Numbers*.

Rózsa Péter and Raphael Robinson later developed a two-variable version of the Ackermann function that became preferred by almost all authors.

The generalized hyperoperation sequence, e.g. $G(m, a, b) = a[m]b$, is a version of the Ackermann function as well.

In 1963 R.C. Buck based an intuitive two-variable ^{[n 1]} variant $\operatorname{F}$ on the hyperoperation sequence:

$$\operatorname{F}(m, n) = 2[m]n.$$

Compared to most other versions, Buck's function has no unessential offsets:

$$\operatorname{F}(0, n) = n + 1, \qquad \operatorname{F}(1, n) = 2 + n, \qquad \operatorname{F}(2, n) = 2n, \qquad \operatorname{F}(3, n) = 2^n, \qquad \operatorname{F}(4, n) = 2 \uparrow\uparrow n, \qquad \ldots$$
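Buck's variant can be computed directly from the standard hyperoperation recursion. The following Python sketch assumes Buck's form $\operatorname{F}(m, n) = 2[m]n$; the names `hyper` and `buck_F` are illustrative, not from the source:

```python
def hyper(a, m, b):
    """Hyperoperation a[m]b: succession, addition, multiplication,
    exponentiation, tetration, ... via the standard recursion."""
    if m == 0:
        return b + 1          # a[0]b = b + 1 (succession)
    if b == 0:
        if m == 1:
            return a          # a[1]0 = a
        if m == 2:
            return 0          # a[2]0 = 0
        return 1              # a[m]0 = 1 for m >= 3
    return hyper(a, m - 1, hyper(a, m, b - 1))

def buck_F(m, n):
    """Buck's two-variable variant, assumed here as F(m, n) = 2[m]n."""
    return hyper(2, m, n)
```

With this form, `buck_F(3, n)` is exactly $2^n$ and `buck_F(4, n)` is the tetration $2 \uparrow\uparrow n$, with no additive offsets.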

Many other versions of the Ackermann function have been investigated.

### Definition

### Definition: as m-ary function

Ackermann's original three-argument function $\varphi(m, n, p)$ is defined recursively as follows for nonnegative integers $m$, $n$, and $p$:

$$\varphi(m, n, 0) = m + n$$
$$\varphi(m, 0, 1) = 0$$
$$\varphi(m, 0, 2) = 1$$
$$\varphi(m, 0, p) = m \quad \text{for } p > 2$$
$$\varphi(m, n, p) = \varphi(m, \varphi(m, n - 1, p), p - 1) \quad \text{for } n > 0 \text{ and } p > 0$$
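The recursion transcribes into Python almost line by line. This is an illustrative sketch (the name `phi` is ours), usable only for tiny arguments since the values explode:

```python
def phi(m, n, p):
    """Ackermann's original three-argument function (direct transcription
    of the recursion; not optimized)."""
    if p == 0:
        return m + n              # phi(m, n, 0) = m + n
    if n == 0:
        if p == 1:
            return 0              # phi(m, 0, 1) = 0
        if p == 2:
            return 1              # phi(m, 0, 2) = 1
        return m                  # phi(m, 0, p) = m for p > 2
    # phi(m, n, p) = phi(m, phi(m, n-1, p), p-1) for n, p > 0
    return phi(m, phi(m, n - 1, p), p - 1)
```

For example, `phi(2, 3, 0)`, `phi(2, 3, 1)`, and `phi(2, 3, 2)` give $2+3=5$, $2\cdot3=6$, and $2^3=8$, matching addition, multiplication, and exponentiation.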

Of the various two-argument versions, the one developed by Péter and Robinson (called "the" Ackermann function by most authors) is defined for nonnegative integers $m$ and $n$ as follows:

$$A(0, n) = n + 1$$
$$A(m + 1, 0) = A(m, 1)$$
$$A(m + 1, n + 1) = A(m, A(m + 1, n))$$
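The two-argument definition also transcribes directly into code. A minimal Python sketch (the name `ack` is ours; practical only for $m \le 3$, since higher rows exhaust any realistic call stack):

```python
def ack(m, n):
    """Peter-Robinson two-argument Ackermann function, transcribed
    from the three defining equations (no memoization)."""
    if m == 0:
        return n + 1                  # A(0, n) = n + 1
    if n == 0:
        return ack(m - 1, 1)          # A(m+1, 0) = A(m, 1)
    return ack(m - 1, ack(m, n - 1))  # A(m+1, n+1) = A(m, A(m+1, n))
```

Even this tiny function illustrates the explosive growth: `ack(3, 3)` is 61, while `ack(4, 2)` already has 19,729 digits and is far beyond naive recursion.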

The Ackermann function has also been expressed in relation to the hyperoperation sequence:

$$A(m, n) = 2[m](n + 3) - 3$$

- or, written in Knuth's up-arrow notation (extended to integer indices $\ge -2$): $A(m, n) = 2 \uparrow^{m-2} (n + 3) - 3$
- or, equivalently, in terms of Buck's function F: $A(m, n) = \operatorname{F}(m, n + 3) - 3.$
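For small $m$ these relations reduce to simple closed forms, which can be cross-checked against the recursive definition. A quick Python sketch (assuming the $2[m](n+3) - 3$ identity; the name `ack` is ours):

```python
def ack(m, n):
    # Recursive Peter-Robinson definition, used here for cross-checking.
    if m == 0:
        return n + 1
    if n == 0:
        return ack(m - 1, 1)
    return ack(m - 1, ack(m, n - 1))

# Closed forms implied by A(m, n) = 2[m](n + 3) - 3 for the first rows:
for n in range(6):
    assert ack(0, n) == n + 1             # 2[0](n+3) - 3 = (n+4) - 3
    assert ack(1, n) == n + 2             # 2 + (n+3) - 3
    assert ack(2, n) == 2 * n + 3         # 2*(n+3) - 3
    assert ack(3, n) == 2 ** (n + 3) - 3  # 2^(n+3) - 3
```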

### Definition: as iterated 1-ary function

Define $f^n$ as the *n*-th iterate of $f$:

$$f^0 = \operatorname{id}, \qquad f^{n+1} = f \circ f^n.$$

Iteration is the process of composing a function with itself a certain number of times. Function composition is an associative operation, so $f^{n+1} = f \circ f^n = f^n \circ f$.

Conceiving the Ackermann function as a sequence of unary functions, one can set $A_m(n) = A(m, n)$.

The function then becomes a sequence of unary^{[n 2]} functions, defined from iteration:

$$A_0(n) = n + 1,$$
$$A_{m+1}(n) = A_m^{\,n+1}(1).$$
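The iterative characterization can be transcribed almost verbatim. In this Python sketch, `A_(m)` returns the unary function $A_m$ (the names `iterate` and `A_` are ours):

```python
def iterate(f, k, x):
    """Apply f to x exactly k times: f^k(x)."""
    for _ in range(k):
        x = f(x)
    return x

def A_(m):
    """The unary function A_m of the sequence defined above."""
    if m == 0:
        return lambda n: n + 1                         # A_0(n) = n + 1
    return lambda n: iterate(A_(m - 1), n + 1, 1)      # A_{m+1}(n) = A_m^{n+1}(1)
```

For instance, `A_(2)(3)` iterates $A_1(n) = n + 2$ four times starting from 1, giving 9, in agreement with $A(2, 3) = 9$.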

### Computation

The recursive definition of the Ackermann function can naturally be transposed to a term rewriting system (TRS).

### TRS, based on 2-ary function

The definition of the **2-ary** Ackermann function leads to the obvious reduction rules

- (r1) $A(0, n) \rightarrow n + 1$
- (r2) $A(m + 1, 0) \rightarrow A(m, 1)$
- (r3) $A(m + 1, n + 1) \rightarrow A(m, A(m + 1, n))$

**Example**

Compute $A(1, 2)$.

The reduction sequence is^{[n 3]}

Leftmost-outermost (one-step) strategy:

$$A(1,2) \rightarrow_{r3} A(0, A(1,1)) \rightarrow_{r1} A(1,1) + 1 \rightarrow_{r3} A(0, A(1,0)) + 1 \rightarrow_{r1} (A(1,0) + 1) + 1 \rightarrow_{r2} (A(0,1) + 1) + 1 \rightarrow_{r1} (2 + 1) + 1 = 4$$

Leftmost-innermost (one-step) strategy:

$$A(1,2) \rightarrow_{r3} A(0, A(1,1)) \rightarrow_{r3} A(0, A(0, A(1,0))) \rightarrow_{r2} A(0, A(0, A(0,1))) \rightarrow_{r1} A(0, A(0, 2)) \rightarrow_{r1} A(0, 3) \rightarrow_{r1} 4$$

To compute $A(m, n)$ one can use a stack, which initially contains the elements $\langle m, n \rangle$.

Then repeatedly the two top elements are replaced according to the rules^{[n 4]}

$$\langle \ldots,\, 0,\, n \rangle \rightarrow \langle \ldots,\, n + 1 \rangle \quad (r1)$$
$$\langle \ldots,\, m + 1,\, 0 \rangle \rightarrow \langle \ldots,\, m,\, 1 \rangle \quad (r2)$$
$$\langle \ldots,\, m + 1,\, n + 1 \rangle \rightarrow \langle \ldots,\, m,\, m + 1,\, n \rangle \quad (r3)$$

Schematically, starting from $\langle m, n \rangle$:

```
WHILE stackLength <> 1 {
    POP 2 elements;
    PUSH 1 or 2 or 3 elements, applying the rules r1, r2, r3
}
```

The pseudocode is published in Grossman & Zeitman (1988).
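Following the pseudocode, the stack-based computation can be sketched compactly in Python; a list serves as the stack, and this is our own transcription, not Grossman & Zeitman's published code:

```python
def ack_stack(m, n):
    """Stack-based computation of the two-argument Ackermann function,
    following the POP-2 / PUSH-1-or-2-or-3 scheme above."""
    stack = [m, n]
    while len(stack) != 1:
        n = stack.pop()   # top element
        m = stack.pop()   # element below it
        if m == 0:
            stack.append(n + 1)              # r1: <0, n>     -> <n+1>
        elif n == 0:
            stack.extend([m - 1, 1])         # r2: <m+1, 0>   -> <m, 1>
        else:
            stack.extend([m - 1, m, n - 1])  # r3: <m+1, n+1> -> <m, m+1, n>
    return stack[0]
```

Unlike the recursive version, this implementation is bounded only by memory, not by the interpreter's recursion limit; the stack itself, however, can still grow to length $A(m, n)$.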

For example, on input $\langle 2, 1 \rangle$ the successive stack configurations reflect the reduction^{[n 5]}

$$\langle 2,1\rangle \rightarrow \langle 1,2,0\rangle \rightarrow \langle 1,1,1\rangle \rightarrow \langle 1,0,1,0\rangle \rightarrow \langle 1,0,0,1\rangle \rightarrow \langle 1,0,2\rangle \rightarrow \langle 1,3\rangle$$
$$\rightarrow \langle 0,1,2\rangle \rightarrow \langle 0,0,1,1\rangle \rightarrow \langle 0,0,0,1,0\rangle \rightarrow \langle 0,0,0,0,1\rangle \rightarrow \langle 0,0,0,2\rangle \rightarrow \langle 0,0,3\rangle \rightarrow \langle 0,4\rangle \rightarrow \langle 5\rangle$$

**Remarks**

- The leftmost-innermost strategy is implemented in 225 computer languages on Rosetta Code.
- For all $m, n$ the computation of $A(m, n)$ takes no more than $(A(m, n) + 1)^m$ steps.
- Grossman & Zeitman (1988) pointed out that in the computation of $A(m, n)$ the maximum length of the stack is $A(m, n)$, as long as $m > 0$.

- Their own algorithm, inherently iterative, computes $A(m, n)$ within $\mathcal{O}(m A(m, n))$ time and within $\mathcal{O}(m)$ space.

### TRS, based on iterated 1-ary function

The definition of the iterated **1-ary** Ackermann functions leads to different reduction rules

- (r4) $A_0(n) \rightarrow n + 1$
- (r5) $A_{m+1}(n) \rightarrow A_m^{\,n+1}(1)$
- (r6) $A_m^{\,n+1}(x) \rightarrow A_m(A_m^{\,n}(x))$, with $A_m^{\,0}(x) \rightarrow x$

As function composition is associative, instead of rule r6 one can define

- (r7) $A_m^{\,n+1}(x) \rightarrow A_m^{\,n}(A_m(x))$

Like in the previous section, the computation of $A_m(n)$ can be implemented with a stack.

Initially the stack contains the three elements $\langle 1, m, n \rangle$.

Then repeatedly the three top elements are replaced according to the rules^{[n 4]}