
Ackermann function


In computability theory, the Ackermann function, named after Wilhelm Ackermann, is one of the simplest and earliest-discovered examples of a total computable function that is not primitive recursive. All primitive recursive functions are total and computable, but the Ackermann function illustrates that not all total computable functions are primitive recursive.

After Ackermann's publication of his function (which had three non-negative integer arguments), many authors modified it to suit various purposes, so that today "the Ackermann function" may refer to any of numerous variants of the original function. One common version is the two-argument Ackermann–Péter function developed by Rózsa Péter and Raphael Robinson. Its value grows very rapidly; for example, A(4, 2) results in 2^65536 − 3, an integer of 19,729 decimal digits.[3]

History

In the late 1920s, the mathematicians Gabriel Sudan and Wilhelm Ackermann, students of David Hilbert, were studying the foundations of computation. Both Sudan and Ackermann are credited with discovering total computable functions (termed simply "recursive" in some references) that are not primitive recursive. Sudan published the lesser-known Sudan function, then shortly afterwards and independently, in 1928, Ackermann published his function φ (the Greek letter phi). Ackermann's three-argument function, φ(m, n, p), is defined such that for p = 0, 1, 2, it reproduces the basic operations of addition, multiplication, and exponentiation as

\begin{aligned}
\varphi(m,n,0) &= m + n \\
\varphi(m,n,1) &= m \times n \\
\varphi(m,n,2) &= m^{n}
\end{aligned}

and for p > 2 it extends these basic operations in a way that can be compared to the hyperoperations:

\begin{aligned}
\varphi(m,n,3) &= m[4](n+1) \\
\varphi(m,n,p) &\gtrapprox m[p+1](n+1) && \text{for } p > 3
\end{aligned}

(Aside from its historic role as a total-computable-but-not-primitive-recursive function, Ackermann's original function is seen to extend the basic arithmetic operations beyond exponentiation, although not as seamlessly as do variants of Ackermann's function that are specifically designed for that purpose, such as Goodstein's hyperoperation sequence.)

In On the Infinite, David Hilbert hypothesized that the Ackermann function was not primitive recursive, but it was Ackermann, Hilbert's personal secretary and former student, who actually proved the hypothesis in his paper On Hilbert's Construction of the Real Numbers.

Rózsa Péter and Raphael Robinson later developed a two-variable version of the Ackermann function that became preferred by almost all authors.

The generalized hyperoperation sequence, e.g. G(m, a, b) = a[m]b, is a version of the Ackermann function as well.

In 1963 R. C. Buck based an intuitive two-variable[n 1] variant F on the hyperoperation sequence:

F(m, n) = 2[m]n.

Compared to most other versions, Buck's function has no unessential offsets:

\begin{aligned}
F(0,n) &= 2[0]n = n + 1 \\
F(1,n) &= 2[1]n = 2 + n \\
F(2,n) &= 2[2]n = 2 \times n \\
F(3,n) &= 2[3]n = 2^{n} \\
F(4,n) &= 2[4]n = \underbrace{2^{2^{\cdot^{\cdot^{\cdot^{2}}}}}}_{n\text{ twos}} \\
&\;\;\vdots
\end{aligned}

Many other versions of the Ackermann function have been investigated.

Definition

Definition: as m-ary function

Ackermann's original three-argument function φ(m, n, p) is defined recursively as follows for nonnegative integers m, n, and p:

\begin{aligned}
\varphi(m,n,0) &= m + n \\
\varphi(m,0,1) &= 0 \\
\varphi(m,0,2) &= 1 \\
\varphi(m,0,p) &= m && \text{for } p > 2 \\
\varphi(m,n,p) &= \varphi(m,\varphi(m,n-1,p),p-1) && \text{for } n, p > 0
\end{aligned}
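For concreteness, this recursion can be transcribed directly into Python; the sketch below is an illustration only (the name phi is not standard), and it is practical only for small arguments.

# A direct transcription of the recursion above; both the values and the
# recursion depth explode quickly.
def phi(m: int, n: int, p: int) -> int:
    if p == 0:
        return m + n
    if n == 0:
        return {1: 0, 2: 1}.get(p, m)   # phi(m,0,1)=0, phi(m,0,2)=1, phi(m,0,p)=m for p>2
    return phi(m, phi(m, n - 1, p), p - 1)

print(phi(2, 3, 1))   # 6  = 2 * 3
print(phi(2, 3, 2))   # 8  = 2 ** 3
print(phi(2, 3, 3))   # 65536 = 2[4]4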

Of the various two-argument versions, the one developed by Péter and Robinson (called "the" Ackermann function by most authors) is defined for nonnegative integers m and n as follows:

\begin{aligned}
A(0,n) &= n + 1 \\
A(m+1,0) &= A(m,1) \\
A(m+1,n+1) &= A(m,A(m+1,n))
\end{aligned}
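This two-argument recursion translates directly into code. The following Python sketch is a naive transcription, usable only for very small m because the results and the recursion depth grow explosively.

def ackermann(m: int, n: int) -> int:
    if m == 0:
        return n + 1                                    # A(0, n)     = n + 1
    if n == 0:
        return ackermann(m - 1, 1)                      # A(m+1, 0)   = A(m, 1)
    return ackermann(m - 1, ackermann(m, n - 1))        # A(m+1, n+1) = A(m, A(m+1, n))

print(ackermann(2, 3))   # 9
print(ackermann(3, 3))   # 61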

The Ackermann function has also been expressed in relation to the hyperoperation sequence:

A(m,n) = \begin{cases} n+1 & m = 0 \\ 2[m](n+3) - 3 & m > 0 \end{cases}

or, written in Knuth's up-arrow notation (extended to integer indices ≥ −2):

A(m,n) = \begin{cases} n+1 & m = 0 \\ 2\uparrow^{m-2}(n+3) - 3 & m > 0 \end{cases}

or, equivalently, in terms of Buck's function F:

A(m,n) = \begin{cases} n+1 & m = 0 \\ F(m, n+3) - 3 & m > 0 \end{cases}
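For small arguments these identities can be checked mechanically. The Python sketch below assumes the stated extension of the up-arrow index, reading index −1 as addition and index 0 as multiplication; the function names are illustrative only.

def uparrow(k: int, a: int, b: int) -> int:
    # Knuth up-arrow a ↑^k b, with index -1 read as addition
    # and index 0 read as multiplication.
    if k == -1:
        return a + b
    if k == 0:
        return a * b
    if b == 0:
        return 1
    return uparrow(k - 1, a, uparrow(k, a, b - 1))

def ackermann(m: int, n: int) -> int:   # as in the sketch above
    if m == 0:
        return n + 1
    if n == 0:
        return ackermann(m - 1, 1)
    return ackermann(m - 1, ackermann(m, n - 1))

# Check A(m, n) = 2 ↑^{m-2} (n+3) - 3 for m > 0 on a small range.
for m in range(1, 4):
    for n in range(4):
        assert ackermann(m, n) == uparrow(m - 2, 2, n + 3) - 3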

Definition: as iterated 1-ary function

Define f^n as the n-th iterate of f:

\begin{aligned}
f^{0}(x) &= x \\
f^{n+1}(x) &= f(f^{n}(x))
\end{aligned}

Iteration is the process of composing a function with itself a certain number of times. Function composition is an associative operation, so f(f^n(x)) = f^n(f(x)).

Conceiving the Ackermann function as a sequence of unary functions, one can set A_m(n) = A(m, n).

The function then becomes a sequence A_0, A_1, A_2, ... of unary[n 2] functions, defined from iteration:

\begin{aligned}
A_{0}(n) &= n + 1 \\
A_{m+1}(n) &= A_{m}^{n+1}(1)
\end{aligned}
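Transcribed into Python, this definition by iteration reads as follows; each A_m is built as a closure that iterates its predecessor (a sketch with illustrative names).

def iterate(f, k, x):
    # Return f applied k times to x, i.e. f^k(x).
    for _ in range(k):
        x = f(x)
    return x

def A_unary(m):
    # Return the unary function A_m.
    if m == 0:
        return lambda n: n + 1                                # A_0(n)     = n + 1
    return lambda n: iterate(A_unary(m - 1), n + 1, 1)        # A_{m+1}(n) = A_m^{n+1}(1)

print(A_unary(2)(3))   # 9, since A(2, n) = 2n + 3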

Computation

The recursive definition of the Ackermann function can naturally be transposed to a term rewriting system (TRS).

TRS, based on 2-ary function

The definition of the 2-ary Ackermann function leads to the obvious reduction rules

\begin{array}{lrcl}
\text{(r1)} & A(0,n) & \rightarrow & S(n) \\
\text{(r2)} & A(S(m),0) & \rightarrow & A(m,S(0)) \\
\text{(r3)} & A(S(m),S(n)) & \rightarrow & A(m,A(S(m),n))
\end{array}

Example

Compute A(1,2) →* 4.

The reduction sequence is [n 3]

Leftmost-outermost (one-step) strategy:

\begin{aligned}
& \underline{A(S(0),S(S(0)))} \\
\rightarrow_{r3}\ & \underline{A(0,A(S(0),S(0)))} \\
\rightarrow_{r1}\ & S(\underline{A(S(0),S(0))}) \\
\rightarrow_{r3}\ & S(\underline{A(0,A(S(0),0))}) \\
\rightarrow_{r1}\ & S(S(\underline{A(S(0),0)})) \\
\rightarrow_{r2}\ & S(S(\underline{A(0,S(0))})) \\
\rightarrow_{r1}\ & S(S(S(S(0))))
\end{aligned}

Leftmost-innermost (one-step) strategy:

\begin{aligned}
& \underline{A(S(0),S(S(0)))} \\
\rightarrow_{r3}\ & A(0,\underline{A(S(0),S(0))}) \\
\rightarrow_{r3}\ & A(0,A(0,\underline{A(S(0),0)})) \\
\rightarrow_{r2}\ & A(0,A(0,\underline{A(0,S(0))})) \\
\rightarrow_{r1}\ & A(0,\underline{A(0,S(S(0)))}) \\
\rightarrow_{r1}\ & \underline{A(0,S(S(S(0))))} \\
\rightarrow_{r1}\ & S(S(S(S(0))))
\end{aligned}
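The leftmost-innermost column can be reproduced mechanically by a small interpreter that represents numerals as nested successor terms. The Python sketch below is only an illustration of rules (r1)–(r3), not an efficient evaluator, and its names are ad hoc.

ZERO = ("0",)

def S(t):
    return ("S", t)

def A(t1, t2):
    return ("A", t1, t2)

def num(k):
    # Build the numeral S^k(0).
    t = ZERO
    for _ in range(k):
        t = S(t)
    return t

def step_innermost(t):
    # Perform one leftmost-innermost reduction step.
    # Returns (new_term, True) if a redex was contracted, else (t, False).
    if t[0] == "A":
        m, n = t[1], t[2]
        m2, done = step_innermost(m)        # reduce inside the left argument first
        if done:
            return ("A", m2, n), True
        n2, done = step_innermost(n)        # then inside the right argument
        if done:
            return ("A", m, n2), True
        if m[0] == "0":                     # (r1)  A(0, n)       -> S(n)
            return S(n), True
        if n[0] == "0":                     # (r2)  A(S(m), 0)    -> A(m, S(0))
            return A(m[1], S(ZERO)), True
        return A(m[1], A(m, n[1])), True    # (r3)  A(S(m), S(n)) -> A(m, A(S(m), n))
    if t[0] == "S":
        t2, done = step_innermost(t[1])
        return S(t2), done
    return t, False

def normal_form(t):
    done = True
    while done:
        t, done = step_innermost(t)
    return t

def to_int(t):
    k = 0
    while t[0] == "S":
        k, t = k + 1, t[1]
    return k

print(to_int(normal_form(A(num(1), num(2)))))   # 4, matching the example above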

To compute A(m, n) one can use a stack, which initially contains the elements ⟨m, n⟩.

Then the two top elements are repeatedly replaced according to the rules[n 4]

\begin{array}{lrcl}
\text{(r1)} & 0, n & \rightarrow & (n+1) \\
\text{(r2)} & (m+1), 0 & \rightarrow & m, 1 \\
\text{(r3)} & (m+1), (n+1) & \rightarrow & m, (m+1), n
\end{array}

Schematically, starting from ⟨m, n⟩:

WHILE stackLength <> 1 { POP 2 elements; PUSH 1 or 2 or 3 elements, applying the rules r1, r2, r3 }

The pseudocode is published in Grossman & Zeitman (1988).
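A minimal Python rendering of this loop, with the top of the stack at the right end of a list, is given below; it is a sketch of the same stack discipline, not the published pseudocode itself.

def ackermann_stack(m: int, n: int) -> int:
    stack = [m, n]
    while len(stack) > 1:
        n = stack.pop()                       # topmost element
        m = stack.pop()                       # element below it
        if m == 0:                            # (r1)  0, n      ->  n + 1
            stack.append(n + 1)
        elif n == 0:                          # (r2)  m+1, 0    ->  m, 1
            stack.extend((m - 1, 1))
        else:                                 # (r3)  m+1, n+1  ->  m, m+1, n
            stack.extend((m - 1, m, n - 1))
    return stack[0]

print(ackermann_stack(2, 1))   # 5, reproducing the run below
print(ackermann_stack(3, 3))   # 61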

For example, on input ⟨2, 1⟩,

the stack configurations (left column) reflect the reduction (right column):[n 5]

\begin{array}{ll}
\underline{2,1} & \underline{A(2,1)} \\
\rightarrow 1,\underline{2,0} & \rightarrow_{r3}\ A(1,\underline{A(2,0)}) \\
\rightarrow 1,\underline{1,1} & \rightarrow_{r2}\ A(1,\underline{A(1,1)}) \\
\rightarrow 1,0,\underline{1,0} & \rightarrow_{r3}\ A(1,A(0,\underline{A(1,0)})) \\
\rightarrow 1,0,\underline{0,1} & \rightarrow_{r2}\ A(1,A(0,\underline{A(0,1)})) \\
\rightarrow 1,\underline{0,2} & \rightarrow_{r1}\ A(1,\underline{A(0,2)}) \\
\rightarrow \underline{1,3} & \rightarrow_{r1}\ \underline{A(1,3)} \\
\rightarrow 0,\underline{1,2} & \rightarrow_{r3}\ A(0,\underline{A(1,2)}) \\
\rightarrow 0,0,\underline{1,1} & \rightarrow_{r3}\ A(0,A(0,\underline{A(1,1)})) \\
\rightarrow 0,0,0,\underline{1,0} & \rightarrow_{r3}\ A(0,A(0,A(0,\underline{A(1,0)}))) \\
\rightarrow 0,0,0,\underline{0,1} & \rightarrow_{r2}\ A(0,A(0,A(0,\underline{A(0,1)}))) \\
\rightarrow 0,0,\underline{0,2} & \rightarrow_{r1}\ A(0,A(0,\underline{A(0,2)})) \\
\rightarrow 0,\underline{0,3} & \rightarrow_{r1}\ A(0,\underline{A(0,3)}) \\
\rightarrow \underline{0,4} & \rightarrow_{r1}\ \underline{A(0,4)} \\
\rightarrow 5 & \rightarrow_{r1}\ 5
\end{array}

Remarks

  • The leftmost-innermost strategy is implemented in 225 computer languages on Rosetta Code.
  • For all m, n the computation of A(m,n) takes no more than (A(m,n) + 1)^m steps.
  • Grossman & Zeitman (1988) pointed out that in the computation of A(m,n) the maximum length of the stack is A(m,n), as long as m > 0. Their own algorithm, inherently iterative, computes A(m,n) within O(m·A(m,n)) time and within O(m) space.

TRS, based on iterated 1-ary function

The definition of the iterated 1-ary Ackermann functions leads to different reduction rules

\begin{array}{lrcl}
\text{(r4)} & A(S(0),0,n) & \rightarrow & S(n) \\
\text{(r5)} & A(S(0),S(m),n) & \rightarrow & A(S(n),m,S(0)) \\
\text{(r6)} & A(S(S(x)),m,n) & \rightarrow & A(S(0),m,A(S(x),m,n))
\end{array}

As function composition is associative, instead of rule r6 one can define

\begin{array}{lrcl}
\text{(r7)} & A(S(S(x)),m,n) & \rightarrow & A(S(x),m,A(S(0),m,n))
\end{array}
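Read as a recursion on numbers rather than on successor terms, rules (r4)–(r6) can be sketched in Python as follows, where ack_iter(i, m, n) stands for A_m^i(n) (the name is illustrative), so that ack_iter(1, m, n) equals A(m, n).

def ack_iter(i: int, m: int, n: int) -> int:
    if i == 1 and m == 0:
        return n + 1                                  # (r4)
    if i == 1:
        return ack_iter(n + 1, m - 1, 1)              # (r5)
    # (r6); rule (r7) would instead compute ack_iter(i - 1, m, ack_iter(1, m, n))
    return ack_iter(1, m, ack_iter(i - 1, m, n))

print(ack_iter(1, 2, 3))   # 9
print(ack_iter(1, 3, 3))   # 61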

As in the previous section, the computation of A_m^1(n) can be implemented with a stack.

Initially the stack contains the three elements ⟨1, m, n⟩.

Then the three top elements are repeatedly replaced according to the rules[n 4]
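One way to realize such replacement rules, read off from (r4)–(r6) in analogy with the two-element scheme above, is the following Python sketch; this is an assumed reconstruction, and the rules referenced above may differ in detail.

def ackermann_stack3(m: int, n: int) -> int:
    stack = [1, m, n]                           # top of the stack at the right
    while len(stack) > 1:
        n = stack.pop()
        m = stack.pop()
        i = stack.pop()
        if i == 1 and m == 0:
            stack.append(n + 1)                 # from (r4)
        elif i == 1:
            stack.extend((n + 1, m - 1, 1))     # from (r5)
        else:
            stack.extend((1, m, i - 1, m, n))   # from (r6)
    return stack[0]

print(ackermann_stack3(2, 1))   # 5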

Source: https://en.wikipedia.org/wiki/Ackermann_function