One of the interesting morals from theoretical computer science is that many familiar algorithms are far from optimal. In order to be more precise we introduce the notion of *bit complexity*: the number of single-digit operations (in some fixed base) required to complete a computation. Addition of two **n**-digit integers, for example, has bit complexity O(n), and clearly no algorithm can do better.

Multiplication is a different story. Usual multiplication of two **n**-digit integers
has bit complexity O(n²) and no better. However, it is possible to multiply two
**n**-digit integers with complexity O(n log n log log n). This result is due to
Schönhage and Strassen and dates from 1971 [29]. It provides the best
bound known for multiplication. No multiplication can have speed better than
O(n), since every digit must at least be examined. Unhappily, more exact results aren't available.

The original observation that a faster than O(n²) multiplication is possible was due to Karatsuba in 1962. Observe that

(a + b·10^n)(c + d·10^n) = ac + [(a + b)(c + d) − ac − bd]·10^n + bd·10^(2n)

and thus multiplication of two 2n-digit integers reduces to three multiplications of n-digit integers plus a few additions. Applied recursively, this yields a multiplication with bit complexity O(n^(log₂ 3)).

The trick to implementing high precision arithmetic is to get the multiplication right. Division and root extraction piggyback off multiplication using Newton's method. One may use the iteration

x_{k+1} = x_k + x_k(1 − y·x_k)

to compute 1/y and the iteration

x_{k+1} = (x_k + y/x_k)/2

to compute √y. One may also compute 1/√y from the iteration

x_{k+1} = x_k + x_k(1 − y·x_k²)/2

and so avoid divisions in the computation of √y, since √y = y·(1/√y). Not only do these iterations converge quadratically but, because Newton's method is self-correcting (a slight perturbation in x_k does not change the limit), it is possible at the kth stage to work only to precision roughly 2^k digits, so a division or root extraction costs only a constant multiple of a full-precision multiplication.
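As a small illustration of the reciprocal iteration and its quadratic convergence, here is a minimal Python sketch. The use of exact rational arithmetic via `fractions.Fraction` is an expository choice; a genuine implementation would instead double the working precision at each stage.

```python
from fractions import Fraction

def reciprocal(y, iterations):
    """Approximate 1/y by the Newton iteration x <- x + x*(1 - y*x).

    Exact rational arithmetic (Fraction) makes the quadratic convergence
    visible: the error is roughly squared at every step.  The crude
    starting guess below is adequate for y between 1 and 2.
    """
    x = Fraction(1, 2)               # starting guess for 1/y
    for _ in range(iterations):
        x = x + x * (1 - y * x)      # one Newton step; error ~ y * error^2
    return x

approx = reciprocal(Fraction(3, 2), 6)   # six steps toward 1/(3/2) = 2/3
```

After six iterations the error is smaller than 10^(−30), even though the starting guess was correct to only one digit.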
To see how the fast Fourier transform may be used to accelerate multiplication, let

x = (x_0, x_1, x_2, …, x_{N−1})

and

y = (y_0, y_1, y_2, …, y_{N−1})

be the representations of two high-precision numbers in some radix **b**. The radix
**b** is usually selected to be some power of 2 or 10 whose square is less than the
largest integer exactly representable as an ordinary floating-point number on the
computer being used. Then, except for releasing each ``carry'', the product **z** of **x** and **y** may be written as

z_k = Σ_{j=0}^{k} x_j y_{k−j},   0 ≤ k ≤ 2N − 2

Now consider **x** and **y** to be extended to length 2N by appending zeros. Then the product sequence **z** is precisely the circular convolution of **x** and **y**:

z_k = Σ_{j=0}^{2N−1} x_j y_{(k−j) mod 2N},   0 ≤ k ≤ 2N − 1
Now a well-known result of Fourier analysis may be applied. Let F(x) denote the
*discrete Fourier transform* of the sequence **x**, and let F^{−1}(x) denote the
inverse discrete Fourier transform of **x**:

F_k(x) = Σ_{j=0}^{2N−1} x_j e^{−2πijk/(2N)}

F_k^{−1}(x) = (1/(2N)) Σ_{j=0}^{2N−1} x_j e^{2πijk/(2N)}

The convolution theorem then asserts that z = F^{−1}[F(x)·F(y)], where the product of the two transforms is taken term by term.
A straightforward implementation of the above procedure would not result in any computational savings --- in fact, it would be several times more costly than the usual ``schoolperson'' scheme. The reason this scheme is used is that the discrete Fourier transform may be computed much more rapidly using some variation of the well-known ``fast Fourier transform'' (FFT) algorithm [13]. In particular, if N = 2^m, then the discrete Fourier transform may be evaluated in only O(N log N) arithmetic operations using an FFT. Direct application of the definition of the discrete Fourier transform would require O(N²) floating-point arithmetic operations, even if it is assumed that all powers of e^{2πi/(2N)} have been precalculated.
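A toy Python version of the whole pipeline (digit vectors, forward FFTs, pointwise product, inverse transform, carry release) might look as follows; the unoptimized recursive radix-2 FFT and the small radix are purely illustrative:

```python
import cmath

def fft(a, invert=False):
    """Recursive radix-2 FFT of a list of complex values; len(a) must be a
    power of two.  The inverse transform here omits the 1/n scaling."""
    n = len(a)
    if n == 1:
        return a[:]
    even = fft(a[0::2], invert)
    odd = fft(a[1::2], invert)
    sign = 1 if invert else -1
    out = [0j] * n
    for k in range(n // 2):
        w = cmath.exp(sign * 2j * cmath.pi * k / n) * odd[k]
        out[k] = even[k] + w
        out[k + n // 2] = even[k] - w
    return out

def multiply(x, y, base=10):
    """Multiply two numbers given as little-endian digit lists in `base`."""
    n = 1
    while n < 2 * max(len(x), len(y)):    # pad to a power of two >= 2N
        n *= 2
    fx = fft([complex(d) for d in x] + [0j] * (n - len(x)))
    fy = fft([complex(d) for d in y] + [0j] * (n - len(y)))
    fz = [a * b for a, b in zip(fx, fy)]          # pointwise product
    z = [v.real / n for v in fft(fz, invert=True)]  # inverse FFT, rescaled
    digits, carry = [], 0                          # release each carry
    for v in z:
        carry, d = divmod(int(round(v)) + carry, base)
        digits.append(d)
    while carry:
        carry, d = divmod(carry, base)
        digits.append(d)
    while len(digits) > 1 and digits[-1] == 0:     # strip leading zeros
        digits.pop()
    return digits
```

For example, `multiply([3, 2, 1], [7, 8])` computes 123 × 87 as the little-endian digit list of 10701.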

This is the basic scheme for high-speed multiprecision multiplication. Many details
of efficient implementations have been omitted. For example, it is possible to take
advantage of the fact that the input sequences **x** and **y** and the output sequence **z**
are all purely real numbers, and thereby sharply reduce the operation count. Also, it
is possible to dispense with complex numbers altogether in favor of performing
computations in fields of integers modulo large prime numbers. Interested readers
are referred to [2], [8], [13], and [22].

When the costs of all the constituent operations, using the best known techniques,
are totalled, both Algorithms 1 and 2 compute **n** digits of π with bit complexity
O(M(n) log n), where M(n) denotes the bit complexity of an n-digit multiplication, and use O(log n) full precision operations.

The bit complexity for Sum 1, or for π using any of the arctan expansions, is
between O(M(n) log² n) and O(n M(n)), depending on implementation. In each
case, one is required to sum O(n) terms of the appropriate series. Done naively,
one obtains the latter bound. If the calculation is carefully orchestrated so that
the terms are grouped to grow evenly in size (as rational numbers) then one can
achieve the former bound, but with no corresponding reduction in the number of
operations.
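The grouping idea can be sketched concretely. In the toy Python below (Machin's formula and exact rationals via `fractions.Fraction` are illustrative choices, not taken from the text), summing each half of the range recursively combines terms of comparable size, in the spirit of the orchestration just described:

```python
from fractions import Fraction

def arctan_inv(q, a, b):
    """Sum_{k=a}^{b-1} (-1)^k / ((2k+1) q^(2k+1)), i.e. a partial sum of the
    series for arctan(1/q), evaluated by splitting the range in half so that
    rationals of comparable size are combined at every level."""
    if b - a == 1:
        return Fraction((-1) ** a, (2 * a + 1) * q ** (2 * a + 1))
    m = (a + b) // 2
    return arctan_inv(q, a, m) + arctan_inv(q, m, b)

# Machin's formula: pi = 16 arctan(1/5) - 4 arctan(1/239)
pi_approx = 16 * arctan_inv(5, 0, 30) - 4 * arctan_inv(239, 0, 10)
```

The resulting `pi_approx` is an exact rational agreeing with π to dozens of digits; a balanced tree of additions like this is the essence of the ``binary splitting'' technique.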

The Archimedean iteration of section 2 converges like 4^(−n), so in excess of **n**
iterations are needed for **n**-digit accuracy, and the bit complexity is O(n M(n)).
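For comparison, a standard formulation of Archimedean polygon doubling (a harmonic-mean/geometric-mean pair starting from hexagons, which may differ in detail from the iteration of section 2) exhibits this error decay of roughly a factor of 4 per step:

```python
import math

def archimedes(iterations):
    """Polygon doubling: a and b are the semiperimeters of circumscribed and
    inscribed regular polygons, starting from hexagons.  Each step doubles
    the side count, and both sequences approach pi with error ~ 4^(-n)."""
    a, b = 2.0 * math.sqrt(3.0), 3.0       # circumscribed, inscribed hexagon
    for _ in range(iterations):
        a = 2.0 * a * b / (a + b)          # harmonic mean (new circumscribed)
        b = math.sqrt(a * b)               # geometric mean (new inscribed)
    return a, b

a, b = archimedes(12)                      # both within ~1e-8 of pi
```

Comparing successive inscribed errors confirms the factor-of-4 decay, so about 1.7 iterations are needed per decimal digit.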

Almost any familiar transcendental number such as **e**, γ, ζ(3), or
Catalan's constant (presuming the last three to be nonalgebraic) can be computed
with bit complexity O(M(n) log n) or O(M(n) log² n). None of these numbers
is known to be computable essentially any faster than this. In light of the
previous observation that algebraic numbers are all computable with bit complexity
O(M(n)), a proof that π cannot be computed with this speed would imply the
transcendence of π. It would, in fact, imply more, as there are transcendental
numbers which have complexity O(M(n)). An example is Σ_{k=1}^{∞} 10^(−2^k).
It is also reasonable to speculate that computing the **n**th digit of π is not
very much easier than computing all the first **n** digits. We think it very probable
that computing the **n**th digit of π cannot be done with bit complexity o(n).