High Precision Numerics
Why do we need more than double precision? Here are five problems that may require more than double precision arithmetic.
1. recognizing algebraic numbers
Given a sufficiently accurate floating point approximation to an algebraic number, we can compute its minimal polynomial by using a lattice basis reduction algorithm. This usually requires about as many digits as there are in the largest coefficient (in absolute value) of the minimal polynomial, multiplied by the number of terms in the minimal polynomial (the degree + 1). A simple example is to recover the minimal polynomial of an algebraic number of degree 12 starting from a 60 digit approximation, given the degree.
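In practice the recovery step uses an integer-relation or lattice algorithm such as LLL or PSLQ. As a self-contained illustration (in Python rather than Maple), the sketch below recovers the minimal polynomial of sqrt(2) + sqrt(3), a degree-4 example, by a brute-force search over small coefficients; the search stands in for the lattice step, which scales to cases like the degree-12 example above.

```python
import itertools
import math

def find_monic_quartic(x, bound=12):
    """Search for integers a3, a2, a1, a0 minimizing
    |x^4 + a3*x^3 + a2*x^2 + a1*x + a0|.  A lattice method (LLL/PSLQ)
    finds the same relation far more efficiently from the approximation."""
    x2, x3, x4 = x * x, x**3, x**4
    best, best_val = None, float("inf")
    for a3, a2, a1, a0 in itertools.product(range(-bound, bound + 1), repeat=4):
        val = abs(x4 + a3 * x3 + a2 * x2 + a1 * x + a0)
        if val < best_val:
            best, best_val = (a3, a2, a1, a0), val
    return best

# A floating point approximation to sqrt(2) + sqrt(3), whose minimal
# polynomial is x^4 - 10*x^2 + 1.
alpha = math.sqrt(2) + math.sqrt(3)
print(find_monic_quartic(alpha))   # -> (0, -10, 0, 1), i.e. x^4 - 10x^2 + 1
```

The approximation must be accurate enough that the true relation's residual is far smaller than that of any wrong candidate; here double precision suffices because the conjugates of sqrt(2) + sqrt(3) bound every wrong candidate's residual well away from zero.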
2. discovering identities
For example, we may want to find integers satisfying a conjectured identity. They too can be found by using a lattice basis reduction algorithm if we have sufficiently accurate approximations (at least 79 digits in this example) to the constants involved.
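The original identity is not shown above, so as a stand-in illustration the sketch below discovers a classical Machin-like identity, arctan(1/2) + arctan(1/3) = Pi/4, by searching for a small integer relation among the three constants; again, a lattice method would do this from high-precision approximations without exhaustive search:

```python
import itertools
import math

def find_relation(xs, bound=5, tol=1e-12):
    """Find the smallest integer vector (c1, ..., cn), not all zero, with
    |c1*xs[0] + ... + cn*xs[n-1]| < tol, sign-normalized so the first
    nonzero coefficient is positive."""
    hits = []
    for coeffs in itertools.product(range(-bound, bound + 1), repeat=len(xs)):
        if any(coeffs) and abs(sum(c * x for c, x in zip(coeffs, xs))) < tol:
            hits.append(coeffs)
    best = min(hits, key=lambda c: sum(v * v for v in c))
    if next(v for v in best if v) < 0:
        best = tuple(-v for v in best)
    return best

consts = [math.atan(1 / 2), math.atan(1 / 3), math.pi / 4]
print(find_relation(consts))   # -> (1, 1, -1): atan(1/2) + atan(1/3) = Pi/4
```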
3. symbolic computation
Many algorithms which compute with polynomials inevitably generate large coefficients. An example is finding the square root of a polynomial by using a single point evaluation algorithm.
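A minimal sketch of the single point evaluation idea, in Python, under the simplifying assumption that all coefficients are nonnegative (a full implementation, like Maple's, would use balanced digits to handle negative coefficients): evaluate the polynomial at one integer point B so large that coefficients cannot overlap, take an exact integer square root, and read the answer's coefficients back off as base-B digits. This is where large integers, and hence fast high-precision arithmetic, enter.

```python
import math

def poly_eval(coeffs, x):
    """Evaluate a polynomial (coefficients low order to high) at x (Horner)."""
    result = 0
    for c in reversed(coeffs):
        result = result * x + c
    return result

def poly_sqrt(coeffs, base):
    """Square root of a polynomial with nonnegative integer coefficients via
    a single evaluation point.  `base` must exceed every coefficient of both
    the input and its square root so that base-`base` digits do not carry."""
    value = math.isqrt(poly_eval(coeffs, base))
    digits = []
    while value:
        digits.append(value % base)
        value //= base
    return digits

# (x^2 + 3x + 2)^2 = x^4 + 6x^3 + 13x^2 + 12x + 4
square = [4, 12, 13, 6, 1]
print(poly_sqrt(square, 10**6))   # -> [2, 3, 1], i.e. x^2 + 3x + 2
```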
4. moment problem
The moment problem is the problem of reconstructing a function given its first few moments. For a given continuous function f defined on an interval (say [0,1]), its first few moments are the integrals of f(x) x^i over the interval, for i an integer from 0 to n. To solve the moment problem we assume the solution is a linear combination of some basis functions, usually the monomials x^i for i from 0 to n. We compute the moments of our solution function and equate them to the given moments. This leads to the problem of inverting a Hilbert matrix, which is a notoriously ill-conditioned problem. One way to combat the ill-conditioning is to increase the precision.
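The ill-conditioning is easy to exhibit. The sketch below (Python, using exact rationals as a stand-in for high-precision arithmetic) solves the moment problem for f(x) = x on [0,1]: the exact solution of the Hilbert system is simply the coefficient vector (0, 1, 0, ..., 0), but solving the same 12 x 12 system in double precision gives large errors.

```python
from fractions import Fraction

def hilbert(n, num=Fraction):
    """n x n Hilbert matrix H[i][j] = 1/(i+j+1), i.e. the integrals of
    x^i * x^j over [0,1]."""
    return [[num(1) / num(i + j + 1) for j in range(n)] for i in range(n)]

def solve(A, b):
    """Gaussian elimination without pivoting (safe here: Hilbert matrices
    are positive definite).  Works for exact Fractions and for floats."""
    n = len(A)
    A = [row[:] + [bi] for row, bi in zip(A, b)]
    for k in range(n):
        for i in range(k + 1, n):
            m = A[i][k] / A[k][k]
            for j in range(k, n + 1):
                A[i][j] -= m * A[k][j]
    x = [0] * n
    for i in reversed(range(n)):
        s = A[i][n] - sum(A[i][j] * x[j] for j in range(i + 1, n))
        x[i] = s / A[i][i]
    return x

# Moments of f(x) = x on [0,1]: m_i = integral of x * x^i = 1/(i+2).
n = 12
exact = solve(hilbert(n), [Fraction(1, i + 2) for i in range(n)])
fl = solve(hilbert(n, float), [1.0 / (i + 2) for i in range(n)])
print(exact[:3])   # [0, 1, 0, ...]: exact arithmetic recovers f(x) = x
print(max(abs(float(e) - f) for e, f in zip(exact, fl)))   # large float error
```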
5. root of a polynomial
Suppose we expand a polynomial whose roots we know exactly and round its coefficients to 16 digits. Are the roots of the rounded coefficient polynomial close to the exact roots of the original polynomial? Will the first 16 digits of the roots of the rounded coefficient polynomial agree with the first 16 digits of the roots of the exact coefficient polynomial?
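The polynomial in question is not shown above; the classic example of this sensitivity is Wilkinson's polynomial, the product of (x - i) for i = 1 to 20. The Python sketch below uses Wilkinson's related experiment: a relative perturbation of one coefficient far smaller than 16 digits' worth (subtracting 2^-23 from the x^19 coefficient) moves the polynomial's value at x = 20 from exactly 0 to -6.25 * 10^17, so the roots themselves must move substantially.

```python
from fractions import Fraction

def expand_roots(roots):
    """Coefficients, low order to high, of the product of (x - r)."""
    coeffs = [1]
    for r in roots:
        # Multiply the current polynomial by (x - r).
        coeffs = [a - r * b for a, b in zip([0] + coeffs, coeffs + [0])]
    return coeffs

def horner(coeffs, x):
    result = 0
    for c in reversed(coeffs):
        result = result * x + c
    return result

p = expand_roots(range(1, 21))                 # Wilkinson's polynomial, exact
q = p[:]
q[19] = Fraction(q[19]) - Fraction(1, 2**23)   # tiny nudge to one coefficient

print(horner(p, 20))   # -> 0: x = 20 is an exact root
print(horner(q, 20))   # -> -625000000000000000
```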
Fast integer arithmetic
The classical algorithm for multiplying two integers with d digits each costs on the order of d^2 hardware operations. This means that if we double the precision of the input numbers, then a multiplication will take four times longer to compute. New in Maple V Release 5 is a faster integer multiplication algorithm: Maple now uses a divide and conquer method (Karatsuba's algorithm), for which doubling the precision of the input numbers makes a multiplication take only three times longer. We intend to use this algorithm to speed up the other basic operations, such as division, square root, factorial, etc. For really large precision, an even faster method can be based on the fast Fourier transform. We intend to try this method to see if its break-even point lies within the range of numbers that Maple can represent.
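Karatsuba's scheme can be sketched in a few lines of Python (purely illustrative; Maple's implementation works on its own internal digit representation, and Python's built-in `*` already uses Karatsuba): split each d-digit number in half, and replace the four half-size multiplications of the classical method with three, giving cost on the order of d^1.585 and the factor-of-three growth per doubling mentioned above.

```python
def karatsuba(x, y):
    """Multiply nonnegative integers by divide and conquer: three recursive
    half-size multiplications instead of four, so doubling the input size
    triples the work (O(d^1.585) instead of O(d^2))."""
    if x < 2**64 or y < 2**64:           # small enough: one hardware multiply
        return x * y
    m = max(x.bit_length(), y.bit_length()) // 2
    xh, xl = x >> m, x & ((1 << m) - 1)   # split x = xh*2^m + xl
    yh, yl = y >> m, y & ((1 << m) - 1)
    hh = karatsuba(xh, yh)
    ll = karatsuba(xl, yl)
    mid = karatsuba(xh + xl, yh + yl) - hh - ll   # both cross terms, one multiply
    return (hh << 2 * m) + (mid << m) + ll

a, b = 3**400, 7**300
print(karatsuba(a, b) == a * b)   # -> True
```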
Fast symbolic computation
Maple already uses the divide and conquer algorithm to multiply two dense univariate polynomials. We intend to try other methods such as
(a) modular methods with Chinese remaindering to reconstruct the answer.
(b) single point evaluation, with genpoly used to reconstruct the answer.
(c) the fast Fourier transform method.
Once multiplication has been sped up, we can go on to speed up other operations such as division, gcd and factorization.
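Method (b) above can be illustrated with Kronecker substitution (a Python sketch, assuming nonnegative coefficients; genpoly-style balanced digits would lift that restriction): pack each polynomial into one large integer by evaluating at a point larger than any coefficient of the product, do a single big-integer multiplication, and unpack the digits. This is exactly why a fast integer multiply speeds up polynomial multiplication directly.

```python
def poly_mul_kronecker(f, g):
    """Multiply polynomials with nonnegative integer coefficients (listed
    low order to high) via a single big-integer multiplication.  The base
    2^64 used here must exceed every coefficient of the product; a real
    implementation would compute the required base from coefficient bounds."""
    B = 64
    fv = sum(c << (B * i) for i, c in enumerate(f))
    gv = sum(c << (B * i) for i, c in enumerate(g))
    prod = fv * gv                        # one large integer multiply
    out = []
    for _ in range(len(f) + len(g) - 1):  # unpack base-2^64 digits
        out.append(prod & ((1 << B) - 1))
        prod >>= B
    return out

# (1 + 2x)(3 + 4x + 5x^2) = 3 + 10x + 13x^2 + 10x^3
print(poly_mul_kronecker([1, 2], [3, 4, 5]))   # -> [3, 10, 13, 10]
```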
GCD's with algebraic number coefficients
We are also trying to improve the speed of computing the greatest common divisor of two polynomials in the case where the coefficients contain algebraic numbers. We then want to extend our methods to the multivariate case.
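For context, a minimal Python sketch of the classical Euclidean algorithm for polynomial gcd over the rationals (the baseline being improved upon; it is not the algebraic-number-coefficient method itself). On larger inputs the intermediate fractions grow rapidly, which is precisely what modular and other fast methods are designed to avoid.

```python
from fractions import Fraction

def poly_divmod(a, b):
    """Quotient and remainder of polynomials with Fraction coefficients,
    listed low order to high."""
    a = a[:]
    q = [Fraction(0)] * max(len(a) - len(b) + 1, 1)
    while len(a) >= len(b) and any(a):
        d = len(a) - len(b)
        c = a[-1] / b[-1]
        q[d] = c
        for i, bc in enumerate(b):       # subtract c * x^d * b from a
            a[i + d] -= c * bc
        while a and a[-1] == 0:          # drop the (now zero) leading term
            a.pop()
    return q, a

def poly_gcd(a, b):
    """Euclidean algorithm; returns the monic gcd."""
    while any(b):
        _, r = poly_divmod(a, b)
        a, b = b, r
    return [c / a[-1] for c in a]

# gcd((x^2 - 1)(x + 2), (x^2 - 1)(x + 3)) = x^2 - 1
f = [Fraction(c) for c in [-2, -1, 2, 1]]   # x^3 + 2x^2 - x - 2
g = [Fraction(c) for c in [-3, -1, 3, 1]]   # x^3 + 3x^2 - x - 3
print(poly_gcd(f, g))   # -> [-1, 0, 1], i.e. x^2 - 1
```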
We can use high precision numerical computation to compute Bernoulli numbers exactly. For example, the one thousandth Bernoulli number is a rational number with a 1779 digit numerator and a 9 digit denominator. We can take Euler's formula for Zeta(1000), solve it for the one thousandth Bernoulli number, compute Zeta(1000) and Pi^1000 with 1800 digit precision, and recover the exact rational Bernoulli number. This is faster than using the built-in Maple command.
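A small-scale Python illustration of the same idea (double precision only reaches small cases; Maple does this with 1800 digits for the one thousandth Bernoulli number): Euler's formula Zeta(2n) = (-1)^(n+1) B_2n (2 Pi)^(2n) / (2 (2n)!) is solved for B_2n numerically, and the result is snapped to the nearby exact rational.

```python
from fractions import Fraction
from math import factorial, pi

def bernoulli_from_zeta(n2):
    """Recover the Bernoulli number B_n2 (n2 even) from Euler's formula
    Zeta(n2) = (-1)^(n2/2 + 1) * B_n2 * (2*Pi)^n2 / (2 * n2!),
    using floating point for Zeta and Pi and then snapping to a rational."""
    n = n2 // 2
    zeta = sum(1 / k**n2 for k in range(1, 1000))  # rapidly convergent sum
    b = (-1) ** (n + 1) * 2 * factorial(n2) * zeta / (2 * pi) ** n2
    return Fraction(b).limit_denominator(10**6)

print(bernoulli_from_zeta(10))   # -> 5/66
```

The snapping step works because the floating point error (around 1e-15) is far smaller than the gap to any other fraction with a comparably small denominator; at 1800 digits the same argument recovers the 1779-digit numerator exactly.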
Traditional numerical operations
These include numerical integration, numerical solution of differential equations, polynomial root finding, and numerical linear algebra. We intend to speed up the above, in the case that more than double precision is required, again by using a fast integer multiplication algorithm.