Since the publication of our article, our predictions have been matched and more. All of these calculations were performed using methods described in the paper. We detail the most interesting below:

* A related update of our 1988 Scientific American article [PostScript 112k] is also available.

**BILLIONS**

In 1989, the Chudnovsky brothers computed **1,011,196,961** digits of π using their Ramanujan-type series (derived from the class number one identity for h(-163)). This used an IBM 3090/VF for 120 hours and a CRAY-2 for 28 hours. In short order Kanada then computed
**1,073,741,799** digits of π, with a verification run, in under 161 hours. He used the
[quartic algorithm] and the related Gauss-Salamin-Brent algorithm
[11]. In 1991 the
Chudnovskys then computed in excess of 2.16 billion digits. (Many details of their largely home-built
computer are given in "The Mountains of Pi", ref. New Yorker.) Early in 1995, Kanada computed in
excess of 3 billion digits, after which the Chudnovskys reported (personal communication) that in
1994 they had passed the 4 billion mark:

Thank you very much for your message.

If it really matters, the "latest" round of calculations of π was conducted last year as part of testing an upgrade to our machine. The actual end computation date was May 18, 1994 (as tapes and file dates indicate). The method used was still our favorite identity (from h(-163)=1). The core of the bignum codes was a combination of several "fast" convolution algorithms, applied depending on the length. The number of decimal digits of π computed in that calculation was over 4,044,000,000.

We will send you a reprint from the Grosswald volume that touches on some of the techniques in our approach. Among the amusing issues in these computations was breaking the 32-bit addressing limit in file/array sizes. All computations were instead run in a 64-bit (virtual) address space (and that required some tinkering with storage and tape devices).

We would be happy to get preprints of your projects.

With best wishes,

David and Gregory
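The "favorite identity (from h(-163)=1)" the Chudnovskys mention is their Ramanujan-type series, 1/π = 12 Σ (-1)^k (6k)!(13591409 + 545140134k) / ((3k)!(k!)³ 640320^(3k+3/2)), each term of which contributes roughly 14 decimal digits. A minimal illustration in Python's `decimal` module (the function name is ours; this is a naive sketch of the series, not the Chudnovskys' optimized convolution-based implementation):

```python
from decimal import Decimal, getcontext
from math import factorial

def chudnovsky_pi(digits):
    """Sum the Chudnovsky series naively; each term adds ~14 digits."""
    getcontext().prec = digits + 10  # guard digits for intermediate rounding
    s = Decimal(0)
    for k in range(digits // 14 + 2):
        sign = -1 if k % 2 else 1
        num = Decimal(factorial(6 * k)) * (13591409 + 545140134 * k)
        den = (Decimal(factorial(3 * k)) * Decimal(factorial(k)) ** 3
               * Decimal(640320) ** (3 * k))
        s += sign * num / den
    # pi = 640320^(3/2) / (12 * S), pulling the constant factor out of the sum
    pi = (Decimal(640320) ** 3).sqrt() / (12 * s)
    getcontext().prec = digits
    return +pi  # unary plus rounds to the final precision
```

A production computation replaces the factorials with binary splitting and FFT-based multiplication, but the series itself is exactly this.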

The two most recent record computations were performed by Yasumasa Kanada and his colleagues
during 1995. It is worth emphasizing that each of his computations effectively performs the
equivalent of a few hundred full-precision multiplications. A single such multiplication, *performed on the same machine but without using FFT ideas*, would take several years.
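The FFT idea referred to above is that multiplying two N-digit numbers is a convolution of their digit sequences, which the FFT reduces from O(N²) digit operations to O(N log N). A toy sketch of the principle (a pure-Python recursive FFT over decimal digits; real record computations use far more refined convolution codes, as the Chudnovskys note above):

```python
import cmath

def fft(a, invert=False):
    """Recursive radix-2 Cooley-Tukey transform; len(a) must be a power of two."""
    n = len(a)
    if n == 1:
        return a[:]
    even = fft(a[0::2], invert)
    odd = fft(a[1::2], invert)
    sign = -1 if invert else 1
    out = [0j] * n
    for k in range(n // 2):
        w = cmath.exp(sign * 2j * cmath.pi * k / n)
        out[k] = even[k] + w * odd[k]
        out[k + n // 2] = even[k] - w * odd[k]
    return out

def bigmul(x, y):
    """Multiply non-negative integers by FFT convolution of their decimal digits."""
    a = [complex(int(d)) for d in str(x)[::-1]]  # little-endian digits
    b = [complex(int(d)) for d in str(y)[::-1]]
    n = 1
    while n < len(a) + len(b):
        n *= 2
    a += [0j] * (n - len(a))
    b += [0j] * (n - len(b))
    fc = [u * v for u, v in zip(fft(a), fft(b))]          # pointwise product
    conv = [round(v.real / n) for v in fft(fc, invert=True)]  # digit convolution
    # positional weighting absorbs the carries
    return sum(d * 10 ** i for i, d in enumerate(conv))
```

At billions of digits, floating-point rounding forces serious implementations onto number-theoretic transforms or carefully split bases, but the N log N convolution structure is the same.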

The details follow. More is available at
the site,
including further analysis of the digits.
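The quartic algorithm and the Gauss-Salamin-Brent algorithm that Kanada pairs for his verification runs are both AGM-type iterations that converge to π with quadratic (or quartic) speed, doubling (or quadrupling) the number of correct digits at every step. A minimal sketch of the quadratically convergent Gauss-Legendre (Salamin-Brent) iteration, again in Python's `decimal` module (the function name is ours, and this ignores all the storage and FFT issues a real record run faces):

```python
from decimal import Decimal, getcontext

def gauss_legendre_pi(digits):
    """Salamin-Brent AGM iteration: correct digits roughly double per step."""
    getcontext().prec = digits + 10  # guard digits
    a, b = Decimal(1), 1 / Decimal(2).sqrt()
    t, p = Decimal("0.25"), Decimal(1)
    # about log2(digits) iterations suffice for quadratic convergence
    for _ in range(digits.bit_length() + 1):
        an = (a + b) / 2        # arithmetic mean
        b = (a * b).sqrt()      # geometric mean
        t -= p * (a - an) ** 2
        a = an
        p *= 2
    pi = (a + b) ** 2 / (4 * t)
    getcontext().prec = digits
    return +pi
```

Each iteration costs a handful of full-precision multiplications and square roots, which is why the whole computation amounts to only "a few hundred full-precision multiplications" as noted above.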

Digits: Two independent calculations based on two different algorithms generated 4,294,967,296 (= 2^32) decimal digits of π, and the two generated sequences matched to 4,294,967,286 decimal digits, i.e., a 10 decimal digit difference. We therefore declare 4,294,960,000 decimal digits the new world record.

October 12, 1995.

Our latest record was established as follows:

**Declared record:**

**6,442,450,000** decimal digits.
Two independent calculations based on two different
algorithms generated **6,442,450,944** (= 3 × 2^31) decimal digits of π,
and the two generated sequences matched to 6,442,450,938 decimal
digits, i.e., a 6 decimal digit difference. We therefore declare
6,442,450,000 decimal digits the new world record.

**
4,000,000,000-th digits of π and 1/π:
Note: Not updated because of a disk storage problem.
**

Note: The first digit, '3' for π or '0' for 1/π, is not included in the above count.

**Frequency distribution for π up to 6,000,000,000 decimal places:**

**Frequency distribution for 1/π up to 6,000,000,000 decimal places:**

4,294,960,000-th digits of π and 1/π:

Note: Not updated because of a disk storage problem. Note: The first digit, '3' for π or '0' for 1/π, is not included in the above count.

Programs were written by Mr. Daisuke TAKAHASHI, a member of Kanada Lab. The CPU used was a
HITAC S-3800/480 at the Computer Centre, University of Tokyo. Two CPUs were definitely used,
through single-job parallel processing, for a total of four program runs.

Yasumasa KANADA

Computer Centre, University of Tokyo

Bunkyo-ku Yayoi 2-11-16

Tokyo 113 Japan

Fax : +81-3-3814-7231 (office)

E-mail: kanada@pi.cc.u-tokyo.ac.jp