I was intrigued by the news report that researchers had calculated the value of pi to 62.8 *trillion* figures. Pi (defined as the number obtained when the circumference of a circle is divided by its diameter) is a massively important quantity that occurs all over the place, but it is hard to imagine what value is gained by achieving such a high level of precision. So my first reaction was, of course, “Why bother?”

This article looks at how much precision is necessary for any practical application.

Mathematicians have estimated that an approximation of pi to 39 digits is sufficient for most cosmological calculations – accurate enough to calculate the circumference of the observable universe to within the diameter of a single hydrogen atom.
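As a rough sanity check of that claim, here is a short sketch using my own ballpark figures (observable-universe diameter of about 8.8e26 metres and a hydrogen-atom diameter of about 1e-10 metres, neither taken from the article):

```python
from decimal import Decimal, getcontext

getcontext().prec = 60

# pi to 50 decimal places (well-known value), and truncated to 39 significant digits
PI    = Decimal("3.14159265358979323846264338327950288419716939937510")
PI_39 = Decimal("3.14159265358979323846264338327950288419")

d = Decimal("8.8e26")                 # observable-universe diameter in metres (rough)
error = abs(PI - PI_39) * d           # circumference error caused by truncating pi
print(error < Decimal("1e-10"))      # smaller than a hydrogen atom (~1e-10 m)? True
```

The error works out to a few picometres, comfortably below the size of a hydrogen atom.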

…Given that even calculating pi to 1,000 digits is practical overkill, why bother going to 62.8tn decimal places?

De Gier compares the feat to the athletes at the Olympic Games. “World records: they’re not useful by themselves, but they set a benchmark and they teach us about what we can achieve and they motivate others.

“This is a benchmarking exercise for computational hardware and software,” he says.

Harvey agrees: “It’s a computational challenge – it is a really seriously difficult thing to do and it involves lots of mathematics and these days computer science.

“There’s plenty of other interesting constants in mathematics: if you’re into chaos theory there’s Feigenbaum constants, if you’re into analytic number theory there’s Euler’s gamma constant.

“There’s lots of other numbers you could try to calculate: e, the natural logarithm base, you could calculate the square root of 2. Why do you do pi? You do pi because everyone else has been doing pi,” he says. “That’s the particular mountain everyone’s decided to climb.”

I became even more intrigued when I read that it had taken 108 days and nine hours on a supercomputer. Getting time on a supercomputer is not easy and you have to make a strong case to the overseeing committee for why you need it. I doubt that “because we can” or getting into the Guinness Book of Records would cut it. It surely has to be that you are using the pi calculation to test out some new algorithm that has more practical applications. The report says that their algorithm was “almost twice as fast as the record Google set using its cloud in 2019, and 3.5 times as fast as the previous world record in 2020” and the technique could be helpful in “RNA analysis, simulations of fluid dynamics and textual analysis”.

The other interesting question is how anyone would be able to check that they got it right.

billseymour says

I guess when they get to 62.9 trillion, they can verify the first 62.8. 😎

consciousness razor says

Auto-translated text from the researchers’ website:

So, that’s not about coming up with an algorithm that’s both very carefully tailored to calculate the digits of pi as efficiently as possible and general-purpose, with some other range of practical applications. (I only really know enough about programming to be dangerous, but those tend to be conflicting goals.)

Note that what they talked about in your final quote was their “experience,” or you could call it “expertise.” That’s not an algorithm.

As they say, calculating it is useful as a way to measure performance: an “exercise” that can put systems of hardware and software and the human teams who run them to the test. So, not useless. It’s just not likely to give us any mathematical insight, and no scientific discoveries or new inventions will ever depend on knowing those digits of pi.

I doubt any of it involves anything very concrete or specific, which can be carried over more or less directly into “RNA analysis, simulations of fluid dynamics and textual analysis.” In all sorts of important senses, those are not at all like calculating pi. Those really just sound like hot topics that might grab some people’s attention in a press release or news article. This probably tells you more about what comes to mind for this particular person, when they have to think of “practical/useful” computations that do get lots of grant money, which people could be doing instead of calculating digits of pi.

But more relevantly, those things do also require heavy-duty computers, which can have their performance measured in various ways through “test” calculations like this. So, it serves a purpose, but as he said, the same could just as well be done with way-more-digits-than-you-ever-need from any other irrational number, like the square root of seven or what have you.

Matt G says

Where did they find a circle big enough, and a tape measure long enough?

Holms says

Bleh. The same machine working on say protein folding would have had much more practical benefit.

Pierce R. Butler says

Finding an irrational number clearly requires an irrational process of solution.

abbeycadabra says

“The 18,446,744,073,709,551,615th digit of pi is 4. Change my mind.”

Allison says

The calculation of pi to all these digits requires development of new techniques for doing enormous numbers of calculations in a reasonable length of time (i.e., like, less than a year). Protein folding also requires enormous numbers of calculations, so this is a first step towards producing more or less usable programs for calculating protein folding.

In the old days, we were developing ever faster supercomputers to further speed things up, but once the US government stopped funding high-performance computing, that development kind of stalled.

John Morales says

Regarding how to check, the traditional way is to do the computation to that many digits again, but using a different method.
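The cross-check described here can be illustrated in a few lines of Python: compute pi with two unrelated Machin-type arctangent formulas using integer arithmetic with guard digits, and compare the results digit for digit. (This is my own toy sketch, not the method the record-holders used.)

```python
# Two unrelated Machin-type formulas for pi; if their outputs agree to N
# digits, that is strong evidence both are correct to N digits.

def arctan_inv(x, digits):
    """arctan(1/x) scaled by 10**(digits + 10), via the Taylor series."""
    scale = 10 ** (digits + 10)        # 10 guard digits
    term = scale // x
    total, n, sign = term, 1, 1
    while term:
        term //= x * x
        n += 2
        sign = -sign
        total += sign * term // n
    return total

def pi_machin(digits):
    # Machin (1706): pi/4 = 4*arctan(1/5) - arctan(1/239)
    pi4 = 4 * arctan_inv(5, digits) - arctan_inv(239, digits)
    return (4 * pi4) // 10 ** 10       # drop the guard digits

def pi_euler(digits):
    # Euler: pi/4 = arctan(1/2) + arctan(1/3)
    pi4 = arctan_inv(2, digits) + arctan_inv(3, digits)
    return (4 * pi4) // 10 ** 10

print(pi_machin(50) == pi_euler(50))   # two independent methods agree: True
```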

WMDKitty -- Survivor says

Why?

Because they can.

dangerousbeans says

it seems that the important thing is that they calculated it in Y time, not that it was X digits. we should change the rules and make it like the hour cycling record: limited number of cores and 24 hours, do as many digits as you can.

KG says

AFAIK, it’s still unproven that the decimal (or binary, or any other base) expansion of pi is “normal”, meaning that every subsequence of digits of the same length appears with asymptotically equal frequency, although (also AFAIK) all relevant experts believe that it is. Any measurable departure from normality in those 62.8 trillion digits would be mathematically very interesting.

John Morales says

KG, remember the conceit in Sagan’s *Contact*? (Wikipedia, my notation change)

“When Ellie looks at what the computer has found, she sees a circle rasterized from 0s and 1s that appear after [10E20] places in the base 11 representation of π.”

grahamjones says

The researchers’ web site has an English version. From

https://www.fhgr.ch/en/specialist-areas/applied-future-technologies/davis-centre/pi-challenge/#c15666

They’re only using 64 cores, roughly like ten desktops, with 1 TB of RAM and 512 TB of hard disc, roughly like a hundred desktops. Their supercomputer uses less than 2 kW of power. I think they’re basically testing (and showing off) their file system and virtual memory implementation.

@abbeycadabra, see the Verification section, or look up the Bailey-Borwein-Plouffe formula on Wikipedia.
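For the curious, the Bailey–Borwein–Plouffe idea mentioned above can be sketched in a few lines: the formula yields the n-th hexadecimal digit of pi without computing any of the digits before it. (My own illustration using plain floats, which is only reliable for modest n; serious implementations are more careful.)

```python
def pi_hex_digit(n):
    """Return the n-th hex digit of pi after the point (n >= 1), via BBP."""
    def series(j):
        # fractional part of sum over k of 16^(n-1-k) / (8k + j)
        s = 0.0
        for k in range(n):
            s = (s + pow(16, n - 1 - k, 8 * k + j) / (8 * k + j)) % 1.0
        t, k = 0.0, n
        while True:                      # tail terms, now smaller than 1
            term = 16.0 ** (n - 1 - k) / (8 * k + j)
            if term < 1e-17:
                break
            t += term
            k += 1
        return s + t
    x = (4 * series(1) - 2 * series(4) - series(5) - series(6)) % 1.0
    return int(x * 16)

# pi in hexadecimal begins 3.243F6A88...
print(''.join(format(pi_hex_digit(i), 'X') for i in range(1, 9)))  # 243F6A88
```

The trick is the three-argument `pow`, which keeps the big powers of 16 reduced modulo `8k + j` so only the fractional part of the sum is ever carried around.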

friedfish2718 says

Hmmm. No one on this forum seems to be aware that the N-th binary digit (0 or 1) of PI can be calculated in polynomial time.

.

The N-th decimal digit (0 to 9) of PI CANNOT be calculated in polynomial time.

.

Binary is most fundamental to Math and Physics.

KG says

friedfish2718@14,

Utter drivel from you, as ever. The Chudnovsky algorithm, which was used in the computation of 62.8 trillion decimal digits that is the subject of this thread, has a time complexity of O(n log(n)^3), where n is the number of digits to be calculated. This is, clearly, a lesser time complexity than O(n^4), which is polynomial. In fact, it’s a lesser time complexity than O(n^a) for any a greater than 1. Go to the bottom of the class, and put on the dunce’s cap.
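To make the Chudnovsky series concrete, here is a bare-bones sketch using Python’s decimal module. Each term of the series contributes roughly 14.18 new digits; the record computations use binary splitting and heavily parallel arbitrary-precision arithmetic rather than this naive loop, so treat it as an illustration only.

```python
from decimal import Decimal, getcontext

def pi_chudnovsky(digits):
    getcontext().prec = digits + 10               # working precision plus guard digits
    C = 426880 * Decimal(10005).sqrt()
    M, L, X, K = 1, 13591409, 1, 6
    S = Decimal(L)
    for i in range(1, digits // 14 + 2):          # each term adds ~14 digits
        M = M * (K**3 - 16 * K) // i**3           # ratio of the factorial terms (exact)
        L += 545140134
        X *= -262537412640768000
        S += Decimal(M * L) / X
        K += 12
    return C / S

print(str(pi_chudnovsky(50))[:40])  # 3.14159265358979323846264338327950288419
```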