I’ve been rather lackadaisical about fixing my “big number” classes, but I’ve finally gotten up off the couch, and I have new versions available.
I’ve changed the names of the bigint and bigdec classes to integer and decimal, respectively, because I thought the “big…” names smelled of Java. I also wrote a small Web page that ties the three classes together.
In a comment to a previous post, Andrew Dalke suggested some additional values I could test; and, sure enough, I found a bug (thanks). (The bug was actually in what’s now called the integer class: I hadn’t guarded against aliasing of the operands to expressions like
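To give a flavor of the kind of aliasing guard in question, here is a minimal sketch using a toy type (not the actual integer class) and a three-operand subtract as a stand-in for the real code; the hazard is that the function clears its output before reading its inputs:

    #include <cstddef>
    #include <cstdint>
    #include <vector>

    struct toy_int { std::vector<std::uint32_t> limbs; };  // little-endian magnitude

    // Three-operand subtract, out = a0 - b0 (assumes a0 >= b0 >= 0).  The body
    // clears `out` before reading the inputs, so if a caller passes the same
    // object as both the output and an input, the input would be destroyed
    // before it is read -- hence the aliasing guard.
    void sub(toy_int& out, const toy_int& a0, const toy_int& b0)
    {
        toy_int acopy, bcopy;   // guard: copy any input that aliases the output
        const toy_int& a = (&out == &a0) ? (acopy = a0) : a0;
        const toy_int& b = (&out == &b0) ? (bcopy = b0) : b0;

        out.limbs.assign(a.limbs.size(), 0);             // safe now
        std::uint64_t borrow = 0;
        for (std::size_t i = 0; i < a.limbs.size(); ++i) {
            std::uint64_t r = i < b.limbs.size() ? b.limbs[i] : 0;
            std::uint64_t d = std::uint64_t(a.limbs[i]) - r - borrow;
            out.limbs[i] = std::uint32_t(d);
            borrow = (d >> 32) & 1;                      // wrapped past zero?
        }
    }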
I think I remember someone suggesting that some users might prefer classes with “more features”. What additional features did you have in mind? Don’t suggest trig functions and the like: I’ve limited the <cmath>-like functions that take decimal and rational arguments to those that return exact values. (You might be able to talk me into square root, but that would be successive approximation using Newton’s method, which is what I know how to write. I already have a version of the rational class that has quiet NaNs and infinities and a spaceship operator, but I’m not sure I like it.)
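For a sense of what that successive approximation looks like, here is a sketch of a Newton’s-method integer square root; plain uint64_t stands in for a big-number type, and note that the result, floor(sqrt(n)), is exact only when n is a perfect square:

    #include <cstdint>

    // Successive approximation (Newton's method) for floor(sqrt(n)).
    // The iteration x' = (x + n/x) / 2, started from any guess >= sqrt(n),
    // decreases monotonically and stops at floor(sqrt(n)).
    std::uint64_t isqrt(std::uint64_t n)
    {
        if (n < 2) return n;
        std::uint64_t x = n / 2 + 1;          // always >= sqrt(n)
        for (;;) {
            std::uint64_t next = (x + n / x) / 2;
            if (next >= x) return x;          // converged
            x = next;
        }
    }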
As I’ve said before, these classes are not intended for serious numerical work; and numerics experts probably already know where to find better implementations, or could write such themselves.
Update, 2024-02-05: I woke up this morning having in my mind a way to make rational comparisons a bit quicker, so I made that change. I also noticed that I had failed to remove an
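For the curious, a sketch of one common trick for quicker rational comparisons (it may or may not be the change described above): compare signs first, and fall back to exact cross multiplication only when the signs agree.

    #include <cstdint>

    // Compare a/b < c/d, assuming (as is conventional) that the numerator
    // carries the sign and the denominators are positive.  Checking signs
    // first skips both multiplications whenever the operands lie on
    // opposite sides of zero.  __int128 is a GCC/Clang extension standing
    // in for a big-integer type, which would cross-multiply without overflow.
    bool rational_less(std::int64_t an, std::int64_t ad,
                       std::int64_t cn, std::int64_t cd)
    {
        int sa = (an > 0) - (an < 0);   // -1, 0, or +1
        int sc = (cn > 0) - (cn < 0);
        if (sa != sc) return sa < sc;   // opposite signs: no multiplies needed
        return (__int128)an * cd < (__int128)cn * ad;   // exact comparison
    }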
By the way, if anybody out there has access to a C++ implementation where FLT_RADIX != 2, I’d appreciate a test of the
Peter B says
I found a reason to have floating point numbers with two exponents: conversion of numbers with large positive or large negative powers of 10. This method would be useful for powers of 10 like 5000 or -5000. With a 96 or 128-bit mantissa, it would have excellent fidelity with a resulting 64-bit mantissa. The general method can be extended indefinitely.
If interested, I will post the method.
billseymour says
Peter B: yes, I’d be interested. If it’s too much for a comment, you can send me what you want to say in e-mail, and I could make it a “guest post” on this blog (not a precedent I want to set, but maybe worthwhile in this case).
Andrew Dalke says
This new version converts -0.9999999999999999 (nextafter(-1.0, 0)) to -4503599627370496/4503599627370497 which is -0.9999999999999998. The previous version generated the correct ratio -9007199254740992/9007199254740993.
BTW, computing -((double) 9007199254740992)/((double) 9007199254740993), or the equivalent with static_cast<double>, gives -1.0 instead of the expected -0.9999999999999999, because computing 2^53/(2^53+1) (I’m using ^ to denote exponentiation) appears to have a rounding error. Python’s bigint/bigint division is at https://github.com/python/cpython/blob/39ec7fbba84663ab760853da2ac422c2e988d189/Objects/longobject.c#L4189, which also describes the algorithm they use.
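A quick check of where that rounding actually happens: 2^53 + 1 needs 54 significant bits, so it is not representable as a double; the int-to-double conversion rounds it (ties to even) down to 2^53, and the division then computes 2^53/2^53 exactly.

    #include <cstdio>

    // Demonstrates that the rounding is in the conversion, not the division.
    int main()
    {
        long long num = 9007199254740992LL;            // 2^53
        long long den = 9007199254740993LL;            // 2^53 + 1
        std::printf("%.17g\n", (double)den);           // prints 9007199254740992
        std::printf("%.17g\n", -(double)num / (double)den);  // prints -1
    }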
billseymour says
Andrew,
I didn’t try the negative values; but compiled for Windows with double and long double both having the same representation (53-bit mantissas), both 0.9999999999999999 and nextafter(1.0,0.0) do indeed give me 4503599627370496/4503599627370497 like you said when I let the accuracy argument default to 0 so that I’m doing continued fractions. When I pass a NaN for the accuracy, which triggers the frexp() version, I get 9007199254740991/9007199254740992, the numerator and denominator each being one less than what you wrote. I’m guessing that that can be explained by using positive rather than negative values.
Using a more up-to-date GCC on Debian Linux with long double having 64-bit mantissas:
1. the continued fractions loop gives me the 9007199254740992/9007199254740993 that you want for 0.9999999999999999; but nextafter(1.0,0.0) gives me 9223372036854775808/9223372036854775809, an error of -5.42101086242752217e-20.
2. the code that uses frexp() gives me 9007199254740991/9007199254740992 for 0.9999999999999999, and 18446744073709551615/18446744073709551616 for the nextafter business.
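For reference, here is a stripped-down sketch of a continued-fraction conversion of the kind described, with plain double and long long standing in for the extended-precision and big-integer types (so it reproduces the 53-bit results above, e.g. 4503599627370496/4503599627370497 for nextafter(1.0,0.0)):

    #include <cmath>
    #include <cstdio>

    // Expand x as a continued fraction [a0; a1, a2, ...], accumulate the
    // convergents p/q, and stop when p/q rounds back to x exactly.
    void to_ratio(double x)
    {
        long long p0 = 0, q0 = 1, p1 = 1, q1 = 0;    // successive convergents
        double r = x;
        for (int i = 0; i < 64; ++i) {
            double a = std::floor(r);
            long long p2 = (long long)a * p1 + p0;   // next convergent p/q
            long long q2 = (long long)a * q1 + q0;
            p0 = p1; q0 = q1; p1 = p2; q1 = q2;
            if (q1 != 0 && (double)p1 / (double)q1 == x) break;  // exact
            double frac = r - a;
            if (frac == 0.0) break;                  // terms exhausted
            r = 1.0 / frac;
        }
        std::printf("%lld/%lld\n", p1, q1);
    }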
Peter B says
I found a reason to have floating point numbers with two exponents: conversion of numbers with large positive or large negative powers of 10. This method would be useful for powers of 10 like ±5000. With a 96 or 128-bit mantissa, it would have excellent fidelity with a resulting 64-bit mantissa. The general method can be extended indefinitely.
The method adds a power of 2 to the existing power of 10. It’s easy to multiply such numbers.
The internal form I use has the binary point followed by the mantissa, upshifted until its most significant bit is all the way to the left; i.e., the mantissa is greater than or equal to 0.5 and less than 1.0. The mantissa is followed by a 16-bit power of ten and a 16-bit power of two.
Begin by building the original mantissa in binary with its scale recorded as a power of 10; then normalize the mantissa by shifting it left and adjusting the power of 2 until its most significant bit is set.
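In code, the format might look something like this (the field layout is as described above; the 64-bit mantissa is illustration only, since 96 or 128 bits are suggested for good fidelity, and unsigned __int128 is a GCC/Clang extension):

    #include <cstdint>

    // The value represented is mantissa * 10^pow10 * 2^pow2, with the
    // mantissa a binary fraction normalized into [0.5, 1.0), i.e. its
    // most significant bit set.
    struct two_exp {
        std::uint64_t mantissa;  // binary fraction: top bit = 1/2, next = 1/4, ...
        std::int16_t  pow10;     // power of ten
        std::int16_t  pow2;      // power of two

        // shift the mantissa left until its top bit is set, decrementing
        // the power of two to compensate
        void normalize() {
            if (mantissa == 0) return;
            while (!(mantissa & 0x8000000000000000ull)) {
                mantissa <<= 1;
                --pow2;
            }
        }
    };

    // Multiplying two such numbers is easy, as described: multiply the
    // mantissas (keeping the high 64 bits of the 128-bit product), add
    // both pairs of exponents, and renormalize.
    two_exp mul(two_exp a, two_exp b) {
        unsigned __int128 p = (unsigned __int128)a.mantissa * b.mantissa;
        two_exp r{ (std::uint64_t)(p >> 64),
                   (std::int16_t)(a.pow10 + b.pow10),
                   (std::int16_t)(a.pow2 + b.pow2) };
        r.normalize();
        return r;
    }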
The trick is to multiply the internal value by 1.0 using 1.0 with different powers of 10. For that, you need to make the magic 1.0 values.
Start by remembering 1/16 = 0.0625. Then…
1.0 = 0.625 e-1 * 2^4
Now square 1.0 by squaring the mantissa and doubling both exponents:
1.0 = 0.390625 e-2 * 2^8, then normalize:
1.0 = 0.78125 e-2 * 2^7
Continue this process, normalizing as needed, until you have e-4096. These 13 constants can zero any power of ten as high as e+8191.
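The table-building loop, using the two_exp sketch above (with only 64 mantissa bits the later constants lose a little accuracy, which is why a 96- or 128-bit mantissa is suggested):

    // ones[k] is 1.0 carrying a power of ten of -(2^k), for k = 0..12,
    // i.e. e-1, e-2, e-4, ..., e-4096.
    two_exp ones[13];

    void build_table()
    {
        // 1.0 = 0.625 e-1 * 2^4; 0.625 is binary 0.101
        ones[0] = two_exp{ 0xA000000000000000ull, -1, 4 };
        for (int k = 1; k < 13; ++k)
            ones[k] = mul(ones[k - 1], ones[k - 1]);   // square; mul renormalizes
    }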
For negative powers of ten start with
1.0 = 0.8 e+1 * 2^-3 then square
1.0 = 0.64 e+2 * 2^-6 and normalize as needed
Continue until you have e+4096.
These 13 constants can zero any power of 10 down to e-8191.
For more precision use a longer mantissa; for absurd powers of ten make a longer list of constants.
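Putting it together, a sketch of zeroing a positive power of ten with the first table (neg_ones, the analogous table for negative powers built from 1.0 = 0.8 e+1 * 2^-3, is left hypothetical here):

    // Multiply by the ones[k] selected by the set bits of the decimal
    // exponent; every multiply is by 1.0, so the value is unchanged and
    // only the exponents move.
    void zero_power_of_ten(two_exp& x)
    {
        int p = x.pow10;                 // assume 0 <= p <= 8191 for this sketch
        for (int k = 0; k < 13; ++k)
            if (p & (1 << k))
                x = mul(x, ones[k]);     // subtracts 2^k from x.pow10
        // now x.pow10 == 0 and the scale is carried by x.pow2 alone
    }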