#include <boost/multiprecision/cpp_double_fp_backend.hpp>

namespace boost { namespace multiprecision {

template <class FloatingPointType>
class cpp_double_fp_backend;

typedef number<cpp_double_fp_backend<float>, et_off>                cpp_double_float;
typedef number<cpp_double_fp_backend<double>, et_off>               cpp_double_double;
typedef number<cpp_double_fp_backend<long double>, et_off>          cpp_double_long_double;
typedef number<cpp_double_fp_backend<boost::float128_type>, et_off> cpp_double_float128; // Only when boost::float128_type is available

} } // namespaces
The cpp_double_fp_backend
back-end is the sum of two IEEE floating-point numbers combined to create
a type having a composite width roughly twice that of one of its parts.
The cpp_double_fp_backend
back-end is used in conjunction with number
and acts as an entirely header-only C++ floating-point number type.
The implementation relies on double-word arithmetic, a technique
used to represent a real number as the sum of two floating-point numbers.
Another commonly used name for this technique is double-double arithmetic.
The cpp_double_fp_backend
types have fixed width and do not allocate. The type cpp_double_double,
for instance, is composed of two built-in double
components. On most common systems, built-in double
is a double-precision IEEE floating-point number. This results in a cpp_double_double that has 106 binary
digits and approximately 32 decimal digits of precision.
The exponent ranges of the types are slightly limited (on the negative
side) compared to those of the composing type. Consider again the type
cpp_double_double, which
is built from two IEEE double-precision floating-point
numbers. On common systems, this type has a maximum decimal exponent of
308 (the same as a single double-precision floating-point number). The
minimum decimal exponent, however, is about -291, which is less range
than the -307 of a standalone double. The reason for the limitation is that
the composite lower limb has a smaller value than its upper limb. The composite
type would easily underflow or become subnormal if the upper limb had its
usual minimum value.
There is full standard-library and std::numeric_limits
support available for these types.
Note that the availability of cpp_double_float128
depends on the availability of boost::float128_type,
which can be queried at compile-time via the configuration macro BOOST_HAS_FLOAT128. At the moment this is available
predominantly with GCC compilers in GNU-standard mode and
(with GCC 14 and later) also in strict ANSI mode.
Run-time performance is a top-level requirement for the cpp_double_fp_backend
types. The types still do, however, support infinities, NaNs and (of course)
zeros. Signed negative zero, however, is not supported (in favor of efficiency).
All zeros are treated as positive.
The cpp_double_fp_backend
types interoperate with Boost.Math and Boost.Math.Constants. This offers
the wealth of Boost-related mathematical tools instantiated with the cpp_double_fp_backend types.
Things you should know when using the cpp_double_fp_backend
types:
* -ffast-math cannot be used (use either
the default or explicitly set -fno-fast-math). On MSVC, /fp:fast cannot be used and /fp:precise
(the default) is mandatory. This is because the algorithms,
in particular those for addition, subtraction, multiplication, division
and square root, rely on precise floating-point rounding.
* There are std::numeric_limits specializations for
these types.
* Parts of the cpp_double_fp_backend
implementation are constexpr.
Future evolution is anticipated to make this library entirely constexpr.
* Some operations internally use an intermediate cpp_bin_float value (which is a bit
awkward and may be eliminated in future refinements).
The cpp_double_fp_backend
back-end has been inspired by original works and types. These include the
historical doubledouble
and more, as listed below.
* The doubledouble library, 1998.
* quad_float in the NTL number-theory library, https://libntl.org.
* The cpp_double_fp_backend draft originally created by Fahad Syed in the Boost GSoC2021 multiprecision
project; its source code can be found at https://github.com/BoostGSoC21/multiprecision.