
Failure to capture full precision for low (maybe subnormal) values #145

Closed
ckormanyos opened this issue Jan 29, 2023 · 10 comments

Comments

@ckormanyos
Member

ckormanyos commented Jan 29, 2023

While investigating #144, I found that the problem is actually the string read of a small number.

The reduced test case finds that cpp_double_double loses precision when reading in the following control value.

$$8.9699910066184047911669732896043282222388381719332458957375 \times 10^{-292}$$

Test code follows below.

#include <iomanip>
#include <iostream>

#include <boost/multiprecision/cpp_double_fp.hpp>
#include <boost/multiprecision/cpp_dec_float.hpp>

int main()
{
  using boost::multiprecision::cpp_double_double;

  using float_type = typename cpp_double_double::backend_type::float_type;

  using dec_float_type = boost::multiprecision::number<boost::multiprecision::cpp_dec_float<31>, boost::multiprecision::et_off>;

  const auto ctrl1 = cpp_double_double("0.89699910066184047911669732896043282222388381719332458957375089781363329036291716592940465828282353321e-291");
  const auto ctrl2 = dec_float_type   ("0.89699910066184047911669732896043282222388381719332458957375089781363329036291716592940465828282353321e-291");

  const auto flg = std::cout.flags();

  std::cout << std::setprecision(static_cast<std::streamsize>(std::numeric_limits<cpp_double_double>::digits10)) << ctrl1 << std::endl;
  std::cout << std::setprecision(static_cast<std::streamsize>(std::numeric_limits<dec_float_type   >::digits10)) << ctrl2 << std::endl;

  std::cout.flags(flg);
}
@ckormanyos
Member Author

ckormanyos commented Jan 29, 2023

I think we might be able to fix this by scaling small numbers up (via ldexp() or similar) at the time they are read, and then scaling the result back down once the string conversion is finished.
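A minimal sketch of that idea, using plain double rather than cpp_double_double (the function name and the shift of 512 are illustrative, not the actual rd_string() code): scale the value up by a power of two during conversion so every intermediate stays well inside the normal range, then apply a single ldexp() scale-down at the end. Since ldexp() by a power of two is exact, the scaling itself introduces no extra rounding.

```cpp
#include <cassert>
#include <cmath>

// Hypothetical sketch: read (mantissa * 10^dec_exp) without letting any
// intermediate result drift toward the subnormal range.
double read_scaled(double mantissa, int dec_exp)
{
    const int shift = 512;                        // 2^512 keeps intermediates normal
    double scaled = std::ldexp(mantissa, shift);  // exact scale-up first
    scaled *= std::pow(10.0, dec_exp);            // apply decimal exponent safely
    return std::ldexp(scaled, -shift);            // one exact scale-down at the end
}
```

For example, read_scaled(8.97, -292) performs its intermediate multiplication near 1e-137 instead of near 1e-292, far from the subnormal boundary.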

@cosurgi
Collaborator

cosurgi commented Jan 29, 2023

Thank you for investigating this. Indeed it looks like subnormal numbers.

@sinandredemption
Collaborator

Off the top of my head question (I could be entirely silly here): is -292 within the supported exponent range? I think -308 + 106/3.3 = -273 (±1) is the minimum supported exponent. Below this, everything falls into subnormal (for our case: zero) territory.

@ckormanyos

@ckormanyos
Member Author

ckormanyos commented Jan 29, 2023

Off the top of my head question (I could be entirely silly here): is $-292$ within the supported exponent range? I think $-308 + 106/3.3 \approx -273 \pm 1$ is the minimum supported exponent.

You are reading my thoughts. I'm not sure about this one. I think we can hold numbers down to our current understanding of the minimum value, but we need to be careful when reading them in.

I'm also not entirely sure here.

Let's take the minimum value of double, say std::numeric_limits<double>::min(). Suppose we set the low part of a cpp_double_double to that value. Then I could do something like left-shifting that (ldexp() for floating-point types) up to the high part. And we could still fit another minimum value into the low part.

So it makes sense that the minimum value is approximately of the order we have. I'm not exactly sure if this is correct, but I think it might be. Then we need to ensure that we have the right value set for cpp_double_fp<>::my_value_min().
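The reasoning above can be checked with plain double (a sketch of the back-of-the-envelope, not the actual my_value_min() implementation): put numeric_limits<double>::min() (2^-1022) in the low part and shift it up by the 53-bit width of the high limb. The result, 2^-969, is roughly 2.0e-292, which is exactly the order of the value this issue fails on.

```cpp
#include <cmath>
#include <limits>

// Candidate minimum for a double-double: the smallest high part whose
// matching low part can still be a normal double.
double double_double_min_candidate()
{
    const double lo_min    = std::numeric_limits<double>::min();     // 2^-1022
    const int    limb_bits = std::numeric_limits<double>::digits;    // 53
    return std::ldexp(lo_min, limb_bits);                            // 2^-969, about 2.0e-292
}
```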

Thoughts?

Cc: @sinandredemption and @cosurgi

@sinandredemption
Collaborator

Suppose we set the low part of a cpp_double_double to that value. Then I could do something like left-shifting that (ldexp() for floating-point types) up to the high part. And we could still fit another minimum value into the low part.

You're absolutely right.
I think that's what is currently implemented in my_value_min(). The only problem is that we don't have enough checks (during arithmetic operations, for example) for underflow.
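One shape such a check could take (purely a sketch: the hi/lo struct, the flush-to-zero policy, and the names are illustrative, not the actual cpp_double_fp implementation): after an arithmetic step, if the high part has dropped below the point where a normal low part can still exist, the pair can no longer carry its full 106 bits, so flush rather than silently lose precision.

```cpp
#include <cmath>
#include <limits>

// Illustrative double-double pair.
struct dd { double hi; double lo; };

// Hypothetical underflow guard applied after an arithmetic operation.
dd check_underflow(dd x)
{
    // Smallest |hi| for which the low part can still be a normal double:
    // 2^-1022 shifted up by the 53-bit limb width, i.e. 2^-969.
    const double dd_min = std::ldexp(std::numeric_limits<double>::min(),
                                     std::numeric_limits<double>::digits);
    if (std::fabs(x.hi) < dd_min)
    {
        return dd { 0.0, 0.0 };  // flush to zero instead of degrading silently
    }
    return x;
}
```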

@ckormanyos
Member Author

ckormanyos commented Jan 29, 2023

You're absolutely right.

I think that's what is currently implemented in my_value_min().

Indeed. I had also yesterday changed the value of cpp_double_fp<>::my_value_min() to reflect this. Have a look and see if you agree?

The only problem is we don't have enough checks (during arithmetic operations, for example) for underflows.

Yes. We have a few edge cases that we still need to catch in the operations. We have already caught quite a few of them, but I fear we still need to find the remaining hidden ones that we missed in the first obvious passes.

In addition, I will also very soon PR some slight modifications to the rd_string() method which should allow us to properly read in those small-valued string representations that are near, but slightly above the minimum value.

I will get these new changes in ASAP.

Thanks Fahad! (@sinandredemption)

@ckormanyos
Member Author

CI is cycling now in #146

@sinandredemption
Collaborator

Yes. We have a few edge cases that we still need to catch in the operations.

This sounds like a job #136 could handle. I will work on this and report back to you.

@ckormanyos
Member Author

ckormanyos commented Jan 29, 2023

have a few edge cases

a job #136 could handle

Indeed Fahad (@sinandredemption). I believe that is a good place to hammer the limits and edge cases. There is a bit of work forthcoming from my side on rd_string(): I need to handle numbers very close to, but barely above, the minimum value.

Other than that, I think tightening up our detection of edge cases will be helpful for the failing tests.

The failing specfun tests in the latest CI are now down to 3 files:

  • test_ibeta_inv_1.cpp
  • test_carlson_2.cpp
  • test_bessel_k.cpp

The failures in BesselK are insignificant; they are all related to cyl_bessel_k(-1000, 700).

I'll look into the other two files. I think the ibetac_inv() failures are interesting since they involve a failure of a root-finding iteration to converge. This does not seem like a tolerance problem (well, maybe it is), but it could also be an algebraic problem.

I will investigate and isolate a root (and its brackets) that does not converge.

Keep going on tightening up the edge cases in general, Fahad, and we will keep moving forward.

kind regards, Chris

Cc: @cosurgi

@ckormanyos
Member Author

Fixed by #146
