Is this valid? Should it be another algorithm for arb_poly_eval? #413
Comments
Rough code:
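(The snippet itself did not survive extraction. Below is a minimal sketch of what such rough code could look like, assuming Arb's C API (`arb_poly_taylor_shift`, `arb_poly_evaluate`, `arb_get_mid_arb`) and following the approach described in the issue: Taylor-shift to the exact midpoint, then evaluate at the centered ball. The function name is hypothetical.)

```c
#include "arb_poly.h"

/* Hypothetical routine implementing the proposed strategy: Taylor-shift f to
   the exact midpoint of x, then evaluate the shifted polynomial at the
   centered ball 0 +/- rad(x). */
void arb_poly_evaluate_taylor_shift(arb_t res, const arb_poly_t f,
                                    const arb_t x, slong prec)
{
    arb_poly_t g;
    arb_t mid, t;

    arb_poly_init(g);
    arb_init(mid);
    arb_init(t);

    arb_get_mid_arb(mid, x);              /* mid = midpoint of x, radius 0 */
    arb_poly_taylor_shift(g, f, mid, prec);
    arb_sub(t, x, mid, prec);             /* t = 0 +/- rad(x) */
    arb_poly_evaluate(res, g, t, prec);

    arb_poly_clear(g);
    arb_clear(mid);
    arb_clear(t);
}
```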
In some circumstances, you can maybe also figure out monotonicity conditions to determine optimal upper/lower bounds.
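(One way to read that suggestion, as a rough sketch assuming Arb's standard functions (`arb_poly_derivative`, `arb_contains_zero`, `arb_union`): if the enclosure of f' on the input ball excludes zero, f is monotonic there, so evaluating at the two near-exact endpoints gives bounds that are optimal up to the working precision. The helper name and return convention are made up for illustration.)

```c
#include "arb_poly.h"

/* Hypothetical helper: if f is provably monotonic on the ball x (the
   enclosure of f'(x) excludes zero), evaluate f at the two endpoints and
   return their hull. Returns 1 on success, 0 if monotonicity could not be
   established. */
int evaluate_if_monotonic(arb_t res, const arb_poly_t f, const arb_t x, slong prec)
{
    arb_poly_t fprime;
    arb_t d, a, b, ya, yb;
    int ok;

    arb_poly_init(fprime);
    arb_init(d); arb_init(a); arb_init(b); arb_init(ya); arb_init(yb);

    arb_poly_derivative(fprime, f, prec);
    arb_poly_evaluate(d, fprime, x, prec);    /* enclosure of f' on x */

    ok = !arb_contains_zero(d);
    if (ok)
    {
        /* endpoints of x as exact (zero-radius) balls, rounded outward */
        arb_get_lbound_arf(arb_midref(a), x, prec);
        mag_zero(arb_radref(a));
        arb_get_ubound_arf(arb_midref(b), x, prec);
        mag_zero(arb_radref(b));

        arb_poly_evaluate(ya, f, a, prec);
        arb_poly_evaluate(yb, f, b, prec);
        arb_union(res, ya, yb, prec);         /* hull of the endpoint values */
    }

    arb_poly_clear(fprime);
    arb_clear(d); arb_clear(a); arb_clear(b); arb_clear(ya); arb_clear(yb);
    return ok;
}
```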
Brilliant. Yes, a Taylor enclosure is just what's needed. I think a linear enclosure gives the results that I got above; that's easy to do and fits within the calling sequence of the function proposed above, where the user allocates and initializes the output. I expect (optimistically) that I'll be able to implement this in the next week or two.
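(For reference, a linear (first-order Taylor) enclosure can be written as a mean value form: f(x) ⊆ f(m) + f'(x)·(x − m), with m the midpoint of the ball. A minimal sketch assuming Arb's C API; the name and calling sequence are illustrative only, not the interface discussed above.)

```c
#include "arb_poly.h"

/* Hypothetical helper: first-order (linear) Taylor enclosure of f on the
   ball x, via the mean value form f(m) + f'(x) * (x - m). */
void evaluate_linear_enclosure(arb_t res, const arb_poly_t f, const arb_t x, slong prec)
{
    arb_poly_t fprime;
    arb_t m, t, fm, dfx;

    arb_poly_init(fprime);
    arb_init(m); arb_init(t); arb_init(fm); arb_init(dfx);

    arb_get_mid_arb(m, x);                    /* m = midpoint of x, radius 0 */
    arb_sub(t, x, m, prec);                   /* t = x - m = 0 +/- rad(x) */

    arb_poly_evaluate(fm, f, m, prec);        /* f(m) */
    arb_poly_derivative(fprime, f, prec);
    arb_poly_evaluate(dfx, fprime, x, prec);  /* f'(x) over the whole ball */

    arb_addmul(fm, dfx, t, prec);             /* f(m) + f'(x) * (x - m) */
    arb_set(res, fm);

    arb_poly_clear(fprime);
    arb_clear(m); arb_clear(t); arb_clear(fm); arb_clear(dfx);
}
```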
Wait -- in my mind I was going to make
For the time being I would just use

I think two separate issues here are how to evaluate an arb_poly once, and how to precondition/precompile a polynomial or power series for fast and accurate repeated evaluations on a fixed interval. You'd ideally want some kind of dedicated, opaque data structure for the latter. Basically, you'd want to truncate the coefficients to the optimal precision individually, pack them into a single mpn array, then use Horner's rule in mpn arithmetic for the evaluation. You could further split such a representation into a multiprecision body and a

In any case, the interface you proposed is a good start for doing something a bit simpler (and certainly good enough for low-degree approximations). I would just swap

I disagree that "we expect this to be used for small, constant
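(As a rough illustration of the "precondition once, evaluate many times" idea, and stopping well short of the per-coefficient truncation and mpn packing described above, one could imagine an opaque structure along these lines. All names here are hypothetical and the representation is deliberately naive: it just stores a Taylor-shifted copy and the shift point so that each later evaluation only pays for a single polynomial evaluation at a small centered ball.)

```c
#include "arb_poly.h"

/* Hypothetical opaque structure for repeated evaluation on a fixed interval. */
typedef struct
{
    arb_poly_t shifted;  /* f Taylor-shifted to the stored midpoint */
    arb_t mid;           /* the exact shift point (midpoint of the interval) */
    slong prec;          /* precision the structure was built for */
} precond_poly_t;

/* Build the structure for evaluations near the ball `interval`. */
void precond_poly_init_set(precond_poly_t *pp, const arb_poly_t f,
                           const arb_t interval, slong prec)
{
    arb_poly_init(pp->shifted);
    arb_init(pp->mid);
    arb_get_mid_arb(pp->mid, interval);
    arb_poly_taylor_shift(pp->shifted, f, pp->mid, prec);
    pp->prec = prec;
}

/* Evaluate at a ball x contained in (or close to) the original interval. */
void precond_poly_evaluate(arb_t res, const precond_poly_t *pp, const arb_t x)
{
    arb_t t;
    arb_init(t);
    arb_sub(t, x, pp->mid, pp->prec);             /* recenter the argument */
    arb_poly_evaluate(res, pp->shifted, t, pp->prec);
    arb_clear(t);
}

void precond_poly_clear(precond_poly_t *pp)
{
    arb_poly_clear(pp->shifted);
    arb_clear(pp->mid);
}
```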
Will do. Thanks!
Hi @fredrik-johansson - just wondering if you had any further thoughts on this. There's a pull request at #414. If it's just that you currently don't have time for this, that's fine, too - then we'll merge something similar to this into our own wrapper library. Thanks!
Original issue description:

It looks like this will sometimes give a much tighter box than the other algorithms: when evaluating a polynomial at `x = x0 +/- delta`, first do a Taylor shift by `x0 +/- 0`, then evaluate at `0 +/- delta`. For example, for `49*x^4 - 188*x^2 + 72*x + 292` (radius 0 for all coefficients) at `x = -1.5 +/- 0.1`, I see:

- `9.0625 +/- 70.1889` for `_arb_poly_evaluate_horner`;
- `9.0625 +/- 138.544` for `_arb_poly_evaluate_rectangular`;
- `9.0625 +/- 70.1889` for `_arb_poly_taylor_shift_horner` then taking the constant coefficient;
- `9.0625 +/- 138.544` for `_arb_poly_taylor_shift_divconquer` then taking the constant coefficient;
- `9.0625 +/- 138.544` for `_arb_poly_taylor_shift_convolution` then taking the constant coefficient;
- `9.0625 +/- 7.5839` for this new approach.

If instead we take `x = -1.5 +/- 1e-6`, we see:

- `9.0625 +/- 6.36001e-4` for the `*_horner` options above;
- `9.0625 +/- 1.2975e-3` for the other three options above;
- `9.0625 +/- 2.55005e-5` for the new approach.
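(For completeness, a small standalone program, assuming Arb's C API, that compares the direct evaluators against the shift-then-evaluate approach on the example polynomial above; it inlines the same Taylor-shift steps sketched earlier in the thread. The printed radii depend on the working precision and are not guaranteed to match the figures quoted above exactly.)

```c
#include <stdio.h>
#include "arb_poly.h"

int main(void)
{
    slong prec = 64;
    arb_poly_t f, g;
    arb_t x, mid, t, y;

    arb_poly_init(f);
    arb_poly_init(g);
    arb_init(x); arb_init(mid); arb_init(t); arb_init(y);

    /* f = 49*x^4 - 188*x^2 + 72*x + 292, all coefficients exact */
    arb_poly_set_coeff_si(f, 4, 49);
    arb_poly_set_coeff_si(f, 2, -188);
    arb_poly_set_coeff_si(f, 1, 72);
    arb_poly_set_coeff_si(f, 0, 292);

    /* x = -1.5 +/- 0.1 */
    arb_set_d(x, -1.5);
    mag_set_d(arb_radref(x), 0.1);

    arb_poly_evaluate_horner(y, f, x, prec);
    printf("horner:        "); arb_printn(y, 10, 0); printf("\n");

    arb_poly_evaluate_rectangular(y, f, x, prec);
    printf("rectangular:   "); arb_printn(y, 10, 0); printf("\n");

    /* shift to the exact midpoint, then evaluate at 0 +/- 0.1 */
    arb_get_mid_arb(mid, x);
    arb_poly_taylor_shift(g, f, mid, prec);
    arb_sub(t, x, mid, prec);
    arb_poly_evaluate(y, g, t, prec);
    printf("shift + eval:  "); arb_printn(y, 10, 0); printf("\n");

    arb_poly_clear(f); arb_poly_clear(g);
    arb_clear(x); arb_clear(mid); arb_clear(t); arb_clear(y);
    flint_cleanup();
    return 0;
}
```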