How does MicroPython avoid double-precision issues?
Posted: Sun Nov 18, 2018 5:25 pm
After implementing doubles, I wanted to test double equality. To my surprise, 0.1 + 0.2 == 0.3 returns True in MicroPython, even though Python 3 on my PC returns False.
I also tried 0.1 + 0.2 - 0.3, which returns 5.551115123125783e-17 on my PC but 0.0 on MicroPython.
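For reference, here are the exact expressions I'm comparing (REPL output from my setup):

    # Python 3 on my PC (64-bit doubles)
    >>> 0.1 + 0.2 == 0.3
    False
    >>> 0.1 + 0.2 - 0.3
    5.551115123125783e-17

    # MicroPython REPL
    >>> 0.1 + 0.2 == 0.3
    True
    >>> 0.1 + 0.2 - 0.3
    0.0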
I thought avoiding these precision issues was only possible with BCD, but from what I've seen, MicroPython floats are plain old IEEE 754 floats.
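For what it's worth, here is a quick way to check which float width a build actually ends up with (float_width is just a throwaway helper I wrote, not anything from the MicroPython API):

    # Rough probe of the float width: 1e-8 is below half the single-precision
    # machine epsilon (~6e-8), so it is lost when added to 1.0 with 32-bit
    # floats but survives with 64-bit doubles.
    def float_width():
        return 64 if 1.0 + 1e-8 != 1.0 else 32

    print(float_width())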
- How does MicroPython avoid these kinds of double-precision issues?
- Is there an operation for which a BCD system (as used by TI/Casio calculators) would return a different answer than MicroPython?