Is floating point arithmetic stable?

Steve:

I know that floating-point numbers have limited precision and that the digits beyond that precision are not reliable.

But what if the equation used to calculate the number is the same? Can I assume the outcome would be the same too?

For example, suppose we have two floating-point numbers x and y. Can we assume that the result of x / y on machine 1 is exactly the same as the result on machine 2, i.e. that an == comparison would return true?
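
A minimal sketch of the kind of comparison being asked about (the input values are placeholders; printing the raw bit pattern is just one way to compare the results produced on two machines):

    using System;

    class DeterminismCheck
    {
        static void Main()
        {
            // Placeholder inputs; imagine the same values being used on both machines.
            double x = 0.1;
            double y = 0.3;

            double result = x / y;

            // Printing the exact bit pattern makes it easy to compare the
            // output of machine 1 with the output of machine 2.
            Console.WriteLine(BitConverter.DoubleToInt64Bits(result));
        }
    }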

Jon Skeet:

But what if the equation used to calculate the number is the same? Can I assume the outcome would be the same too?

No, not necessarily.

In particular, in some situations the JIT is permitted to use a more accurate intermediate representation - e.g. 80 bits when your original data is 64 bits - whereas in other situations it won't. That can result in seeing different results when any of the following is true:

  • You have slightly different code, e.g. using a local variable instead of a field, which can change whether the value is stored in a register or not. (That's one relatively obvious example; there are other much more subtle ones which can affect things, such as the existence of a try block in the method...)
  • You are executing on a different processor (I used to observe differences between AMD and Intel; there can be differences between different CPUs from the same manufacturer too)
  • You are executing with different optimization levels (e.g. under a debugger or not)
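
As a rough illustration of the first point, here is a sketch only; the actual outcome depends on the runtime, JIT and CPU, and it may well print True everywhere. Storing into a field forces the value to be rounded to a 64-bit double, whereas the expression on the left of the comparison may be evaluated at higher precision:

    using System;

    class IntermediatePrecisionDemo
    {
        // Storing into a field forces the value to be rounded to a 64-bit double.
        static double stored;

        static void Main()
        {
            double x = 1.0 / 3.0;
            double y = 3.0;

            stored = x * y;

            // On a runtime that evaluates x * y in an extended-precision register
            // (e.g. the legacy x86 JIT using the x87 FPU), the left-hand side may
            // carry extra precision and the comparison can be false; on others it
            // will be true. Neither outcome is guaranteed.
            Console.WriteLine(x * y == stored);
        }
    }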

From the C# 5 specification section 4.1.6:

Floating-point operations may be performed with higher precision than the result type of the operation. For example, some hardware architectures support an "extended" or "long double" floating-point type with greater range and precision than the double type, and implicitly perform all floating-point operations using this higher precision type. Only at excessive cost in performance can such hardware architectures be made to perform floating-point operations with less precision, and rather than require an implementation to forfeit both performance and precision, C# allows a higher precision type to be used for all floating-point operations. Other than delivering more precise results, this rarely has any measurable effects. However, in expressions of the form x * y / z, where the multiplication produces a result that is outside the double range, but the subsequent division brings the temporary result back into the double range, the fact that the expression is evaluated in a higher range format may cause a finite result to be produced instead of an infinity.
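
A minimal sketch of the x * y / z case described above; whether it prints a finite value or Infinity depends on whether the intermediate product is evaluated in a higher-precision format:

    using System;

    class OverflowDemo
    {
        static void Main()
        {
            double x = 1e308;   // near the top of the double range
            double y = 100.0;
            double z = 100.0;

            // The intermediate product x * y is outside the double range. If the
            // runtime keeps it in a higher-precision format, the division brings
            // it back into range and a finite value is printed; if the product is
            // rounded to double first, it becomes Infinity and stays Infinity.
            Console.WriteLine(x * y / z);
        }
    }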
