This operation returns a 0:
string value = "0.01";
float convertedValue = float.Parse(value);
return (int)(convertedValue * 100.0f);
But this operation returns a 1:
string value = "0.01";
float convertedValue = float.Parse(value) * 100.0f;
return (int)(convertedValue);
Since convertedValue is a float, and the multiplication by 100.0f is inside the parentheses, shouldn't it still be treated as a float operation in both cases?
The difference between the two lies in the way the compiler optimizes floating point operations. Let me explain.
string value = "0.01";
float convertedValue = float.Parse(value);
return (int)(convertedValue * 100.0f);
In this example, the value is parsed into an 80-bit floating point number for use in the inner floating point dungeons of the computer. Then this is converted to a 32-bit float for storage in the convertedValue variable. This causes the value to be rounded to, seemingly, a number slightly less than 0.01. Then it is converted back to an 80-bit float and multiplied by 100, increasing the rounding error 100-fold. Then it is converted to a 32-bit int. This causes the float to be truncated, and since it is actually slightly less than 1, the int conversion returns 0.
string value = "0.01";
float convertedValue = float.Parse(value) * 100.0f;
return (int)(convertedValue);
In this example, the value is parsed into an 80-bit floating point number again. It is then multiplied by 100, before it is converted to a 32-bit float. This means that the rounding error is so small that when it is converted to a 32-bit float for storage in convertedValue, it rounds to exactly 1. Then when it is converted to an int, you get 1.
The main idea is that the computer uses high-precision floats for calculations, and then rounds the values whenever they are stored in a variable. The more assignments you have with floats, the more the rounding errors accumulate.
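If the goal in the original snippets is simply to turn a string like "0.01" into a whole number of hundredths, a more robust route (just a sketch, not part of the answer above; ToHundredths is a hypothetical helper) is to round to the nearest integer instead of truncating:

using System;
using System.Globalization;

class CentsSketch
{
    // Hypothetical helper: turn a string like "0.01" into a whole number of hundredths.
    static int ToHundredths(string value)
    {
        // Parse with the invariant culture so "." is always the decimal separator.
        float parsed = float.Parse(value, CultureInfo.InvariantCulture);

        // Math.Round works on double and rounds to the nearest integer, so a value
        // like 0.99999997... becomes 1 instead of being truncated down to 0.
        return (int)Math.Round(parsed * 100.0);
    }

    static void Main()
    {
        Console.WriteLine(ToHundredths("0.01")); // 1, regardless of where the rounding happened
    }
}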
Please read an introduction to floating point. This is a typical floating point problem. Binary floating point numbers can't represent 0.01 exactly.
0.01 * 100 is approximately 1.
If it happens to be rounded to 0.999... you get 0, and if it gets rounded to 1.000... you get 1. Which one of those you get is undefined.
The JIT compiler is not required to round the same way every time it encounters a similar expression (or even the same expression in different contexts). In particular, it can use higher precision whenever it wants to, but can downgrade to 32-bit floats if it thinks that's a good idea.
One interesting point is an explicit cast to float (even if you already have an expression of type float). This forces the JITter to reduce the precision to 32-bit floats at that point. The exact rounding is still undefined, though.
Since the rounding is undefined, it can vary between .net versions, debug/release builds, the presence of debuggers (and possibly the phase of the moon :P).
Storage locations for floating-point numbers (statics, array elements, and fields of classes) are of fixed size. The supported storage sizes are float32 and float64. Everywhere else (on the evaluation stack, as arguments, as return types, and as local variables) floating-point numbers are represented using an internal floating-point type.
When a floating-point value whose internal representation has greater range and/or precision than its nominal type is put in a storage location, it is automatically coerced to the type of the storage location. This can involve a loss of precision or the creation of an out-of-range value (NaN, +infinity, or -infinity). However, the value might be retained in the internal representation for future use, if it is reloaded from the storage location without having been modified. It is the responsibility of the compiler to ensure that the retained value is still valid at the time of a subsequent load, taking into account the effects of aliasing and other execution threads (see memory model (§12.6)). This freedom to carry extra precision is not permitted, however, following the execution of an explicit conversion (conv.r4 or conv.r8), at which time the internal representation must be exactly representable in the associated type.
Your specific problem can be solved by using Decimal, but similar problems with 3*(1/3f) won't be solved by this, since Decimal can't represent one third exactly either.
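A rough sketch of that suggestion; the second WriteLine uses decimal literals purely to illustrate the one-third limitation mentioned above:

using System;
using System.Globalization;

class DecimalSketch
{
    static void Main()
    {
        // decimal stores base-10 digits, so "0.01" is represented exactly
        // and multiplying by 100 gives exactly 1.
        decimal converted = decimal.Parse("0.01", CultureInfo.InvariantCulture);
        Console.WriteLine((int)(converted * 100m)); // 1

        // One third is not a finite decimal fraction either, so this prints
        // 0.9999999999999999999999999999 rather than 1.
        Console.WriteLine(3m * (1m / 3m));
    }
}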
In this line:
(int)(convertedValue * 100.0f)
The intermediate value is actually of higher precision, not simply a float. To obtain identical results to the second one, you'd have to do:
(int)((float)(convertedValue * 100.0f))
On the IL level, the difference looks like:
mul
conv.i4
versus your second version:
mul
stloc.3
ldloc.3
conv.i4
Note that the second one stores and reloads the value in a float32 variable, which forces it to be of float precision. (Note that, as per CodeInChaos' comment, this is not guaranteed by the spec.)
(For completeness, the explicit cast looks like this:)
mul
conv.r4
conv.i4
I know this issue and work with it all the time.
As CodeInChaos answered, floating point values are not stored in memory exactly as written.
But I want to add that there is a concrete reason for the different results here, not just that the JIT is free to use whatever precision it wants.
The reason is that in your first snippet you converted the string and saved the result to memory, so it is not stored as 0.01 but as something slightly off, such as 0.00999999977.
In your second snippet you perform the multiplication before the value is stored in memory, so you get the expected result without first taking the precision hit of storing a 32-bit float.
My question is not about floating precision. It is about why Equals() is different from ==.
I understand why .1f + .2f == .3f is false (while .1m + .2m == .3m is true).
I get that == is reference and .Equals() is value comparison. (Edit: I know there is more to this.)
But why is (.1f + .2f).Equals(.3f) true, while (.1d+.2d).Equals(.3d) is still false?
.1f + .2f == .3f; // false
(.1f + .2f).Equals(.3f); // true
(.1d + .2d).Equals(.3d); // false
The question is confusingly worded. Let's break it down into many smaller questions:
Why is it that one tenth plus two tenths does not always equal three tenths in floating point arithmetic?
Let me give you an analogy. Suppose we have a math system where all numbers are rounded off to exactly five decimal places. Suppose you say:
x = 1.00000 / 3.00000;
You would expect x to be 0.33333, right? Because that is the closest number in our system to the real answer. Now suppose you said
y = 2.00000 / 3.00000;
You'd expect y to be 0.66667, right? Because again, that is the closest number in our system to the real answer. 0.66666 is farther from two thirds than 0.66667 is.
Notice that in the first case we rounded down and in the second case we rounded up.
Now when we say
q = x + x + x + x;
r = y + x + x;
s = y + y;
what do we get? If we did exact arithmetic then each of these would obviously be four thirds and they would all be equal. But they are not equal. Even though 1.33333 is the closest number in our system to four thirds, only r has that value.
q is 1.33332 -- because x was a little bit small, every addition accumulated that error and the end result is quite a bit too small. Similarly, s is too big; it is 1.33334, because y was a little bit too big. r gets the right answer because the too-big-ness of y is cancelled out by the too-small-ness of x and the result ends up correct.
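This toy five-decimal-place system can be simulated directly; here is a small sketch using decimal (the R helper is mine, standing in for "round every result to five places"):

using System;

class FiveDigitSystem
{
    // Round every intermediate result to exactly five decimal places.
    static decimal R(decimal v) => Math.Round(v, 5);

    static void Main()
    {
        decimal x = R(1m / 3m); // 0.33333 -- rounded down
        decimal y = R(2m / 3m); // 0.66667 -- rounded up

        decimal q = R(R(R(x + x) + x) + x); // 1.33332 -- the too-small errors accumulate
        decimal r = R(R(y + x) + x);        // 1.33333 -- the errors cancel out
        decimal s = R(y + y);               // 1.33334 -- the too-big errors accumulate

        Console.WriteLine($"{q} {r} {s}");
    }
}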
Does the number of places of precision have an effect on the magnitude and direction of the error?
Yes; more precision makes the magnitude of the error smaller, but can change whether a calculation accrues a loss or a gain due to the error. For example:
b = 4.00000 / 7.00000;
b would be 0.57143, which rounds up from the true value of 0.571428571... Had we gone to eight places that would be 0.57142857, which has a far, far smaller error but in the opposite direction; it rounded down.
Because changing the precision can change whether an error is a gain or a loss in each individual calculation, this can change whether a given aggregate calculation's errors reinforce each other or cancel each other out. The net result is that sometimes a lower-precision computation is closer to the "true" result than a higher-precision computation because in the lower-precision computation you get lucky and the errors are in different directions.
We would expect that doing a calculation in higher precision always gives an answer closer to the true answer, but this argument shows otherwise. This explains why sometimes a computation in floats gives the "right" answer but a computation in doubles -- which have twice the precision -- gives the "wrong" answer, correct?
Yes, this is exactly what is happening in your examples, except that instead of five digits of decimal precision we have a certain number of digits of binary precision. Just as one-third cannot be accurately represented in five -- or any finite number -- of decimal digits, 0.1, 0.2 and 0.3 cannot be accurately represented in any finite number of binary digits. Some of those will be rounded up, some of them will be rounded down, and whether or not additions of them increase the error or cancel out the error depends on the specific details of how many binary digits are in each system. That is, changes in precision can change the answer for better or worse. Generally the higher the precision, the closer the answer is to the true answer, but not always.
How can I get accurate decimal arithmetic computations then, if float and double use binary digits?
If you require accurate decimal math then use the decimal type; it uses decimal fractions, not binary fractions. The price you pay is that it is considerably larger and slower. And of course as we've already seen, fractions like one third or four sevenths are not going to be represented accurately. Any fraction that is actually a decimal fraction however will be represented with zero error, up to about 29 significant digits.
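A quick sketch of the difference this makes for the literals in the question, plus the one-third caveat:

using System;

class DecimalVsBinary
{
    static void Main()
    {
        // 0.1, 0.2 and 0.3 are exact decimal fractions, so decimal gets this right...
        Console.WriteLine(0.1m + 0.2m == 0.3m); // True

        // ...while the binary floating point types cannot represent them exactly.
        Console.WriteLine(0.1d + 0.2d == 0.3d); // False

        // decimal still cannot represent one third exactly.
        Console.WriteLine(1m / 3m * 3m == 1m);  // False
    }
}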
OK, I accept that all floating point schemes introduce inaccuracies due to representation error, and that those inaccuracies can sometimes accumulate or cancel each other out based on the number of bits of precision used in the calculation. Do we at least have the guarantee that those inaccuracies will be consistent?
No, you have no such guarantee for floats or doubles. The compiler and the runtime are both permitted to perform floating point calculations in higher precision than is required by the specification. In particular, the compiler and the runtime are permitted to do single-precision (32 bit) arithmetic in 64 bit or 80 bit or 128 bit or whatever bitness greater than 32 they like.
The compiler and the runtime are permitted to do so however they feel like it at the time. They need not be consistent from machine to machine, from run to run, and so on. Since this can only make calculations more accurate this is not considered a bug. It's a feature. A feature that makes it incredibly difficult to write programs that behave predictably, but a feature nevertheless.
So that means that calculations performed at compile time, like the literals 0.1 + 0.2, can give different results than the same calculation performed at runtime with variables?
Yep.
What about comparing the results of 0.1 + 0.2 == 0.3 to (0.1 + 0.2).Equals(0.3)?
Since the first one is computed by the compiler and the second one is computed by the runtime, and I just said that they are permitted to arbitrarily use more precision than required by the specification at their whim, yes, those can give different results. Maybe one of them chooses to do the calculation only in 64 bit precision whereas the other picks 80 bit or 128 bit precision for part or all of the calculation and gets a different answer.
So hold up a minute here. You're saying not only that 0.1 + 0.2 == 0.3 can be different than (0.1 + 0.2).Equals(0.3). You're saying that 0.1 + 0.2 == 0.3 can be computed to be true or false entirely at the whim of the compiler. It could produce true on Tuesdays and false on Thursdays, it could produce true on one machine and false on another, it could produce both true and false if the expression appeared twice in the same program. This expression can have either value for any reason whatsoever; the compiler is permitted to be completely unreliable here.
Correct.
The way this is usually reported to the C# compiler team is that someone has some expression that produces true when they compile in debug and false when they compile in release mode. That's the most common situation in which this crops up because the debug and release code generation changes register allocation schemes. But the compiler is permitted to do anything it likes with this expression, so long as it chooses true or false. (It cannot, say, produce a compile-time error.)
This is craziness.
Correct.
Who should I blame for this mess?
Not me, that's for darn sure.
Intel decided to make a floating point math chip in which it was far, far more expensive to produce consistent results. Small choices in the compiler about which operations to enregister versus which operations to keep on the stack can add up to big differences in results.
How do I ensure consistent results?
Use the decimal type, as I said before. Or do all your math in integers.
I have to use doubles or floats; can I do anything to encourage consistent results?
Yes. If you store any result into any static field, any instance field of a class or array element of type float or double then it is guaranteed to be truncated back to 32 or 64 bit precision. (This guarantee is expressly not made for stores to locals or formal parameters.) Also if you do a runtime cast to (float) or (double) on an expression that is already of that type then the compiler will emit special code that forces the result to truncate as though it had been assigned to a field or array element. (Casts which execute at compile time -- that is, casts on constant expressions -- are not guaranteed to do so.)
To clarify that last point: does the C# language specification make those guarantees?
No. The runtime guarantees that stores into an array or field truncate. The C# specification does not guarantee that an identity cast truncates but the Microsoft implementation has regression tests that ensure that every new version of the compiler has this behaviour.
All the language spec has to say on the subject is that floating point operations may be performed in higher precision at the discretion of the implementation.
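A minimal sketch of those two truncation techniques (the field and variable names are mine; what the unforced expression would have produced can vary, as described above):

using System;

class ForceFloatPrecision
{
    // Stores to static fields, instance fields, and array elements of type float
    // are guaranteed by the runtime to truncate the value to 32-bit precision.
    static float stored;

    static void Main()
    {
        float f = 0.01f;

        stored = f * 100.0f;                 // truncated by the field store
        float viaCast = (float)(f * 100.0f); // identity cast emits conv.r4 and truncates

        // Both values have been forced down to 32-bit float precision at this point,
        // rather than riding along at whatever internal precision the JIT chose
        // for the multiplication.
        Console.WriteLine((int)stored);
        Console.WriteLine((int)viaCast);
    }
}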
When you write
double a = 0.1d;
double b = 0.2d;
double c = 0.3d;
Actually, these are not exactly 0.1, 0.2 and 0.3. From the IL code:
IL_0001: ldc.r8 0.10000000000000001
IL_000a: stloc.0
IL_000b: ldc.r8 0.20000000000000001
IL_0014: stloc.1
IL_0015: ldc.r8 0.29999999999999999
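You don't need to read the IL to see this; a small sketch using the "G17" round-trip format shows the value each double literal actually stores:

using System;

class ExactDoubleValues
{
    static void Main()
    {
        // "G17" prints enough digits to round-trip a double exactly.
        Console.WriteLine((0.1d).ToString("G17"));        // 0.10000000000000001
        Console.WriteLine((0.2d).ToString("G17"));        // 0.20000000000000001
        Console.WriteLine((0.3d).ToString("G17"));        // 0.29999999999999999
        Console.WriteLine((0.1d + 0.2d).ToString("G17")); // 0.30000000000000004
    }
}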
There are a lot of questions on SO about this issue (such as Difference between decimal, float and double in .NET? and Dealing with floating point errors in .NET), but I suggest you read the great article called:
What Every Computer Scientist Should Know About Floating-Point Arithmetic
Well, what leppie said is more to the point. The real behaviour here depends entirely on the compiler, the machine and the CPU.
Based on leppie's code, it prints True/False for me in Visual Studio 2010 and LINQPad, but when I tried it on ideone.com the result was True/True.
Check the DEMO.
Tip: when I wrote Console.WriteLine(.1f + .2f == .3f);, ReSharper warned me:
Comparison of floating points number with equality operator. Possible
loss of precision while rounding values.
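One common way to act on that warning (just a sketch; the NearlyEqual helper and the 1e-6f tolerance are illustrative choices of mine, not something from ReSharper or the answers here) is to compare with a tolerance instead of ==:

using System;

class ToleranceCompare
{
    // Treat two floats as equal if they differ by less than a chosen tolerance.
    static bool NearlyEqual(float a, float b, float tolerance = 1e-6f)
        => Math.Abs(a - b) <= tolerance;

    static void Main()
    {
        Console.WriteLine(.1f + .2f == .3f);            // may be False
        Console.WriteLine(NearlyEqual(.1f + .2f, .3f)); // True
    }
}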
As said in the comments, this is due to the compiler doing constant propagation and performing the calculation at a higher precision (I believe this is CPU dependent).
var f1 = .1f + .2f;
var f2 = .3f;
Console.WriteLine(f1 == f2); // prints true (same as Equals)
Console.WriteLine(.1f+.2f==.3f); // prints false (acts the same as double)
@Caramiriel also points out that .1f + .2f == .3f is emitted as false in the IL, hence the compiler did the calculation at compile time.
To confirm the constant folding/propagation compiler optimization
const float f1 = .1f + .2f;
const float f2 = .3f;
Console.WriteLine(f1 == f2); // prints false
FWIW, the following test passes:
float x = 0.1f + 0.2f;
float result = 0.3f;
bool isTrue = x.Equals(result);
bool isTrue2 = x == result;
Assert.IsTrue(isTrue);
Assert.IsTrue(isTrue2);
So the problem is actually with this line:
0.1f + 0.2f == 0.3f
which, as stated, is probably compiler/PC specific.
I think most people so far have been coming at this question from the wrong angle.
UPDATE:
Another curious test, I think:
const float f1 = .1f + .2f;
const float f2 = .3f;
Assert.AreEqual(f1, f2); // passes
Assert.IsTrue(f1 == f2); // doesn't pass
Single equality implementation:
public bool Equals(float obj)
{
return ((obj == this) || (IsNaN(obj) && IsNaN(this)));
}
== compares exact float values.
Equals is a boolean method that may return true or false. The specific implementation may vary.
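Given the implementation quoted above, the only value-level difference between the two for float operands is how NaN is handled; a quick sketch:

using System;

class NaNComparison
{
    static void Main()
    {
        // IEEE 754 says NaN is not equal to anything, including itself...
        Console.WriteLine(float.NaN == float.NaN);      // False

        // ...but Equals reports NaN equal to NaN, so floats still behave sensibly
        // as dictionary keys and in sorting.
        Console.WriteLine(float.NaN.Equals(float.NaN)); // True
    }
}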
I don't know why, but at the time of writing some of my results differ from yours. Note that the third and fourth tests happen to be contrary to the problem, so parts of your explanations might be wrong now.
using System;
class Test
{
static void Main()
{
float a = .1f + .2f;
float b = .3f;
Console.WriteLine(a == b); // true
Console.WriteLine(a.Equals(b)); // true
Console.WriteLine(.1f + .2f == .3f); // true
Console.WriteLine((1f + .2f).Equals(.3f)); //false
Console.WriteLine(.1d + .2d == .3d); //false
Console.WriteLine((1d + .2d).Equals(.3d)); //false
}
}
If I try and convert Decimal.MaxValue from Decimal to Single and back again, the conversion fails with an OverflowException:
Convert.ToDecimal(Convert.ToSingle(Decimal.MaxValue))
// '...' threw an exception of type 'System.OverflowException'
// base: {"Value was either too large or too small for a Decimal."}
What gives? Surely the value of Decimal.MaxValue as a Single should be a valid Decimal value?
I understand the differences between Single and Decimal and expect a loss of precision converting from Decimal to Single, but since Single.MaxValue is greater than Decimal.MaxValue it doesn't make sense that the "Value was either too large or too small for a Decimal". If you think that does make sense, please explain why in an answer.
Additionally,
Convert.ToSingle(Decimal.MaxValue)
// 7.92281625E+28
so there is no problem converting this number to a Single.
You're making an incorrect assumption:
Surely the value of Decimal.MaxValue as a Single should be a valid
Decimal value?
The value of Decimal.MaxValue is 79,228,162,514,264,337,593,543,950,335. A float can't represent anywhere near that level of precision, so when you convert the decimal value to a float you end up with an approximation.
In this case the float is represented internally as 2^96. This turns out to be a pretty good approximation -- 79,228,162,514,264,337,593,543,950,336 -- but take note of that last digit.
The value of the float is larger than Decimal.MaxValue, which is why your attempt to convert it back into a decimal fails.
(The .NET framework doesn't offer much help when trying to diagnose these kinds of problems. It'll always display a pre-rounded value for the float -- 7.92281625E+28 or similar -- and never the full approximation that it's using internally.)
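A sketch that makes that "last digit" visible, using System.Numerics.BigInteger to print the exact integer the float actually holds (the variable names are mine):

using System;
using System.Numerics;

class DecimalMaxToSingle
{
    static void Main()
    {
        float asFloat = Convert.ToSingle(decimal.MaxValue);

        // BigInteger can display the exact integer value, which the default
        // float formatting (7.92281625E+28) hides.
        Console.WriteLine(new BigInteger(decimal.MaxValue)); // 79228162514264337593543950335
        Console.WriteLine(new BigInteger(asFloat));          // 79228162514264337593543950336, i.e. 2^96

        // The float is one larger than Decimal.MaxValue, so converting back overflows.
        try
        {
            Console.WriteLine(Convert.ToDecimal(asFloat));
        }
        catch (OverflowException e)
        {
            Console.WriteLine(e.Message); // "Value was either too large or too small for a Decimal."
        }
    }
}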
When I run the following code, I get 0 printed on both lines:
Double a = 9.88131291682493E-324;
Double b = a*0.1D;
Console.WriteLine(b);
Console.WriteLine(BitConverter.DoubleToInt64Bits(b));
I would expect to get Double.NaN if an operation's result goes out of range. Instead I get 0. It seems that to detect when this happens I have to:
Before the operation, check whether either operand is zero.
After the operation, if neither operand was zero, check whether the result is zero. If it is not, let it run. If it is zero, assign Double.NaN to it instead, to indicate that it's not really a zero, just a result that can't be represented in this variable.
That's rather unwieldy. Is there a better way? What is Double.NaN designed for? I'm assuming some operations must return it; surely the designers did not put it there just in case? Is it possible that this is a bug in the BCL? (I know it's unlikely, but that's why I'd like to understand how Double.NaN is supposed to work.)
Update
By the way, this problem is not specific to double. decimal exposes it all the same:
Decimal a = 0.0000000000000000000000000001m;
Decimal b = a* 0.1m;
Console.WriteLine(b);
That also gives zero.
In my case I need double, because I need the range it provides (I'm working on probabilistic calculations) and I'm not that worried about precision.
What I need, though, is to be able to detect when my results stop meaning anything, that is, when calculations drive the value so low that it can no longer be represented by a double.
Is there a practical way of detecting this?
Double works exactly according to the floating point specification, IEEE 754. So no, it's not an error in the BCL - it's just the way IEEE 754 floating point works.
The reason, of course, is that it's not what floats are designed for at all. Instead, you might want to use decimal, which is a precise decimal number, unlike float/double.
There's a few special values in floating point numbers, with different meanings:
Infinity - e.g. 1f / 0f.
-Infinity - e.g. -1f / 0f.
NaN - e.g. 0f / 0f or Math.Sqrt(-1)
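Those special values can be produced and tested for directly (a small sketch; the variable names are mine):

using System;

class SpecialValues
{
    static void Main()
    {
        double zero = 0d;
        double posInf = 1d / zero;   // Infinity
        double negInf = -1d / zero;  // -Infinity
        double nan = zero / zero;    // NaN

        // NaN never compares equal to anything (including itself),
        // so use the Is* helpers rather than ==.
        Console.WriteLine(double.IsInfinity(posInf));         // True
        Console.WriteLine(double.IsNegativeInfinity(negInf)); // True
        Console.WriteLine(double.IsNaN(nan));                 // True
        Console.WriteLine(nan == double.NaN);                 // False
    }
}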
However, as the commenters below noted, while decimal does in fact check for overflows, coming too close to zero is not considered an overflow, just like with floating point numbers. So if you really need to check for this, you will have to make your own * and / methods. With decimal numbers, you shouldn't really care, though.
If you need this kind of precision for multiplication and division (that is, you want your divisions to be reversible by multiplication), you should probably use rational numbers instead - two integers (big integers if necessary). And use a checked context - that will produce an exception on overflow.
IEEE 754 in fact does handle underflow. There's two problems:
The return value is 0 (or -0 for negative underflow). The exception flag for underflow is set, but there's no way to get at that in .NET.
This only occurs for the loss of precision when you get too close to zero. But you lost most of your precision long before that. Whatever "precise" number you had is long gone - the operations are not reversible, and they are not precise.
So if you really do care about reversibility etc., stick to rational numbers. Neither decimal nor double will work, C# or not. If you're not that precise, you shouldn't care about underflows anyway - just pick the lowest reasonable number, and declare anything under that as "invalid"; make sure you're far away from the actual maximum precision - double.Epsilon will not help, obviously.
All you need is epsilon.
This is a "small number" which is small enough so you're no longer interested in.
You could use:
double epsilon = 1E-50;
and whenever one of your factors gets smaller than epsilon you take action (for example, treat it like 0.0).
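Putting the question's own check and this epsilon idea together, here is a sketch of a guarded multiply (MultiplyChecked and the 1E-50 threshold are illustrative choices, not a standard API):

using System;

class UnderflowGuard
{
    // Illustrative threshold: anything smaller than this no longer means anything
    // for the model at hand.
    const double Epsilon = 1E-50;

    // If a product collapses below the threshold even though neither factor was
    // zero, flag it as NaN, as the question proposed.
    static double MultiplyChecked(double a, double b)
    {
        double result = a * b;
        if (a != 0.0 && b != 0.0 && Math.Abs(result) < Epsilon)
            return double.NaN;
        return result;
    }

    static void Main()
    {
        Console.WriteLine(MultiplyChecked(9.88131291682493E-324, 0.1)); // NaN
        Console.WriteLine(MultiplyChecked(2.0, 3.0));                   // 6
    }
}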
I have the following code which I can't change...
public static decimal Convert(decimal value, Measurement currentMeasurement, Measurement targetMeasurement, bool roundResult = true)
{
double result = Convert(System.Convert.ToDouble(value), currentMeasurement, targetMeasurement, roundResult);
return System.Convert.ToDecimal(result);
}
Now result is returned as -23.333333333333336, but once the conversion to a decimal takes place it becomes -23.3333333333333M.
I thought decimals could hold bigger values and were hence more accurate, so how am I losing data going from double to decimal?
This is by design. Quoting from the documentation of Convert.ToDecimal:
The Decimal value returned by this method contains a maximum of
15 significant digits. If the value parameter contains more than 15
significant digits, it is rounded using rounding to nearest. The
following example illustrates how the Convert.ToDecimal(Double) method
uses rounding to nearest to return a Decimal value with 15
significant digits.
Console.WriteLine(Convert.ToDecimal(123456789012345500.12D)); // Displays 123456789012346000
Console.WriteLine(Convert.ToDecimal(123456789012346500.12D)); // Displays 123456789012346000
Console.WriteLine(Convert.ToDecimal(10030.12345678905D)); // Displays 10030.123456789
Console.WriteLine(Convert.ToDecimal(10030.12345678915D)); // Displays 10030.1234567892
The reason for this is mostly that double can only guarantee 15 decimal digits of precision anyway. Everything displayed after them (when converted to a string it's 17 digits, because that's what double uses internally and because that's the number of digits you might need to exactly reconstruct every possible double value from a string) is not guaranteed to be part of the exact value being represented. So Convert takes the only sensible route and rounds them away. After all, if you have a type that can represent decimal values exactly, you wouldn't want to start with digits that are inaccurate.
So you're not losing data, per se. In fact, you're only losing garbage. Garbage you thought was data.
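A sketch with the value from the question, showing both sides of this:

using System;

class DoubleToDecimalDigits
{
    static void Main()
    {
        double result = -23.333333333333336;

        // "G17" shows every digit the double carries internally...
        Console.WriteLine(result.ToString("G17"));    // -23.333333333333336

        // ...while Convert.ToDecimal keeps only the 15 significant digits
        // that a double actually guarantees.
        Console.WriteLine(Convert.ToDecimal(result)); // -23.3333333333333
    }
}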
EDIT: To clarify my point from the comments: Conversions between different numeric data types may incur a loss of precision. This is especially the case between double and decimal because both types are capable of representing values the other type cannot represent. Furthermore, both double and decimal have fairly specific use cases they're intended for, which is also evident from the documentation (emphasis mine):
The Double value type represents a double-precision 64-bit number with
values ranging from negative 1.79769313486232e308 to positive
1.79769313486232e308, as well as positive or negative zero, PositiveInfinity, NegativeInfinity, and not a number (NaN). It is
intended to represent values that are extremely large (such as
distances between planets or galaxies) or extremely small (the
molecular mass of a substance in kilograms) and that often are
imprecise (such as the distance from earth to another solar system).
The Double type complies with the IEC 60559:1989 (IEEE 754) standard
for binary floating-point arithmetic.
The Decimal value type represents decimal numbers ranging from
positive 79,228,162,514,264,337,593,543,950,335 to negative
79,228,162,514,264,337,593,543,950,335. The Decimal value type is
appropriate for financial calculations that require large numbers of
significant integral and fractional digits and no round-off errors.
The Decimal type does not eliminate the need for rounding. Rather, it
minimizes errors due to rounding.
This basically means that for quantities that will not grow unreasonably large and you need an accurate decimal representation, you should use decimal (sounds fairly obvious when written that way). In practice this most often means financial calculations, as the documentation already states.
On the other hand, in the vast majority of other cases, double is the right way to go and usually does not hurt as a default choice (languages like Lua and JavaScript get away just fine with double being the only numeric data type).
In your specific case, since you mentioned in the comments that those are temperature readings, it is very, very, very simple: Just use double throughout. You have temperature readings. Wikipedia suggests that highly-specialized thermometers reach around 10⁻³ °C precision. Which basically means that the differences in value of around 10⁻¹³ (!) you are worried about here are simply irrelevant. Your thermometer gives you (let's be generous) five accurate digits here and you worry about the ten digits of random garbage that come after that. Just don't.
I'm sure a physicist or other scientist might be able to chime in here with proper handling of measurements and their precision, but we were taught in school that it's utter bullshit to even give values more precise than the measurements are. And calculations potentially affect (and reduce) that precision.
After further investigation, it all boils down to this:
(decimal)((object)my_4_decimal_place_double_value_20.9032)
after casting twice, it becomes 20.903199999999998
I have a double value which is rounded to just 4 decimal places via Math.Round(...); the value is 20.9032.
In my dev environment, it is displayed as is.
But in released environment, it is displayed as 20.903199999999998
There were no operations after Math.Round(...), but the value has been copied around and assigned.
How can this happen?
Updates:
Data is not loaded from a DB.
The returned value from Math.Round() is assigned to the original double variable.
Release and dev are the same architecture, if this information helps.
According to the CLR ECMA specification:
Storage locations for floating-point numbers (statics, array elements,
and fields of classes) are of fixed size. The supported storage sizes
are float32 and float64. Everywhere else (on the evaluation stack, as
arguments, as return types, and as local variables) floating-point
numbers are represented using an internal floating-point type. In each
such instance, the nominal type of the variable or expression is
either R4 or R8, but its value can be represented internally with
additional range and/or precision. The size of the internal
floating-point representation is implementation-dependent, can vary,
and shall have precision at least as great as that of the variable or
expression being represented. An implicit widening conversion to the
internal representation from float32 or float64 is performed when
those types are loaded from storage. The internal representation is
typically the native size for the hardware, or as required for
efficient implementation of an operation.
To translate, the IL generated will be the same (except that debug mode inserts nops in places to ensure a breakpoint is possible; it may also deliberately maintain a temporary variable that release mode deems unnecessary)... but the JITter is less aggressive when dealing with an assembly marked as debug. Release builds tend to move more floating values into 80-bit registers; debug builds tend to read directly from 64-bit memory storage.
If you want a "precise" float number printing, use string.Substring(...) instead of Math.Round
An IEEE 754 double-precision floating point number cannot represent 20.9032.
The most accurate representation is 2.09031999999999982264853315428E1, and that is what you see in your output.
Do not format numbers with Math.Round; instead use the string formatting of the double.ToString(string formatString) method.
See the MSDN documentation for the Double.ToString(String) method.
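For instance, a sketch that formats at output time instead of rounding the stored value:

using System;

class FormatAtOutput
{
    static void Main()
    {
        double value = 20.9032;

        // The double itself holds an approximation...
        Console.WriteLine(value.ToString("G17")); // 20.903199999999998

        // ...but formatting with a fixed number of decimal places at the point
        // of output gives the display you want without touching the value.
        Console.WriteLine(value.ToString("F4"));  // 20.9032
    }
}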
The difference between the Release and Debug builds may be some optimization that gets done for the release build, but that is too detailed for me to go into here.
In my opinion the core issue is that you are trying to format text output with a mathematical operation. I'm sorry, but I don't know in detail what creates the different behavior.