Convert float -> double -> float - c#

Is it possible to convert a float to a double, then back, without losing precision? I mean the first float should be exactly the same, bit by bit, as the resulting float.

Yes, and we can test it:
float fl = float.NegativeInfinity;
long cycles = 0;
while (true)
{
    double dbl = fl;
    float fl2 = (float)dbl;
    int flToInt1 = new Ieee754.Int32SingleConverter { Single = fl }.Int32;
    int flToInt2 = new Ieee754.Int32SingleConverter { Single = fl2 }.Int32;
    if (flToInt1 != flToInt2)
    {
        Console.WriteLine("\nDifferent: {0} (Int32: {1}, {2})", fl, flToInt1, flToInt2);
    }
    if (fl == 0)
    {
        Console.WriteLine("\n0, Sign: {0}", flToInt1 < 0 ? "-" : "+");
    }
    if (fl == float.PositiveInfinity)
    {
        fl = float.NaN;
    }
    else if (float.IsNaN(fl))
    {
        break;
    }
    else
    {
        fl = Ieee754.NextSingle(fl);
    }
    cycles++;
    if (cycles % 100000000 == 0)
    {
        Console.Write(".");
    }
}
Console.WriteLine("\nDone");
Console.ReadKey();
and the utility classes:
public static class Ieee754
{
    [StructLayout(LayoutKind.Explicit)]
    public struct Int32SingleConverter
    {
        [FieldOffset(0)]
        public int Int32;
        [FieldOffset(0)]
        public float Single;
    }

    public static float NextSingle(float value)
    {
        int bits = new Int32SingleConverter { Single = value }.Int32;
        if (bits >= 0)
        {
            bits++;
        }
        else if (bits != int.MinValue)
        {
            bits--;
        }
        else
        {
            bits = 0;
        }
        return new Int32SingleConverter { Int32 = bits }.Single;
    }
}
On my computer, in Release mode, without the debugger (Ctrl+F5 from Visual Studio), it takes around 2 minutes.
There are around 4 billion distinct float values. I cast each one to double and back, and convert both floats to int so I can compare them bit by bit. Note that NaN values are "particular". The IEEE 754 standard allows multiple bit patterns for NaN, but .NET "compresses" them to a single NaN value. So you could create a NaN value (manually, through bit manipulation) that wouldn't round-trip correctly. The "standard" NaN value is converted correctly, as are PositiveInfinity, NegativeInfinity, +0 and -0.
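For example, here is a quick way to build such a non-standard NaN with the converter above and check what survives the round trip (the payload 0x7FC12345 is an arbitrary choice, and whether it is preserved depends on the runtime and hardware):
float oddNaN = new Ieee754.Int32SingleConverter { Int32 = 0x7FC12345 }.Single; // quiet NaN with a custom payload
float back = (float)(double)oddNaN; // widen to double, then narrow again
Console.WriteLine("{0:X8} -> {1:X8}",
    new Ieee754.Int32SingleConverter { Single = oddNaN }.Int32,
    new Ieee754.Int32SingleConverter { Single = back }.Int32); // payload bits may differ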

Yes, as every float can be exactly represented as a double, the round trip will give you the exact value that you started with.
There is one possible technical exception to your requirement that they are bit-by-bit the same: there are multiple bit patterns that correspond to NaN values (this is often referred to as the "NaN payload"). As far as I know, there is no strict requirement that this be preserved: you will still get a NaN, just maybe a slightly different one.
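A minimal illustration of the lossless round trip for an ordinary value:
float x = 0.1f;            // the float closest to 0.1
double d = x;              // widening to double is exact
float y = (float)d;        // narrowing recovers the original float
Console.WriteLine(x == y); // True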


Return the next whole number

I want to pass a number and have the next whole number returned.
I've tried Math.Ceiling(3), but it returns 3.
Desired output:
double val = 9.1 => 10
double val = 3 => 4
Thanks
There are two ways I would suggest doing this:
Using Math.Floor():
return Math.Floor(input + 1);
Using a cast (which truncates the fractional part):
return (int)input + 1;
Using just the floor or ceiling won't give you the next whole number in every case, for example if you input negative numbers. A better way is to create a function that does what you want:
public class Test
{
    public int NextWholeNumber(double n)
    {
        if (n < 0)
            return 0;
        else
            return Convert.ToInt32(Math.Floor(n) + 1);
    }

    // Main method
    static public void Main()
    {
        Test o = new Test();
        Console.WriteLine(o.NextWholeNumber(1.254)); // prints 2
    }
}
Usually "whole number" refers to the non-negative integers only. But if you need negative inputs handled as well, you can try this; it returns 3.0 => 4, -1.0 => 0 and -1.1 => -1:
double doubleValue = double.Parse(Console.ReadLine());
int wholeNumber = 0;
if ((doubleValue - Math.Floor(doubleValue) > 0))
{
    wholeNumber = int.Parse(Math.Ceiling(doubleValue).ToString());
}
else
{
    wholeNumber = int.Parse((doubleValue + 1).ToString());
}
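Note that both branches above compute the same value (for non-integral input, Math.Ceiling(x) equals Math.Floor(x) + 1), so the branch can be collapsed into one line:
int wholeNumber = (int)Math.Floor(doubleValue) + 1;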

Do Not Check Floating Point Equality/Inequality

We are using a code analyzer which has a rule "Do Not Check Floating Point Equality/Inequality". Below is the example given:
float f = 0.100000001f; // 0.1
double d = 0.10000000000000001; // 0.1
float myNumber = 3.146f;
if (myNumber == 3.146f) // Noncompliant. Because of floating point imprecision, this will be false
{
    ////
}
else
{
    ////
}
if (myNumber <= 3.146f && myNumber >= 3.146f) // Noncompliant indirect equality test
{
    // ...
}
if (myNumber < 4 || myNumber > 4) // Noncompliant indirect inequality test
{
    // ...
}
When I tested this code, if (myNumber == 3.146f) is true, so I am not able to understand what this rule is trying to say.
What solution or code change does this rule require?
Is this rule applicable to C#? When I googled it, I mostly found C/C++ examples of this rule.
Floating point is not precise. In some cases the result is unexpected, so it's bad practice to compare floating point numbers for equality without some tolerance.
This can be demonstrated with a simple example:
if (0.1 + 0.2 == 0.3)
{
    Console.WriteLine("Equal");
}
else
{
    Console.WriteLine("Not Equal");
}
It will print Not Equal.
Demo: https://dotnetfiddle.net/ltAFWe
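To see why, print the sum with the round-trip format specifier; the binary representations of 0.1 and 0.2 each carry a tiny error, and the errors add up:
Console.WriteLine((0.1 + 0.2).ToString("R")); // 0.30000000000000004
Console.WriteLine(0.3.ToString("R"));         // 0.3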
The solution is to add some tolerance, for example:
if (Math.Abs((0.1 + 0.2) - 0.3) < 0.0001)
{
    Console.WriteLine("Equal");
}
else
{
    Console.WriteLine("Not Equal");
}
Now it will print Equal.
A fairly readable solution to this is to define extension methods for double and float like so:
public static class FloatAndDoubleExt
{
    public static bool IsApproximately(this double self, double other, double within)
    {
        return Math.Abs(self - other) <= within;
    }

    public static bool IsApproximately(this float self, float other, float within)
    {
        return Math.Abs(self - other) <= within;
    }
}
Then use it like so:
float myNumber = 3.146f;
if (myNumber.IsApproximately(3.146f, within: 0.001f))
{
    ////
}
else
{
    ////
}
Also see the documentation for Double.Equals() for more information.

Int64 as result from double times 10^x?

I have a variable representing a quantity in some given unit:
enum Unit
{
    Single,
    Thousand,
    Million,
    Billion,
    Trillion
}

public class Quantity
{
    public double number;
    public Unit numberUnit;

    public Int64 GetNumberInSingleUnits()
    {
        // ???
    }
}
For example, imagine
var GDP_Of_America = new Quantity { number = 16.66, numberUnit = Unit.Trillion };
Int64 gdp = GDP_Of_America.GetNumberInSingleUnits(); // should return 16,660,000,000,000
My question is basically - how can I implement the "GetNumberInSingleUnits" function?
I can't just multiply with some UInt64 factor, e.g.
double num = 0.5;
UInt64 factor = 1000000000000;
var result = num * factor; // won't work! results in double
As the regular numeric operations result in a double, and a double cannot represent every large integer exactly (it only has 53 bits of mantissa).
How could I do this conversion?
PS: I know the class "Quantity" is not a great way to store information - but this is bound by the input data of my application, which is in non-single (e.g. millions, billions) units.
Like I said, decimals can help you here:
public enum Unit
{
    Micro = -6, Milli = -3, Centi = -2, Deci = -1,
    One /* Don't really like this name */, Deca, Hecto, Kilo, Mega = 6, Giga = 9
}

public struct Quantity
{
    public decimal Value { get; private set; }
    public Unit Unit { get; private set; }

    public Quantity(decimal value, Unit unit) :
        this()
    {
        Value = value;
        Unit = unit;
    }

    public decimal OneValue /* This one either */
    {
        get
        {
            return Value * (decimal)Math.Pow(10, (int)Unit);
        }
    }
}
With decimals you won't lose a lot of precision until after you decide to convert them to long (and beware of over/underflows).
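For illustration, a quick usage sketch of the struct above (Giga stands in for "billions" here, since this enum uses metric prefixes):
var gdp = new Quantity(16.66m, Unit.Giga);
Console.WriteLine(gdp.OneValue);  // 16660000000.00 (decimal keeps the scale)
long asLong = (long)gdp.OneValue; // throws OverflowException if the value doesn't fit in a long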
Anton's answer seems like a good solution.
Just for the sake of discussion, here is another potential way.
I don't like it much, as it seems very messy; however, it might avoid imprecision, if that ever turned out to be an issue with decimals.
public Int64 GetAsInt64(double number)
{
    // Returns 0 for single units, 3 for thousands, 6 for millions, etc.
    uint e = GetFactorExponent();
    // Converts to scientific notation,
    // e.g. number = -1.2345, unit millions, becomes "-1.2345e6"
    string str = String.Format("{0:#,0.###########################}", number) + "e" + e;
    // Parses scientific notation into Int64
    Int64 result = Int64.Parse(str, NumberStyles.AllowLeadingSign | NumberStyles.AllowDecimalPoint | NumberStyles.AllowExponent | NumberStyles.AllowThousands);
    return result;
}
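For what it's worth, the integral Parse methods accept a decimal point and an exponent as long as the scaled value comes out to a whole number, which is what the trick above relies on. A quick check (assuming the NumberStyles flags behave as documented):
var style = NumberStyles.AllowLeadingSign | NumberStyles.AllowDecimalPoint | NumberStyles.AllowExponent | NumberStyles.AllowThousands;
Console.WriteLine(Int64.Parse("16.66e12", style)); // 16660000000000
Console.WriteLine(Int64.Parse("-1.2345e6", style)); // -1234500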

Optimizing batch size based on elapsed time between successive calls

I've started playing around with an attempt to create the following:
public static IEnumerable<List<T>> OptimizedBatches<T>(this IEnumerable<T> items)
Then the client of this extension method would use it like this:
foreach (var list in extracter.EnumerateAll().OptimizedBatches())
{
// at some unknown batch size, process time starts to
// increase at an exponential rate
}
Here's an example:
batch length   time
1              100ms
2              102ms
4              110ms
8              111ms
16             118ms
32             119ms
64             134ms
128            500ms   <-- doubled length, but the time more than doubled
256            1100ms  <-- oh no!!
From the above, the best batch length is 64 because 64/134 is the best ratio of length/time.
So the question is what algorithm to use to automatically pick the optimal batch length based on the successive times between iterator steps?
Here's what I have so far - it's not done yet...
class LengthOptimizer
{
    private Stopwatch sw;
    private int length = 1;
    private List<RateRecord> rateRecords = new List<RateRecord>();

    public int Length
    {
        get
        {
            if (sw == null)
            {
                length = 1;
                sw = new Stopwatch();
            }
            else
            {
                sw.Stop();
                rateRecords.Add(new RateRecord { Length = length, ElapsedMilliseconds = sw.ElapsedMilliseconds });
                length = rateRecords.OrderByDescending(c => c.Rate).First().Length;
            }
            sw.Restart(); // Restart rather than Start, so each measurement covers a single step only
            return length;
        }
    }
}
struct RateRecord
{
    public int Length { get; set; }
    public long ElapsedMilliseconds { get; set; }
    // Note: ElapsedMilliseconds of 0 yields float.PositiveInfinity, which sorts first
    public float Rate { get { return ((float)Length) / ElapsedMilliseconds; } }
}
The main problem I see here is creating the "optimality scale", that is, deciding why you consider 32 -> 119ms acceptable and 256 -> 1100ms not, or why a certain configuration is better than another one.
Once this is done, the algorithm will be straightforward: just return the ranking values for each input condition and make decisions based on "which one gets a higher value".
The first step in creating this scale is finding the variable which best describes the ideal behaviour you are looking for. A simple first approach: length/time. That is, from your inputs:
batch length   time     ratio1
1              100ms    0.01
2              102ms    0.019
4              110ms    0.036
8              111ms    0.072
16             118ms    0.136
32             119ms    0.269
64             134ms    0.478
128            500ms    0.256
256            1100ms   0.233
The bigger ratio1 is, the better. Logically, though, a 0.269 at length 32 is not the same as a 0.256 at length 128, so more information has to be accounted for.
You might create a more complex ranking ratio weighting the two involved variables better (e.g., trying different exponents). But I think that the best approach for this problem is creating a system of "zones" and calculating a generic ranking from them. Example:
Zone 1 -> length from 1 to 8; ideal ratio for this zone = 0.1
Zone 2 -> length from 9 to 32; ideal ratio for this zone = 0.3
Zone 3 -> length from 33 to 64; ideal ratio for this zone = 0.45
Zone 4 -> length from 65 to 256; ideal ratio for this zone = 0.35
The ranking associated with each configuration is the result of comparing the given ratio1 against the ideal value for its zone:
2 102ms 0.019 -> (zone 1) 0.019/0.1 = 0.19 (or 1.9 on a 0-10 scale)
16 118ms 0.136 -> (zone 2) 0.136/0.3 = 0.45 (or 4.5 on a 0-10 scale)
etc.
These values can be compared, so you would automatically know that the second case is much better than the first one.
This is just a simple example, but I guess it provides good enough insight into what the real problem here is: setting up an accurate ranking that allows you to reliably identify which configuration is better.
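As a rough sketch of that idea in code (the zone boundaries and ideal ratios are just the illustrative numbers from above, nothing canonical):
// Maps a batch length to its zone's ideal ratio, then scores a measurement
// against that ideal on a 0-10 scale.
static double IdealRatio(int length)
{
    if (length <= 8) return 0.1;   // zone 1
    if (length <= 32) return 0.3;  // zone 2
    if (length <= 64) return 0.45; // zone 3
    return 0.35;                   // zone 4 (65-256)
}
static double Rank(int length, long milliseconds)
{
    double ratio = (double)length / milliseconds;
    return ratio / IdealRatio(length) * 10;
}
// Rank(2, 102) ~= 1.9, Rank(16, 118) ~= 4.5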
I would go with a ranking approach like varocarbas suggested.
Here is an initial implementation to get you started:
public sealed class DataFlowOptimizer<T>
{
    private readonly IEnumerable<T> _collection;
    private RateRecord bestRate = RateRecord.Default;
    private uint batchLength = 1;

    private struct RateRecord
    {
        public static RateRecord Default = new RateRecord { Length = 1, ElapsedTicks = 0 };

        private float _rate;
        public int Length { get; set; }
        public long ElapsedTicks { get; set; }
        public float Rate
        {
            get
            {
                if (_rate == default(float) && ElapsedTicks > 0)
                {
                    _rate = ((float)Length) / ElapsedTicks;
                }
                return _rate;
            }
        }
    }

    public DataFlowOptimizer(IEnumerable<T> collection)
    {
        _collection = collection;
    }

    public int BatchLength { get { return (int)batchLength; } }
    public float Rate { get { return bestRate.Rate; } }

    public IEnumerable<IList<T>> GetBatch()
    {
        var stopwatch = new Stopwatch();
        var batch = new List<T>();
        var benchmarks = new List<RateRecord>(5);
        IEnumerator<T> enumerator = null;
        try
        {
            enumerator = _collection.GetEnumerator();
            uint count = 0;
            stopwatch.Start();
            while (enumerator.MoveNext())
            {
                if (count == batchLength)
                {
                    benchmarks.Add(new RateRecord { Length = BatchLength, ElapsedTicks = stopwatch.ElapsedTicks });
                    var currentBatch = batch.ToList();
                    batch.Clear();
                    if (benchmarks.Count == 10)
                    {
                        var currentRate = benchmarks.Average(x => x.Rate);
                        if (currentRate > bestRate.Rate)
                        {
                            bestRate = new RateRecord { Length = BatchLength, ElapsedTicks = (long)benchmarks.Average(x => x.ElapsedTicks) };
                            batchLength = NextPowerOf2(batchLength);
                        }
                        // Set margin of error at 10%
                        else if ((bestRate.Rate * .9) > currentRate)
                        {
                            // Shift the current length and make sure it's >= 1
                            var currentPowOf2 = ((batchLength >> 1) | 1);
                            batchLength = PreviousPowerOf2(currentPowOf2);
                        }
                        benchmarks.Clear();
                    }
                    count = 0;
                    stopwatch.Restart();
                    yield return currentBatch;
                }
                batch.Add(enumerator.Current);
                count++;
            }
            // Note: any items left in "batch" when the source runs out are not yielded here.
        }
        finally
        {
            if (enumerator != null)
                enumerator.Dispose();
        }
        stopwatch.Stop();
    }

    uint PreviousPowerOf2(uint x)
    {
        x |= (x >> 1);
        x |= (x >> 2);
        x |= (x >> 4);
        x |= (x >> 8);
        x |= (x >> 16);
        return x - (x >> 1);
    }

    uint NextPowerOf2(uint x)
    {
        x |= (x >> 1);
        x |= (x >> 2);
        x |= (x >> 4);
        x |= (x >> 8);
        x |= (x >> 16);
        return (x + 1);
    }
}
Sample program in LINQPad:
public IEnumerable<int> GetData()
{
    return Enumerable.Range(0, 100000000);
}

void Main()
{
    var optimizer = new DataFlowOptimizer<int>(GetData());
    foreach (var batch in optimizer.GetBatch())
    {
        string.Format("Length: {0} Rate {1}", optimizer.BatchLength, optimizer.Rate).Dump();
    }
}
1. Describe an objective function f that maps a batch size s and runtime t(s) to a score f(s, t(s)).
2. Try lots of s values and evaluate f(s, t(s)) for each one.
3. Choose the s value that maximizes f.
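A minimal sketch of that recipe, using throughput s/t as the objective f and a hypothetical processBatch callback standing in for the real work:
// Scans power-of-two batch sizes and keeps the one with the best
// throughput f(s, t(s)) = s / t(s).
static int FindBestBatchSize(Action<int> processBatch, int maxSize)
{
    int bestSize = 1;
    double bestScore = double.NegativeInfinity;
    var sw = new System.Diagnostics.Stopwatch();
    for (int s = 1; s <= maxSize; s *= 2)
    {
        sw.Restart();
        processBatch(s);
        sw.Stop();
        double score = (double)s / sw.ElapsedTicks;
        if (score > bestScore)
        {
            bestScore = score;
            bestSize = s;
        }
    }
    return bestSize;
}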

C# Compiler "Optimize code" : disable on a code fragment only

I have C# code which works fine when the "optimize code" option is off, but fails otherwise. Is there any function or class attribute which can prevent the optimization of one function or class, but let the compiler optimize the others?
(I tried unsafe and MethodImpl, but without success)
Thanks
Edit:
I have done some more tests...
The code is like this :
double arg = (Math.PI / 2d - Math.Atan2(a, d));
With a = 1 and d = 0, arg should be 0.
This code is in a function which is called by Excel via ExcelDNA.
Calling an identical code from an optimized console app : OK
Calling this code from Excel without optimization : OK
Calling this code from Excel with optimization : Not OK, arg == 0 is false (instead arg is a very small value near 0, but not 0)
Same result with [MethodImpl(MethodImplOptions.NoOptimization)] before the called function.
This very likely has to do with the floating point mode which Excel has probably set - meaning that your program calculates floating point slightly differently because of the program (Excel) hosting your assembly (DLL). This might impact how your results are calculated, or how/which values are automatically coerced to zero.
To be absolutely sure you are not going to run into issues with different floating point modes and/or errors, you should check whether the values are very close together rather than exactly equal. This is not really a hack.
public class AlmostDoubleComparer : IComparer<double>
{
    public static readonly AlmostDoubleComparer Default = new AlmostDoubleComparer();
    public const double Epsilon = double.Epsilon * 64d; // 0.{322 zeroes}316

    public static bool IsZero(double x)
    {
        return Compare(x, 0) == 0;
    }

    public static int Compare(double x, double y)
    {
        // Very important that cmp(x, y) == cmp(y, x)
        if (Double.IsNaN(x) || Double.IsNaN(y))
            return 1;
        if (Double.IsInfinity(x) || Double.IsInfinity(y))
            return 1;
        var absX = Math.Abs(x);
        var absY = Math.Abs(y);
        var diff = absX > absY ? absX - absY : absY - absX;
        if (diff < Epsilon)
            return 0;
        if (x < y)
            return -1;
        else
            return 1;
    }

    int IComparer<double>.Compare(double x, double y)
    {
        return Compare(x, y);
    }
}
// E.g.
double arg = (Math.PI / 2d - Math.Atan2(a, d));
if (AlmostDoubleComparer.IsZero(arg))
{
    // Regard it as zero.
}
I also ported the re-interpret integer comparison, in case you find that more suitable (it deals with larger values more consistently).
public class AlmostDoubleComparer : IComparer<double>
{
    public static readonly AlmostDoubleComparer Default = new AlmostDoubleComparer();
    public const double MaxUnitsInTheLastPlace = 3;

    public static bool IsZero(double x)
    {
        return Compare(x, 0) == 0;
    }

    public static int Compare(double x, double y)
    {
        // Very important that cmp(x, y) == cmp(y, x)
        if (Double.IsNaN(x) || Double.IsNaN(y))
            return 1;
        if (Double.IsInfinity(x) || Double.IsInfinity(y))
            return 1;
        var ix = DoubleInt64.Reinterpret(x);
        var iy = DoubleInt64.Reinterpret(y);
        var diff = Math.Abs(ix - iy);
        if (diff < MaxUnitsInTheLastPlace)
            return 0;
        if (ix < iy)
            return -1;
        else
            return 1;
    }

    int IComparer<double>.Compare(double x, double y)
    {
        return Compare(x, y);
    }
}
[StructLayout(LayoutKind.Explicit)]
public struct DoubleInt64
{
    [FieldOffset(0)]
    private double _double;
    [FieldOffset(0)]
    private long _int64;

    private DoubleInt64(long value)
    {
        _double = 0d;
        _int64 = value;
    }

    private DoubleInt64(double value)
    {
        _int64 = 0;
        _double = value;
    }

    public static double Reinterpret(long value)
    {
        return new DoubleInt64(value)._double;
    }

    public static long Reinterpret(double value)
    {
        return new DoubleInt64(value)._int64;
    }
}
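For instance, reinterpreting a double this way exposes its raw IEEE 754 bit pattern, which is what the ULP-based comparison above operates on:
long bits = DoubleInt64.Reinterpret(1.0);
Console.WriteLine(bits.ToString("X16")); // 3FF0000000000000 (sign 0, exponent 0x3FF, mantissa 0)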
Alternatively you could try to NGen the assembly and see if you can work around either the mode Excel has set, or how it is hosting the CLR.
That is what you get when working with floating point datatypes. You don't get exactly 0, but a very close value, since a double has limited precision, not every value can be represented, and sometimes those tiny precision errors add up. You need to expect that and check that the value is close enough to 0 rather than exactly 0.
