Treat all numeric literals as doubles - C#

Is there any way in C# to treat all numeric literals (which I describe as "magic numbers") as doubles?
For example
double number = 1;
var a = 7 / 8 * number;
In this calculation 7 / 8 returns 0, but 7.0 / 8.0 returns 0.875.
In my case most of these formulas are copied from VBA and they are all over the place. It would be very time-consuming and error-prone to find all of them and replace them manually.

There is no global setting; the code must be updated. The method I used is posted in the answer to this question:
Visual Studio replace magic number integers with doubles
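For reference, a minimal sketch of the kind of edit that is needed: making any one operand of an all-integer expression a double literal (for example with the d suffix, or by writing a decimal point) forces the whole expression into double arithmetic.
double number = 1;
// Integer division: 7 / 8 is evaluated in int arithmetic and yields 0.
var wrong = 7 / 8 * number;       // 0
// Making either literal a double forces floating-point division.
var right1 = 7d / 8 * number;     // 0.875
var right2 = 7.0 / 8.0 * number;  // 0.875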

Related

C# How to print a double in a custom scientific notation

I am trying to build a custom format specifier for doubles for a two-line element (TLE) for space objects. From the Wikipedia documentation on TLEs:
Where decimal points are assumed, they are leading decimal points. The last two symbols in Fields 10 and 11 of the first line give powers of 10 to apply to the preceding decimal. Thus, for example, Field 11 (-11606-4) translates to −0.11606E−4 (−0.11606×10−4).
This field is 8 characters long. The first character is '+', '-', or ' ', followed by 5 digits (no zero padding), followed by a '-' and a single exponent digit.
Does anyone know how to build this inline, i.e. $"{val:someFormat}"? That would be preferred, however I don't think it is possible, so the alternative would be composing it from several pieces like
$"{(val < 0 ? "-" : " ")}{frac(val)}-{getExp(val)}".
Both frac() and getExp() would need to be built, but my biggest problem is how to get the exponent of the double. Is there any built-in function that will return the exponent of a double as an int? With that I think I can build everything else.
Again if there is an easier way I am all ears!
Thanks
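As a rough sketch of one way to piece this together (the helper name FormatTleExponential and the rounding details are assumptions, not from the original question): Math.Floor(Math.Log10(Math.Abs(val))) gives the order of magnitude, from which the exponent of the 0.ddddd form follows, and the first five significant digits can then be taken from the normalized mantissa.
static string FormatTleExponential(double val)
{
    if (val == 0.0)
        return " 00000+0";   // zero has no meaningful exponent; conventions vary

    // Exponent e such that val = 0.ddddd * 10^e (implied leading decimal point).
    int exp = (int)Math.Floor(Math.Log10(Math.Abs(val))) + 1;

    // First five significant digits of the mantissa, without the decimal point.
    int digits = (int)Math.Round(Math.Abs(val) / Math.Pow(10, exp) * 100000);
    if (digits == 100000) { digits = 10000; exp++; }   // rounding pushed the mantissa up to 1.0

    char sign = val < 0 ? '-' : ' ';
    char expSign = exp < 0 ? '-' : '+';   // the quoted example only shows '-'
    return $"{sign}{digits:D5}{expSign}{Math.Abs(exp)}";
}
For example, FormatTleExponential(-0.11606e-4) would produce "-11606-4", matching the worked example quoted above.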

Precision of Math.Cos() for a large integer

I'm trying to compute the cosine of 4203708359 radians in C#:
var x = (double)4203708359;
var c = Math.Cos(x);
(4203708359 can be exactly represented in double precision.)
I'm getting
c = -0.57977754519440394
Windows' calculator gives
c = -0.579777545198813380788467070278
PHP's cos(double) function (which internally just uses cos(double) from the C standard library) on Linux gives:
c = -0.57977754519881
C's cos(double) function in a simple C program compiled with Visual Studio 2017 gives
c = -0.57977754519881342
Here is the definition of Math.Cos() in C#: https://github.com/dotnet/coreclr/blob/master/src/mscorlib/src/System/Math.cs#L57-L58
It appears to be a built-in (intrinsic) function. I haven't yet dug into what this effectively compiles to, but that is probably the next step.
In the meantime:
Why is the precision so poor in my C# example, and what can I do about it?
Is it simply that the cosine implementation used by the C# runtime deals poorly with large integer inputs?
Edit 1: Wolfram Mathematica 11.0:
In[1] := N[Cos[4203708359], 50]
Out[1] := -0.57977754519881338078846707027800171954257546099993
Edit 2: I do need that level of precision, and I'm ready to go pretty far in order to obtain it. I'd be happy to use an arbitrary-precision library if there exists a good one that supports cosine (my efforts haven't led to one so far).
Edit 3: I posted the question on coreclr's issue tracker: https://github.com/dotnet/coreclr/issues/12737
I think I might know the answer. I'm pretty sure the sin/cos libraries don't take arbitrarily large numbers and calculate the sin/cos of them directly - they instead reduce them down to a small range (between 0 and 2π?) and calculate them there. After all, cos(x) = cos(x + 2π) = cos(x + 4π) = ...
The problem is: how is the program supposed to reduce your 10-digit number? Realistically, it has to figure out how many whole multiples of 2π fit just below your number, then subtract that out. In your case, that multiple is about 670 million.
So it multiplies 2π by this 9-digit value - effectively losing 9 digits' worth of significance from the math library's version of π.
I ended up writing a little function to test what was going on:
private double reduceDown(double start)
{
    // Work in decimal so we keep ~28 significant digits of pi (and tau),
    // far more than double's ~15-17, while reducing the angle.
    decimal startDec = (decimal)start;
    decimal pi = decimal.Parse("3.1415926535897932384626433832795");
    decimal tau = pi * 2;

    // How many whole periods of 2*pi fit into the input.
    int num = (int)(startDec / tau);

    // Subtract them out, leaving a value in [0, 2*pi) at full decimal precision.
    decimal x = startDec - (num * tau);

    // Hand the reduced angle back as a double for Math.Cos.
    double retVal;
    double.TryParse(x.ToString(), out retVal);
    return retVal;
    //return start - (num * tau);
}
All this is doing is using the decimal data type as a way of reducing the value without losing digits of precision from pi - it still returns a double. When I call it with a modification of your code:
var x = (double)4203708359;
var c = Math.Cos(x);
double y = reduceDown(x);
double c2 = Math.Cos(y);
MessageBox.Show(c.ToString() + Environment.NewLine + c2);
return;
... sure enough, the second one is accurate.
So my advice is: if you really need radian arguments that large, and you really need the accuracy, do something like the function above and reduce the number on your end in a way that doesn't lose digits of precision.
Presumably, the salts are stored along with each password. You could use the PHP code to calculate that cosine and store it with the password as well. I would then also add a password version number and default all those older passwords to version 1. Then, in your C# code, for any new passwords, you implement a new hashing algorithm and store those password hashes as version 2. For any version 1 password, to authenticate, you do not have to calculate the cosine; you simply use the one stored along with the password hash and the salt.
The programmer of that PHP code was probably trying to do a clever version of pepper. By storing that cosine, or pepper, along with the salt and the password hashes, you basically change that pepper into a second salt. So another versionless way of doing this would be to use two salts in your C# hashing code. For new passwords you could leave the second salt blank or assign it some other way. For old passwords, it would be that cosine, but it is already calculated.
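A rough sketch of the versioning idea, purely for illustration (the type and member names and the LegacyHash/NewHash helpers are invented here; the actual schema is not shown in the question):
class StoredPassword
{
    public int Version;          // 1 = legacy scheme with the stored cosine, 2 = new scheme
    public string Salt;
    public string StoredCosine;  // precomputed once with the PHP code, kept for version 1 rows
    public string Hash;
}

static bool Verify(StoredPassword record, string candidate)
{
    if (record.Version == 1)
        // Legacy rows: reuse the stored cosine instead of recomputing it in C#.
        return LegacyHash(candidate, record.Salt, record.StoredCosine) == record.Hash;

    // Version 2 rows: the new hashing algorithm, no cosine involved.
    return NewHash(candidate, record.Salt) == record.Hash;
}

// Placeholders for the old (PHP-compatible) and new hashing routines.
static string LegacyHash(string pw, string salt, string cosine) => /* ... */ "";
static string NewHash(string pw, string salt) => /* ... */ "";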
Regarding this part of my question: "Why is the precision so poor in my C# example", coreclr developers answered here: https://github.com/dotnet/coreclr/issues/12737
In a nutshell, .NET Framework 4.6.2 (x86 and x64) and .NET Core on x86 appear to use Intel's x87 FPU (i.e. fcos or fsincos), which gives inaccurate results, while .NET Core on x64 (and PHP, Visual Studio 2017 and gcc) uses more accurate, presumably SSE2-based implementations that give correctly rounded results.

.NET decimal - remove scale, solution that is guaranteed to work

I want to convert a decimal a with scale > 0 to its equivalent decimal b with scale 0 (suppose that there is an equivalent decimal without losing precision). Success is defined by having b.ToString() return a string without any trailing zeroes or by extracting the scale via GetBits and confirming that it is 0.
Easy options I found:
Decimal scale2 = new Decimal(100, 0, 0, false, 2);
string scale2AsString = scale2.ToString(System.Globalization.CultureInfo.InvariantCulture);
// toString includes trailing zeroes
Assert.IsTrue(scale2AsString.Equals("1.00"));
// can use format specifier to specify format
string scale2Formatted = scale2.ToString("G0");
Assert.IsTrue(scale2Formatted.Equals("1"));
// but what if we want to pass the decimal to third party code that does not use format specifiers?
// option 1, use Decimal.Truncate or Math.Truncate (Math.Truncate calls Decimal.Truncate, I believe)
Decimal truncated = Decimal.Truncate(scale2);
string truncatedAsString = truncated.ToString(System.Globalization.CultureInfo.InvariantCulture);
Assert.IsTrue(truncatedAsString.Equals("1"));
// option 2, division trick
Decimal divided = scale2 / 1.000000000000000000000000000000000m;
string dividedAsString = divided.ToString(System.Globalization.CultureInfo.InvariantCulture);
Assert.IsTrue(dividedAsString.Equals("1"));
// option 3, if we expect the decimal to fit in int64, convert to int64 and back
Int64 asInt64 = Decimal.ToInt64(scale2);
Decimal backToDecimal = new Decimal(asInt64);
string backToDecimalString = backToDecimal.ToString(System.Globalization.CultureInfo.InvariantCulture);
Assert.IsTrue(backToDecimalString.Equals("1"));
// option 4, convert to BigInteger then back using BigInteger's explicit conversion to decimal
BigInteger convertedToBigInteger = new BigInteger(scale2);
Decimal bigIntegerBackToDecimal = (Decimal)convertedToBigInteger;
string bigIntegerBackToDecimalString = bigIntegerBackToDecimal.ToString(System.Globalization.CultureInfo.InvariantCulture);
Assert.IsTrue(bigIntegerBackToDecimalString.Equals("1"));
So plenty of options, and certainly there are more. But which of these options are actually guaranteed to work?
Option 1: MSDN does not mention that the scale is changed when calling Truncate, so using this method seems to be relying on an implementation detail. Internally, Truncate calls FCallTruncate, for which I did not find any documentation.
Option 2 may be mandated by the CLI spec, but I could not find where exactly it would be specified; I did not find it in the ECMA specification.
Option 3 (ToInt64 also uses FCallTruncate internally) will work judging by the reference source (the constructor taking a ulong sets the flags, and therefore the scale, to 0), but the documentation again makes no mention of scale.
Option 4, BigInteger calls Decimal.Truncate, with the comment:
// First truncate to get scale to 0 and extract bits
int[] bits = Decimal.GetBits(Decimal.Truncate(value));
So clearly Microsoft internally also thinks that Decimal.Truncate will set the scale to 0.
But I am looking for a method that is guaranteed to work without relying on implementation details and that works for all decimals where this is technically possible (you cannot rescale Decimal.MaxValue, for example). None of the options above seems to fit the bill for this requirement.
You pretty much answered your question yourself. Personally, I would not obsess so much about which method to use. If your method works now - even if it is undocumented - then it will most likely keep working in the future. And if a future update to .NET breaks your method, then hopefully you have a test that will highlight this when you upgrade your application framework.
Going through your options:
1) Decimal.Truncate sets the scale to 0, but if that is undocumented then you may decide not to rely on this fact.
2) Dividing by 1.0000... may give you the desired result, but it is not obvious what is going on, and if it is not documented then this is probably the worst option.
3 and 4) These options should work for you. You convert the decimal to an integer and then back to a decimal. Obviously, the decimal created from an integer has scale 0. Any other value would be wrong even though it is not explicitly documented. Option 4) is able to handle even Decimal.MaxValue and Decimal.MinValue.
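As a small aside, the scale check mentioned in the question can be done directly against the bits returned by Decimal.GetBits; a minimal sketch (the helper name GetScale is made up here):
static int GetScale(decimal d)
{
    // Per the Decimal.GetBits documentation, the scale is stored in bits 16-23
    // of the fourth element; the sign is bit 31.
    int[] bits = Decimal.GetBits(d);
    return (bits[3] >> 16) & 0xFF;
}
With that, any of the options above can be verified in a test, e.g. Assert.AreEqual(0, GetScale(Decimal.Truncate(scale2)));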

Can I declare constant integers with a thousands separator in C#?

The Cobra programming language has a useful feature where you can use underscores in numeric literals to improve readability. For example, the following are equivalent, but the second line is easier to read:
x = 1000000
x = 1_000_000 # obviously 1 million
Is there anything equivalent for C#?
Answer as of C# 7
Yes, this is supported in C# 7. But be aware that there's no validation that you've put the underscores in the right place:
// At a glance, this may look like a billion, but we accidentally missed a 0.
int x = 1_00_000_000;
Answer from 2011
No, there's nothing like that in C#. You could do:
const int x = 1000 * 1000;
but that's about as nice as it gets.
(Note that this enhancement went into Java 7 as well... maybe one day it will be introduced in C#.)
Yes, you can do this with C# 7.0, as shown here:
public const long BillionsAndBillions = 100_000_000_000;
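For what it's worth, the separator is purely cosmetic, and it also works in hex and binary literals (both C# 7.0 features); a couple of illustrative examples:
int flags = 0b0010_1010;     // binary literal with digit separators
uint mask = 0xFF_FF_00_00;   // hex literal with digit separators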

C# Numeric Errors Driving Me Crazy

Here is my test code for some maths I am doing:
Why is C# treating this differently?
EXCEL
Numerator = =-0.161361101510599*10000000
Denominator = =(-1*(100-81.26)) * (10000000/100)
This is most likely due to how the numbers are represented in Excel vs. C#. Rounding errors / differences are common when doing arithmetic to a high degree of accuracy on different platforms or using different software.
EDIT: It could of course be due to different numbers being fed in in the first place. Go me looking for the complex answer!
Programmers are notorious for missing the big picture and overlooking the obvious (well - I am...). Case in point!
Following a comment of yours on another answer, you say you are obtaining the result 0.8610517689999946638207043757m. If you round it like so:
Math.Round(0.8610517689999946638207043757m, 12);
It will output: 0.861051769000
Your problem is that you are feeding Excel different initial values than your C# code. How do you expect them to be the same?
IOW: 0.161361101510599 != 0.161361102
Not many of us who work on numeric computing trust Excel to add two 1-digit numbers correctly. I ran your calculation in Mathematica, giving each fractional number 64 digits by extending them to the right with 0s. This is the result:
0.861051771611526147278548559231590181430096051227321237993596585
In this case go with C# rather than Excel. And, on 2nd and 3rd thoughts, in every case go with C# rather than with Excel, whose inadequacies for numeric computing are widely known and well documented.
Excel stores 15 significant digits of precision. Read the article "Why does Excel Give Me Seemingly Wrong Answers?".
On another note, you should probably simplify your arithmetic a bit to reduce the number of operations. Generally (but not always), fewer operations give a better result in floating-point arithmetic.
val = noiseTerm / (81.26/100 - 1)
is mathematically equivalent to your equation and contains 3 operations as opposed to your 8. In particular, the scalingFactor divides out completely, so it is not necessary at all.
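A sketch of that simplification in C#, assuming noiseTerm is the raw (unscaled) value from the original test code, which is not reproduced in the question:
double noiseTerm = -0.161361101510599;             // assumed from the Excel numerator, without the 1e7 factor
double val = noiseTerm / (81.26 / 100.0 - 1.0);    // the scaling factor cancels, leaving 3 operations
Console.WriteLine(val);                            // ~0.861051769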
First of all, in most cases Excel uses double precision floating point arithmetic for basic operations just as C# does.
As for your specific case, your C# code does not match your Excel formulas. Try this C# code - which uses your Excel formulas:
static void Calc()
{
    double numerator = -0.161361101510599 * 10000000.0;
    double denominator = (-1.0 * (100.0 - 81.26)) * (10000000.0 / 100.0);
    double result = numerator / denominator;
    Console.WriteLine("result={0}", result);
}
Run this and note that the output is 0.861051768999995.
Now, format the result in Excel with the custom number format "0.00000000000000000" and you will see that Excel is giving you the same result as C#. By default, Excel uses the "General" format which rounds this number to ~12 significant digits of precision. By changing to the format above, you force Excel to show 15 digits of precision - which is the maximum number of significant digits Excel will use to display a number (internally, they have 15+ digits of precision just as the C# double type does).
You can force C# to display 15+ significant digits (instead of rounding to 15 significant digits) by running the following code:
static void Calc()
{
    double numerator = -0.161361101510599 * 10000000.0;
    double denominator = (-1.0 * (100.0 - 81.26)) * (10000000.0 / 100.0);
    double result = numerator / denominator;
    Console.WriteLine("result={0:R}", result);
}
This code will output 0.8610517689999948... but there is no way, AFAIK, to get Excel to display more than 15 digits.
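As a side note (not from the original answer), the "G17" format specifier is another way to request enough digits to round-trip a double inside the Calc method above:
Console.WriteLine("result={0:G17}", result);   // prints 17 significant digits, enough to round-trip a double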
