I am making an iOS app in Xamarin. I am working with decimal values and everything works great on the iPhone. But when I test the app on an iPad, one decimal value (not all of them) gets misinterpreted.
For example, there is a string value: 1200.00.
I am parsing this value:
if (Decimal.TryParse (vehicle.offer_price, out result)) {
but I am getting back 120000 on the iPad and 1200.00 on the iPhone.
What's up with this?
Kind regards
The region settings on the two devices are different.
One device is set to a region format where the dot is a thousands separator, the other to one where it is a decimal separator.
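For example, a minimal sketch of a culture-independent parse (assuming offer_price always uses "." as the decimal separator, as in the question):

using System.Globalization;

decimal result;
// Parse with the invariant culture so the device's region format
// can't reinterpret "." as a thousands separator
if (Decimal.TryParse (vehicle.offer_price, NumberStyles.Number,
    CultureInfo.InvariantCulture, out result)) {
    // result is 1200.00 on both the iPhone and the iPad
}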
I think you may be confusing the internal implementation of Decimal with how it is presented. Floating point types (decimal, double, etc.) are implemented as binary data structures, which can be presented to us in a number of different formats (depending on where they are being displayed), which may or may not include the significant digits after the decimal point. Normally in the UI it is up to you, the programmer, to decide what display format to use (e.g., how many digits after the decimal point).
Compare this to the debugger - here the display format has been chosen by the implementors of the IDE, who may or may not include digits after the decimal point (but it would be a bit odd if they did not display any).
The important thing is, whatever the display format, this does not change the actual underlying value in the Decimal.
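A quick sketch to illustrate (the value is just an example):

using System;
using System.Globalization;

decimal price = 1200.00M;
// Two different presentations of the same stored value
Console.WriteLine(price.ToString("F0", CultureInfo.InvariantCulture)); // 1200
Console.WriteLine(price.ToString("N2", CultureInfo.InvariantCulture)); // 1,200.00
Console.WriteLine(price == 1200.00M); // True - formatting never changed the value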
Related
For representing money I know it's best to use C#'s decimal data type over double or float, but if you are working with amounts of money less than millions, wouldn't int be a better option?
Does decimal have perfect calculation precision, i.e. does it not suffer from "Why computers are bad at numbers"? If there is still some possibility of a calculation mistake, wouldn't it be better to use int and just display the value with a decimal separator?
The amounts you are working with, "less than millions" in your example, aren't the issue. It's what you want to do with the values and how much precision you need for that. And treating those numbers as integers doesn't really help - that just puts the onus on you to keep track of the precision. If precision is critical then there are libraries to help; BigInteger and BigDecimal packages are available in a variety of languages, and there are libraries that place no limit on the precision (Wikipedia has a list). The important takeaway is to know your problem space and how much precision you need. For many things the built-in precision is fine; when it's not, you have other options.
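As a rough sketch of the "onus on you" point, here is .NET's System.Numerics.BigInteger: the arithmetic is exact at any size, but what unit each integer represents (cents, hundredths of a cent, ...) is entirely yours to track:

using System;
using System.Numerics;

// Amount held in the smallest unit you decided on - say hundredths of a cent
BigInteger amount = BigInteger.Parse("123456789012345678901234");
BigInteger doubled = amount * 2; // exact, no rounding, no overflow
Console.WriteLine(doubled);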
Like li223 said, an integer won't allow you to save values with decimal places - and the majority of currencies allow decimal values.
I would advise picking a number of decimal places to work with, which avoids the problem you referred to ("Why computers are bad at numbers"). I work with invoicing and we use 8 decimal places, which has worked fine with all currencies so far.
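A minimal sketch of that fixed-precision approach (8 decimal places, as above):

using System;

decimal rate = 1m / 3m;               // 0.3333333333333333333333333333
decimal stored = Math.Round(rate, 8); // 0.33333333 - the agreed precision
Console.WriteLine(stored);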
The main reason to use decimal over integer in this case is that decimal allows decimal places, i.e. £2.50 can be properly represented as 2.5 with a decimal, whereas with an integer you can't represent decimal points at all. That is fine if, as John mentions in their comment, you're representing a currency like the Japanese yen that doesn't have decimals.
As for decimal's accuracy, it still suffers from "Why are computers bad at numbers"; see the answer to this question for more info.
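A one-line demonstration that decimal is still floating point, just in base 10:

using System;

decimal third = 1m / 3m; // 1/3 has no exact base-10 representation either
Console.WriteLine(third * 3m); // 0.9999999999999999999999999999, not 1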
Is it possible to set the precision of double values in C# for all double values in the project? I have a lot of values there, and changing them all with Math.Round would be exhausting. I need to have the double value as 5.12345 instead of 5.123455123321321, for example.
This is a fundamental limitation of floating point types. A double is actually stored internally as a sign, an exponent, and a mantissa; the exponent is base 2, so it has a lot of trouble dealing with base 10.
The easiest solution is to use a base-10 floating point type, namely the 128-bit decimal. It's still floating point and it still has only limited precision, but it is a lot friendlier and more accurate to work with in a lot of cases.
Update
If all you want to do is change the displayed output, you can either use rounding (which you already know about) or the appropriate format specifiers with string.Format, ToString, or string interpolation.
Example
using System;
using System.Globalization;

var number = 5.123455123321321;
// "F3" keeps three digits after the decimal point
Console.WriteLine(number.ToString("F3", CultureInfo.InvariantCulture)); // Displays 5.123
Console.WriteLine($"{number:F3}"); // Displays 5.123 (uses the current culture)
priceTotal.Text = (float.Parse(priceLiter.Text) * float.Parse(litres.Text)).ToString();
This somehow works fine on the Windows Phone emulator; however, on the phone, it completely ignores the decimal points and multiplies as if the numbers were integers.
On the emulator, priceLiter is initially parsed from a number (1.442) and converted to a string so it can be put in a TextBox. The emulator converts it to
1.442
but on the phone it converts it to
1,442
(notice the different decimal separator)
However, InputScope="Number" only shows the decimal point on the keypad, not a comma.
Because of this, priceTotal is calculated correctly on the emulator, but on the phone the decimal point . is ignored and treated as a thousands separator (I guess?); to state the obvious, priceTotal is way off.
After some research, as I expected, this depends on the Regional Format, and the numeric keypad doesn't seem to be localized.
How can I approach this? Should I replace the entered decimal point with the localized decimal separator while the text is still being input, if that is even possible?
Should I automatically replace all commas with dots before parsing the numbers?
I could change the InputScope to normal, but that wouldn't really change anything, because half the users would enter the number using a dot and the other half using a comma.
Thanks!
You could either force the current culture to US:
CultureInfo.CurrentCulture = CultureInfo.GetCultureInfo("en-US");
Or you could give it as a parameter to the parsing method:
float.Parse(a, CultureInfo.GetCultureInfo("en-US"));
Another way to think about it is that you should parse input as you receive it. When a user has set his phone to Dutch, he would expect to use the , as the decimal mark.
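A sketch of that idea, reusing the priceLiter TextBox from the question - parse with whatever culture the user's phone is actually set to:

using System.Globalization;

// A Dutch user types "1,442" and still gets 1.442
float price = float.Parse(priceLiter.Text, CultureInfo.CurrentCulture);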
This might be due to your phone's culture.
Just write these two lines of code in your App.xaml.cs constructor:
using System.Globalization;
using System.Threading;

Thread.CurrentThread.CurrentCulture = (CultureInfo)Thread.CurrentThread.CurrentCulture.Clone(); // clone first: the cached culture object is read-only
Thread.CurrentThread.CurrentCulture.NumberFormat.NumberDecimalSeparator = ".";
The problem should not occur again.
I am getting data into a text field and I need to display it as a percentage. Is there a function to perform this?
Ex: in my column I have "0.5", "0.1", "0.2", "0.25", etc., which need to be displayed as
50%, 10%, 20%, 25%, etc. What is the best way to do it?
You should do this in two phases:
Parse the text as a number so you've got the value as your "real" type. (As a general rule, parse from text as early as you can, and format to a string as late as you can... operations between the two will be a lot simpler using the natural type.)
Format the number as a percentage using the standard numeric format string for percentage
So:
decimal percentage = decimal.Parse(input);
string output = percentage.ToString("p0");
Notes:
You should consider both input and output culture; are you always expecting to use "." as the decimal separator, for example?
Use decimal rather than double to exactly represent the value in the text (for example, the text could have "0.1" but double can't hold a value of exactly 0.1)
You can add things like the desired precision to the formatting; see the linked docs for details. The example above gives just an integer percentage.
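A small sketch combining both notes - an explicit culture plus a precision specifier (the input string is just an example):

using System;
using System.Globalization;

decimal percentage = decimal.Parse("0.25", CultureInfo.InvariantCulture);
// "P1" = one digit after the decimal point
Console.WriteLine(percentage.ToString("P1", CultureInfo.InvariantCulture));
// 25.0 % (the exact spacing depends on the culture's percent pattern)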
Easiest would be to parse it (must be a double) then convert it back to a string, formatting it as a percentage.
var percentageString = double.Parse(doubleString).ToString("p1");
Now, some of you hoity-toity types may say that decimal is the correct type to use in this case.
Well, yes, if you need an additional 12-13 digits of precision.
However, most of us real folk (and I'm all about keeping it real) are fine with double's 15-16 digits of precision.
The real choice is whether or not your code is using doubles or decimals in the first place. If you are using doubles in your code, just stick with doubles. If decimals, stick to decimals. What you definitely do want to avoid is having to convert between the two any more than is absolutely necessary, as there be dragons. And unexpected runtime bugs that can corrupt your data. But mostly dragons.
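One concrete dragon, as a sketch: double's range is far larger than decimal's, so a conversion can blow up at runtime:

using System;

double big = 1e30; // well within double's range
try {
    decimal d = (decimal)big; // but beyond decimal.MaxValue (~7.9e28)
    Console.WriteLine(d);
} catch (OverflowException) {
    Console.WriteLine("1e30 does not fit in a decimal");
}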
I have a rather strange problem. I have a very simple application that reads some data from a CSV-formatted file and draws a polar 'butterfly' to a form. However, a few people in European countries get a very weird-looking curve instead, and when I modified the program to output some sample values to try to work out what is going on, it only gave me more questions!
Here is a sample of expected values, and what one particular user gets instead:
EXPECTED -> SEEN
0.00 0.00 -> 0,00 0,00
5.00 1.35 -> 5,00 1346431626488,41
10.00 2.69 -> 10,00 2690532522738,65
So all the values on the right (which are computed in my program) are multiplied by a factor of 10^12! How on earth can that happen in the CLR? The first numbers - 0, 5, 10 - are just produced by the simple loop that writes the output, using value += 5.
The code producing these computations makes use of interpolation using the alglib.net library, but the problem also occurs with two other values that are extracted from XML returned by an HTTP GET and then converted from radians to degrees.
Also, not exactly a problem, but why would decimal values print with commas instead of decimal points? The output code is a simple string.Format("{0:F}", value) where value is a double.
So why on earth would some values be shifted by 12 decimal places, but not others, and only in some countries? Others have run the app with no problems... Not sure if there is any relevance, but this output came from the Netherlands.
Different cultures use different thousands and decimal separators. en-US (US English) uses "," and "." but de-DE (German German) uses "." and ",". This means that when reading from or writing to strings you need to use the proper culture. When persisting information for later retrieval that generally means CultureInfo.InvariantCulture. When displaying information to the user that generally means CultureInfo.CurrentCulture.
You haven't provided the code that reads from the CSV file, but I imagine you're doing something like double.Parse(field) for each field. If the field has the value "5.0" and you parse it when the current culture is de-DE, "." will be considered a thousands separator and the value gets read as 50.0 in en-US terms. What you should be doing is double.Parse(field, CultureInfo.InvariantCulture).
All of the Parse, TryParse, Format, and many ToString methods accept an IFormatProvider. Get in the habit of always providing the appropriate format provider and you won't get bitten by internationalization issues.
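For instance, a sketch of that habit (the value is illustrative):

using System;
using System.Globalization;

double value = 1234.5;
// Persist with the invariant culture so the text reads back identically everywhere
string persisted = value.ToString(CultureInfo.InvariantCulture); // "1234.5"
double roundTripped = double.Parse(persisted, CultureInfo.InvariantCulture);
// Display with the current culture so the user sees familiar separators
Console.WriteLine(roundTripped.ToString("N1", CultureInfo.CurrentCulture));
// "1.234,5" under de-DE, "1,234.5" under en-US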
My personal guess would be that you have a string-to-number conversion somewhere that is not culture aware at all.
Why? Oh, simply run this code:
using System;
using System.Globalization;

var nl = CultureInfo.GetCultureInfo("nl-NL");
var numberString = "1.000000000000000";
Console.WriteLine(float.Parse(numberString, nl));
The result is 1E+15. Now you just have to find the places where you need to provide CultureInfo.InvariantCulture (simplified English, equivalent to the "C" locale in C) to Parse along with the string.
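For contrast, the same parse with the invariant culture:

using System;
using System.Globalization;

var numberString = "1.000000000000000";
// Now "." is a decimal separator again
Console.WriteLine(float.Parse(numberString, CultureInfo.InvariantCulture)); // 1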
In some languages a decimal comma is used instead of the decimal point; this depends on the culture. You can force a specific culture if it's important to you that only points are used.
One interesting thing of note is that if 1346431626488.41 were divided by 1,000,000,000,000, you would get 1.35 rounded to two decimal places, and if 2690532522738.65 were divided by 1,000,000,000,000, you would get 2.69. Just an observation.