I have a column defined as decimal(10,6). When I try to save the model with the value 10.12345, the database stores 10.123400. The last digit ("5") is truncated.
Why does the number default to only 4 decimal places in LINQ (for decimal), and how can I avoid this for all of the columns in my models? The solution I found was to use DbType="Decimal(10,6)", but I have a lot of columns in this situation, and applying the change to every one of them doesn't seem like a good idea.
Is there a way to change this behavior without changing all the decimal columns?
You need to use the proper DbType, decimal(10, 6) in this case.
The reason for this is simple - while .NET's decimal is actually a (decimal) floating point (the decimal point can move), MS SQL's isn't. It's a fixed "four left of decimal point, six right of decimal point". When LINQ passes the decimal to MS SQL, it has to use a specific SQL decimal - and the default simply happens to use four for the scale. You could always use a decimal big enough for whatever value you're trying to pass, but that's very impractical - for one, it will pretty much eliminate execution plan caching, because each different decimal(p, s) required will be its own separate query. If you're passing multiple decimals, this means you'll pretty much never get a cached plan; ouch.
In effect, the command doesn't send the value 10.12345 - it sends 10123450 (not entirely true, but just bear with me). Thus, when you're passing the parameter, you must know the scale - you need to send 10 as 10000000, for example. The same applies when you're not using LINQ - using SqlCommand manually has the same "issue", and you have to use a specific precision and scale.
If you're wary of modifying all those columns manually, just write a script to do it for you. But you do need to maintain the proper data types manually; there's no way around it.
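For reference, this is what the per-column fix looks like - a minimal sketch assuming LINQ to SQL attribute mapping (the Product/Price names are illustrative), along with the equivalent explicit precision and scale on a hand-built SqlParameter:

using System.Data;
using System.Data.Linq.Mapping;
using System.Data.SqlClient;

[Table(Name = "Products")]
public class Product
{
    // Spell out the exact SQL type so all six decimal places
    // survive the round trip (the default scale is 4).
    [Column(DbType = "Decimal(10,6)")]
    public decimal Price { get; set; }
}

public static class ParameterHelper
{
    // Plain ADO.NET has the same "issue": the parameter needs an
    // explicit precision and scale.
    public static SqlParameter MakePriceParameter(decimal value)
    {
        return new SqlParameter("@price", SqlDbType.Decimal)
        {
            Precision = 10, // total digits
            Scale = 6,      // digits to the right of the decimal point
            Value = value
        };
    }
}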
Related
I have a float column in SQL Server with the value 21.261. When I fetch this column into a C# double using Entity Framework Core 2.0, it becomes 21.2610000000042. How can I avoid this and get the exact value 21.261?
As you may have noticed, your number is probably not stored as exactly 21.261, but only as a pretty accurate approximation.
Depending on what your data really represents, you could use a different database storage type like DECIMAL... that typically helps when storing monetary values that have 'cents'.
However, unless the programming language is capable of working with rational numbers or fractions, the moment you read the data back into a float you have to deal with these issues again, and likewise during printing etc.
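You can see the approximation directly in C# by asking the double for more digits than the default formatting shows - a minimal sketch, using the 21.261 from the question:

using System;

double d = 21.261;
Console.WriteLine(d);                 // 21.261 - default formatting hides the error
Console.WriteLine(d.ToString("G17")); // e.g. 21.260999999999999 - the nearest double

decimal m = 21.261m;                  // decimal stores this value exactly
Console.WriteLine(m);                 // 21.261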
I have inserted a value into a table in a float column. When I view the record in SSMS, it looks like an exponential value.
For example, execute the query below in SSMS and look at the output; you will see the exponential value:
declare @l_float float = '1234567890123456789.12345'
select @l_float
Is there any way to see the value the same as it was declared, without converting to decimal or numeric?
And in .NET, will we get the same output as in SSMS (when using the double data type)?
First of all: Don't use FLOAT if you do not have a very good reason!
And bear in mind that the value you see is not the actual value! It is a textual representation of a binary value a human could hardly interpret.
With FLOAT and SSMS there are several issues:
FLOAT is not precise. It will be rounded somehow when shown or used in calculations. You can always enforce a particular output format, but in that case you'd quite probably have to work at the string level.
Calculations with FLOAT tend to produce silly errors, where you get numbers you would not expect (e.g. 0.00000003 instead of a pure 0). In comparisons this can lead to bugs that are very hard to find...
SSMS will in most cases try to show a result in the best way for you. Depending on the range of the values, this may vary from column to column...
If at all possible, switch to DECIMAL(x,y).
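The calculation errors described above are easy to reproduce in C#, where double plays the role of SQL's FLOAT; a minimal sketch:

using System;

double sum = 0.1 + 0.2 - 0.3;      // mathematically this is exactly 0
Console.WriteLine(sum);             // prints something like 5.55111512312578E-17
Console.WriteLine(sum == 0);        // False - the hard-to-find comparison bug

decimal dsum = 0.1m + 0.2m - 0.3m;  // DECIMAL semantics get it right
Console.WriteLine(dsum == 0);       // True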
We have reference values created from a Sequence in a database, which means that they are all integers. (It's not inconceivable - although massively unlikely - that they could change in the future to include letters, e.g. R12345.)
In our [C#] code, should these be typed as strings or integers?
Does the fact that it wouldn't make sense to perform any arithmetic on these values (e.g. adding them together) mean that they should be treated as string literals? If not, and they should be typed as integers (/longs), then what is the underlying principle/reason behind this?
I've searched for an answer to this, but not managed to find anything, either on Google or StackOverflow, so your input is very much appreciated.
There are a couple of other differences:
Leading Zeroes:
Do you need to allow for these? If so, a string would be required to preserve them.
Sorting:
Sort order will vary between the types:
Integer:
1
2
3
10
100
String:
1
10
100
2
3
So will you have a requirement to put the sequence in order (either way around)?
The same arguments apply to the typing in the DB itself, as the requirements there are likely to be the same. Ideally, as Chris says, the two should be consistent.
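A quick way to see the two sort orders side by side (a minimal C# sketch using LINQ):

using System;
using System.Linq;

int[] ids = { 1, 2, 3, 10, 100 };

// Numeric sort: 1, 2, 3, 10, 100
Console.WriteLine(string.Join(", ", ids.OrderBy(i => i)));

// Lexicographic sort of the same values as strings: 1, 10, 100, 2, 3
Console.WriteLine(string.Join(", ",
    ids.Select(i => i.ToString()).OrderBy(s => s, StringComparer.Ordinal)));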
Here are a few things to consider:
Are leading zeros important, i.e. is 010 different from 10? If so, use string (see the sketch after this list).
Is the sort order important? i.e. should 200 be sorted before or after 30?
Is the speed of sorting and/or equality checking important? If so, use int.
Are you at all limited in memory or disk space? If so, ints are 4 bytes, while strings take at least one byte per character (two in .NET's UTF-16 strings) plus overhead.
Will int provide enough unique values? A string can support potentially unlimited unique values.
Is there any sort of link in the system that isn't guaranteed reliable (networking, user input, etc.)? If it's a text medium, int values are safer (any non-digit character is erroneous); if it's binary, strings make for easier visual inspection (R13_55 is clearly an error if your ids are just alphanumeric, but is 12372?).
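For the leading-zeros point in particular, note that a round trip through int silently drops them (a minimal sketch):

using System;

string id = "010";
int parsed = int.Parse(id);                 // 10 - the leading zero is gone
Console.WriteLine(parsed.ToString() == id); // False - the round trip is lossy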
From the sounds of your description, these are values that currently happen to be represented by a series of digits; they are not actually numbers in themselves. This, incidentally, is just like my phone number: it is not a single number, it is a set of digits.
And, like my phone number, I would suggest storing it as a string. Leading zeros don't appear to be an issue here, but considering you are effectively treating these values as strings, you may as well store them as such and give yourself the future flexibility.
They should be typed as integers and the reason is simply this: retain the same type definition wherever possible to avoid overhead or unexpected side-effects of type conversion.
There are good reasons not to use primitive types like int, string, or long all over your code. Among other problems, this allows for stupid errors like
using a key for one table in a query pertaining to another table
doing arithmetic on a key and winding up with a nonsense result
confusing an index or other integral quantity with a key
and communicates very little information: Given int id, what table does this refer to, what kind of entity does it signify? You need to encode this in parameter/variable/field/method names and the compiler won't help you with that.
Since it's likely those values will always be integers, using an integral type should be more efficient and put less load on the GC. But to prevent the aforementioned errors, you could use an (immutable, of course) struct containing a single field. It doesn't need to support anything but a constructor and a getter for the id; that's enough to solve the above problems, except in the few pieces of code that need the actual value of the key (to build a query, for example).
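A minimal sketch of such a wrapper; the OrderId name and the usage shown are illustrative, not from the question (readonly struct needs C# 7.2+, otherwise use a plain struct with a readonly field):

// One wrapper per table/entity; the type name now carries the
// information that a bare int would not.
public readonly struct OrderId
{
    public OrderId(int value) { Value = value; }
    public int Value { get; }
    public override string ToString() => Value.ToString();
}

// A signature like this can no longer be called with a CustomerId,
// a loop index, or the nonsense result of key arithmetic:
// Order LoadOrder(OrderId id) { ... }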
That said, using a proper ORM also solves these problems, with less work on your side. They have their own share of downsides, but they're really not that bad.
If you don't need to perform any mathematical calculations on the sequence values, you can easily choose strings.
But think about sorting: the resulting order differs between integers and strings, e.g. 1, 2, 10 for integers but 1, 10, 2 for strings.
I made a query to SQL Server to get some data via a stored procedure, and the returned value was this:
10219150
Then, in an assembly (I don't have the source code of that assembly; I decompiled the file to view the code), someone had written this:
Amount = Convert.ToSingle(10219150); //the value from the stored procedure
So, when I invoke that method which does the final conversion, it returns this value:
1.021915E+7
How is that possible? Why does Convert.ToSingle add extra decimal positions? I don't understand.
Is there a way that I can reverse that conversion in my code when I invoke that method of the assembly? I can't rewrite that assembly file as it's too big, and, as I mentioned earlier, I don't have the source code to fix the conversion.
From this: 1.021915E+7 back to this: 10219150 (restoring the correct value without that conversion).
The conversion to single isn't adding extra precision.
10219150 is 1.021915E+7 (which is just another way of writing 1.021915 * 10^7).
The method you are using to print out the value is just using scientific notation to display the number.
If you are printing the number then you need to set the formatting options.
float amount = Convert.ToSingle("10219150");
string toPrint = string.Format("{0:N}", amount);
Will print the number as:
"10,219,150.00"
To get no decimal places use "{0:N0}" as the format string.
You have two issues. One is easily solved, and the other may be more difficult or impossible.
As ChrisF stated, 1.021915E+7 is simply another way of writing 10219150. (The E+7 part in scientific notation means to shift the decimal point 7 places to the right.) When you format your single precision value, you can use
fvalue.ToString("f0");
to display it as an integer, rather than in scientific notation.
The bigger problem, unfortunately, is that a single precision float only holds about 7 significant digits, and in your example you are storing 8. Integers up to 2^24 (16,777,216) are still stored exactly, so 10219150 itself survives the conversion; the rounding you see is in the default 7-digit display. Larger values, however, really do lose their final digits.
If that loss of precision is critical, you will likely need to fetch the value from the database as a long or as a double precision value (depending on the type of data returned). Both of these types can hold more significant digits.
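A small sketch of what is going on (the default output shown is what .NET Framework prints; newer runtimes format floats with enough digits to round-trip):

using System;

float f = Convert.ToSingle(10219150);
Console.WriteLine(f);                 // 1.021915E+07 on .NET Framework
Console.WriteLine(f.ToString("F0"));  // 10219150 - same value, plain formatting

double d = 10219150;                  // double keeps ~15-16 significant digits
Console.WriteLine(d);                 // 10219150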
When the value is displayed, it's rounded, because it contains more significant digits than Single's default formatting shows. If you convert 10219153 to Single, its default display is also 1.021915E+7, i.e. 10219150.
As the code uses a Single to store the amount, amounts with more significant digits than a Single can represent exactly (integers above 2^24) cannot be handled correctly.
You either have to keep values in that range and format the output explicitly, or change the code.
I need to store a couple of money related fields in the database but I'm not sure which data type to use between money and decimal.
Decimal and money ought to be pretty reliable. What I can assure you (from painful personal experience with inherited applications) is: DO NOT use float!
I always use Decimal; I've never used MONEY before.
Recently, I found an article regarding the decimal versus money data types in SQL Server that you might find interesting:
Money vs Decimal
It also seems that the money datatype does not always produce accurate results when you perform calculations with it: click
What I've also done in the past is use an INT field and store the amount in cents (eurocent/dollarcent).
I guess it comes down to precision and scale. IIRC, money is fixed at 4 decimal places. If that is fine, money expresses your intent. If you want more control, use decimal with a specific precision and scale.
It depends on your application!
I work in financial services, where we normally consider a price to be significant to 5 decimal places after the point, which of course when you buy a couple of million at 3.12345 pence/cents is a significant amount.
Some applications will supply their own sql type to handle this.
On the other hand, this might not be necessary.
<Humour>
Contractor rates always seemed to be rounded to the nearest £100, but currently seem to be rounded to the nearest £25 in the current credit crunch.
</Humour>
Don't align your thoughts based on available datatypes. Rather, analyze your requirement and then see which datatype fits best.
Float is always the worst choice, given the limitations of storing binary representations of floating point numbers.
Money is a standard unit and will surely have more built-in support for money-related operations.
With decimal you'll have to handle everything yourself, but you also know that only your own code touches the value, so there are no surprises of the kind the other two data types may give you.
Use decimal, and use more decimal places than you think you will need, so that calculations will be correct. Money does not always return correct results in calculations. Under no circumstances use float or real, as these are inexact datatypes and can cause calculations to be wrong (especially as they get more complex).
For some data (like money) where you want no approximation or drift due to floating point representation, you must be sure that the data never 'floats'; it must be rigid on both sides of the decimal point.
One easy way to be safe is to store the value converted to an INTEGER data type, and make sure that when you retrieve the value, the decimal point is placed back at the proper position.
e.g. to save $240.10 into the database:
1. Convert it to a pure integral form: 24010 (it's just a shift of the decimal point).
2. Store the integer 24010.
3. When reading it back, revert it to its proper decimal state: place the decimal point 2 positions from the right to get $240.10.
So, while in the database, the value stays in a rigid integer form.
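A minimal C# sketch of this cents-as-integer round trip (the variable names are illustrative):

using System;

decimal amount = 240.10m;

// 1. Shift the decimal point: $240.10 becomes 24010 cents.
long cents = (long)(amount * 100);
Console.WriteLine(cents);                    // 24010 - store this in an INT/BIGINT column

// 2. On the way back out, shift the decimal point again.
decimal restored = cents / 100m;
Console.WriteLine(restored.ToString("F2"));  // 240.10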