I have taken on the task of converting a very old classic ASP/VBScript/Access web site to a "current" site. As you would suspect, most of it has been easier to just do over, but there is a reporting page that relies on a bunch of calculations that aren't documented, so I just have to use the original developer's code/formulas but move them to C#.
Just for full disclosure, I imported the Access database to SQL Server with no issues. I used EntityFramework Power Tools (Code First) to reverse engineer the new database and create my .edmx file. I noticed that when EntityFramework created the .edmx for me, it defined fields that are floats in the database as nullable doubles (Double?) in the C# class... is there any risk of transposing, chopping, or otherwise changing the value when converting float to nullable double?
The other spot that I think is suspect is in the rounding. Obviously the math around rounding hasn't changed, but I wonder if VBScript and C# have any known differences in how they round.
So these lines in VBScript
varA1=oDataRs("boundData")
varB1i=varA1*3.0689
varB1=round(varB1i,3)
Becomes this in C#
var newVal = Math.Round(boundData.Value * 3.0689, 3);
I am not converting anything to double or decimal, just working with it as it comes out of the database. I do have to use .Value though as I otherwise can't multiply a Double? and a Double.
The values are very close, but off by about .036 on a pretty important piece of data, and I can't find any reason why.
In fact, both .NET's Math.Round and VBScript's Round use the banker's rounding algorithm by default, so you should get consistent results as long as you don't specify MidpointRounding.AwayFromZero.
The really interesting question is whether the source data in the Access database is a floating-point number or a Currency column. If it's Currency, then you should be using a decimal, not a double. That could cause small differences if the magnitudes of the numbers being multiplied are sufficiently different.
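To see the rounding behaviour in isolation, here is a small, self-contained C# sketch; the 1.15 * 3.0689 figures are made-up illustrations, not the OP's data:

using System;

// Both VBScript's Round and .NET's Math.Round default to banker's rounding:
// midpoint values go to the nearest even digit.
Console.WriteLine(Math.Round(2.5));   // 2 (rounds to the even neighbour)
Console.WriteLine(Math.Round(3.5));   // 4
Console.WriteLine(Math.Round(2.5, MidpointRounding.AwayFromZero));   // 3

// If the Access column was Currency, keep the arithmetic in decimal so the
// intermediate product is exact before it is rounded.
decimal result = Math.Round(1.15m * 3.0689m, 3);   // 3.529 (1.15m * 3.0689m is exactly 3.529235)
Console.WriteLine(result);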
For the particularly important datum, what is the value of boundData?
Related
I have a float column in SQL Server with the value 21.261. When I fetch this column into a C# double using Entity Framework Core 2.0, it becomes 21.2610000000042. How do I avoid this and get the exact value 21.261?
As you may have noticed, your number is probably not stored as exactly 21.261, but only as a pretty accurate approximation.
Depending on what your data really represents, you could use a different database storage type like DECIMAL; that would typically help when storing monetary values that have 'cents'.
However, unless the programming language can work with rational numbers or fractions, the moment you read the data back into a float you have to deal with these issues again, and likewise during printing etc.
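As a rough sketch of the DECIMAL suggestion in EF Core (the entity and property names here are hypothetical, and connection setup is omitted), mapping the column to a decimal keeps a value like 21.261 exact:

using Microsoft.EntityFrameworkCore;

public class Measurement
{
    public int Id { get; set; }
    public decimal Value { get; set; }   // decimal in C# maps to DECIMAL in SQL Server
}

public class MeasurementsContext : DbContext
{
    public DbSet<Measurement> Measurements { get; set; }

    protected override void OnModelCreating(ModelBuilder modelBuilder)
    {
        // Precision/scale chosen here to fit values like 21.261; adjust to your data.
        modelBuilder.Entity<Measurement>()
            .Property(m => m.Value)
            .HasColumnType("decimal(10,3)");
    }
}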
I have a column defined as decimal(10,6). When I try to save the model with the value 10.12345, in the database I saved as 10.123400. The last digit ("5") is truncated.
Why does LINQ default decimals to only 4 digits of scale, and how can I avoid this for all columns in my models? The solution I found was to use DbType="Decimal(10,6)", but I have a lot of columns in this situation, and applying that change to all of them by hand doesn't seem like a good idea.
Is there a way to change this behavior without changing all the decimal columns?
Thanks
You need to use the proper DbType, decimal(10, 6) in this case.
The reason for this is simple - while .NET's decimal is actually a (decimal) floating point (the decimal point can move), MS SQL's isn't. It's a fixed "four left of decimal point, six right of decimal point". When LINQ passes the decimal to MS SQL, it has to use a specific SQL decimal - and the default simply happens to use four for the scale. You could always use a decimal big enough for whatever value you're trying to pass, but that's very impractical - for one, it will pretty much eliminate execution plan caching, because each different decimal(p, s) required will be its own separate query. If you're passing multiple decimals, this means you'll pretty much never get a cached plan; ouch.
In effect, the command doesn't send the value 10.12345 - it sends 10123450 (not entirely true, but just bear with me). Thus, when you're passing the parameter, you must know the scale - you need to send 10 as 10000000, for example. The same applies when you're not using LINQ - using SqlCommand manually has the same "issue", and you have to use a specific precision and scale.
If you're wary of modifying all those columns manually, just write a script to do it for you. But you do need to maintain the proper data types manually, there's no way around it.
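For the plain SqlCommand case mentioned above, here is a hedged sketch of setting the precision and scale explicitly; the table, column, and connection string are hypothetical:

using System.Data;
using System.Data.SqlClient;

using var conn = new SqlConnection("...connection string...");
using var cmd = new SqlCommand("UPDATE Prices SET Amount = @amount WHERE Id = @id", conn);

// Without an explicit scale the parameter may default to fewer decimal places
// and silently truncate 10.12345.
var amount = new SqlParameter("@amount", SqlDbType.Decimal)
{
    Precision = 10,
    Scale = 6,
    Value = 10.12345m
};
cmd.Parameters.Add(amount);
cmd.Parameters.AddWithValue("@id", 1);

conn.Open();
cmd.ExecuteNonQuery();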
I'm trying to store metric data (meters, kilometers, square-meters) in SQL Server 2012.
What is the best datatype to use? float (C#: double), decimal (C#: decimal) or even geometry? Or something different?
Either a decimal with an appropriate amount of precision for your data, or an int type, if appropriate
It completely depends on the application and what precision you need for it.
If we are talking about architecture, the precision needs are relatively limited and a C# 32-bit float will take you a long way. In SQL this translates to float(24), also referred to as the database type real. This SQL DB type requires 4 bytes of storage per entry.
If we instead want to address points on the surface of the earth you need a lot higher precision. Here a C# double is suitable, which corresponds to a SQL float(53) or just float. This SQL DB type requires 8 bytes of storage and should be used only if needed or if the application is small and disk/memory usage is not a concern.
The SQL decimal could be a good alternative for the actual SQL DB, but it has 2 drawbacks:
It corresponds to a C# Decimal which is a type designed for financial usage and to prevent round-off errors. This design renders the C# Decimal type slower than a float/double when used in trigonometric methods etc. You could of course cast this back and forth in your code, but that is not the most straight-forward approach IMO.
"The Decimal value type is appropriate for financial calculations requiring large numbers of significant integral and fractional digits and no round-off errors." - MSDN : Decimal Structure
The SQL DB type Decimal requires 5-9 bytes of storage per entry (depending on the precision used), which is larger than the float(x) alternatives.
So, use it according to your needs. In your comment you state that it's about real estate, so I'd go for float(24) (aka real), which is exactly 4 bytes and directly translates to a C# float. See: float and real (Transact-SQL)
Lastly, here is a helpful resource for converting different types between .Net and SQL: SqlDbType Enumeration
Depends what you want to do
float or double are non-exact datatypes (so two calculations that should both give 5.0 may not compare equal, due to rounding issues; see the short demo after this answer)
Decimal is an exact datatype (so a stored 5.0 compares equal to 5.0 every time)
and Geometry/Geography (simply put) are for locations on a map.
Float calculations should be the fastest of the three, since geography is binary data with some information about the projection (it's all about maps here) and decimal is technically not as easy to handle as float.
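Here is a short, stand-alone demo of the non-exact vs exact point above; the 0.1 loop is just the classic illustration and is not tied to any particular schema:

using System;

// double: 0.1 has no exact binary representation, so repeated addition drifts.
double sumD = 0.0;
for (int i = 0; i < 10; i++) sumD += 0.1;
Console.WriteLine(sumD == 1.0);    // False: the sum is a value just below 1.0

// decimal: 0.1m is stored exactly, so the comparison behaves as expected.
decimal sumM = 0.0m;
for (int i = 0; i < 10; i++) sumM += 0.1m;
Console.WriteLine(sumM == 1.0m);   // True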
I noticed that when I store a double value such as e.g. x = 0.56657011973046234 in an sqlite database, and then retrieve it later, I get y = 0.56657011973046201. According to the sqlite spec and the .NET spec (neither of which I originally bothered to read :) this is expected and normal.
My problem is that while high precision is not important, my app deals with users inputting/selecting doubles that represent basic 3D info, and then running simulations on them to find a result. And this input can be saved to an sqlite database to be reloaded and re-run later.
The confusion occurs because a freshly created series of inputs will obviously simulate in a slightly different way to those same inputs once stored and reloaded (as the double values have changed). This is logical, but not desirable.
I haven't quite come to terms of how to deal with this, but in the meantime I'd like to limit/clamp the user inputs to values which can be exactly stored in an sqlite database. So if a user inputs 0.56657011973046234, it is actually transformed into 0.56657011973046201.
However I haven't been able to figure out, given a number, what value would be stored in the database, short of actually storing and retrieving it from the database, which seems clunky. Is there an established way of doing this?
The answer may be to store the double values as 17 significant digit strings. Look at the difference between how SQLite handles real numbers vs. text (I'll illustrate with the command line interface, for simplicity):
sqlite> create table t1(dr real, dt varchar(25));
sqlite> insert into t1 values(0.56657011973046234,'0.56657011973046234');
sqlite> select * from t1;
0.566570119730462|0.56657011973046234
Storing it with real affinity is the cause of your problem -- SQLite only gives you back a 15 digit approximation. If instead you store it as text, you can retrieve the original string with your C# program and convert it back to the original double.
Math.Round has an overload with a parameter that specifies the number of digits. Use this to round to 14 digits (say) with: rval = Math.Round(Val, 14)
Then round when receiving the value from the database and at the beginning of the simulations, i.e. so that the values match.
For details:
http://msdn.microsoft.com/en-us/library/75ks3aby.aspx
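A minimal sketch of that idea, using the value from the question; 14 digits is just the suggestion above, not a magic number:

using System;

double entered = 0.56657011973046234;    // what the user typed
double reloaded = 0.56657011973046201;   // what came back from the database

// Normalise both sides with the same rounding before comparing or simulating.
double a = Math.Round(entered, 14);
double b = Math.Round(reloaded, 14);
Console.WriteLine(a == b);               // True: both normalise to the same double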
Another thought if you are not comparing values in the database, just storing them : Why not simply store them as binary data? Then all the bits would be stored and recovered verbatim?
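A sketch of that binary idea: the 8 bytes of the double round-trip bit-for-bit, so nothing is lost (how you store the byte array, e.g. in a BLOB column, is up to you):

using System;

double original = 0.56657011973046234;

// Store these 8 bytes (e.g. in a BLOB column) instead of a REAL value...
byte[] blob = BitConverter.GetBytes(original);

// ...and reconstruct the exact same double when reading them back.
double restored = BitConverter.ToDouble(blob, 0);
Console.WriteLine(original == restored);   // True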
Assuming that both SQLite and .NET correctly implement the IEEE specification, you should be able to get the same numeric results if you use the same floating-point type on both sides (because the value shouldn't be altered when passed from database to C# and vice versa).
Currently SQLite stores REAL as an 8-byte IEEE floating-point number (*), which is the same format as C#'s double (C#'s float is the smaller 4-byte IEEE type), so keeping the value as a double on both sides should preserve it exactly.
(*) The SQLite documentation says that REAL is a floating-point value, stored as an 8-byte IEEE floating-point number.
You can use a string to store the number in the db. Personally I've done what winwaed suggested of rounding before storing and after fetching from the db (which used numeric()).
I recall being burned by banker's rounding, but it could just be that it didn't meet spec.
You can store the double as a string, and by using the round-trip formatting when converting the double to a string, it's guaranteed to generate the same value when parsed:
string formatted = theDouble.ToString("R", CultureInfo.InvariantCulture);
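Reading it back is the mirror operation (CultureInfo lives in System.Globalization):

double restored = double.Parse(formatted, CultureInfo.InvariantCulture);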
If you want the decimal input values to round-trip, then you'll have to limit them to 15 significant digits. If you want the SQLite internal double-precision floating-point values to round-trip, then you might be out of luck; that requires printing to a minimum of 17 significant digits, but from what I can tell, SQLite prints them to a maximum of 15 (EDIT: maybe an SQLite expert can confirm this? I just read the source code and traced it -- I was correct, the precision is limited to 15 digits.)
I tested your example in the SQLite command interface on Windows. I inserted 0.56657011973046234, and select returned 0.566570119730462. In C, when I assigned 0.566570119730462 to a double and printed it to 17 digits, I got 0.56657011973046201; that's the same value you get from C#. 0.56657011973046234 and 0.56657011973046201 map to different floating-point numbers, so in other words, the SQLite double does not round-trip.
I need to store a couple of money related fields in the database but I'm not sure which data type to use between money and decimal.
Decimal and money ought to be pretty reliable. What I can assure you (from painful personal experience with inherited applications) is: DO NOT use float!
I always use Decimal; never used MONEY before.
Recently, I found an article regarding the decimal versus money data type in SQL Server that you might find interesting:
Money vs Decimal
It also seems that the money datatype does not always produce accurate results when you perform calculations with it: click
What I've also done in the past is use an INT field and store the amount in cents (eurocent / dollar cent).
I guess it comes down to precision and scale. IIRC, money is 4dp. If that is fine, money expresses your intent. If you want more control, use decimal with a specific precision and scale.
It depends on your application!!!
I work in financial services where we normally consider price to be significant to 5 decimal places, which of course when you buy a couple of million at 3.12345 pence/cents is a significant amount.
Some applications will supply their own sql type to handle this.
On the other hand, this might not be necessary.
<Humour>
Contractor rates always seemed to be rounded to the nearest £100, but currently seem to be to the nearest £25 in the credit crunch.
</Humour>
Don't align your thoughts based on available datatypes. Rather, analyze your requirement and then see which datatype fits best.
Float is always the worst choice, given the limitations of the underlying architecture in storing a binary representation of floating-point numbers.
Money is a standard unit and will surely have more support for handling money related operations.
With decimal, you'll have to handle everything yourself, but you know it's only your own code handling the value, so there are none of the surprises you may get with the other two data types.
Use decimal, and use more decimal places than you think you will need so that calculations will be correct. Money does not always return correct results in calculations. Under no circumstances use float or real, as these are inexact datatypes and can cause calculations to be wrong (especially as they get more complex).
For some data (like money) where you want no approximation or changes due to floating-point representation, you must be sure the data is never 'floating'; it must be rigid on both sides of the decimal point.
One easy way to be safe is to store the value by converting it to an INTEGER data type, and to make sure that when you retrieve the value, the decimal point is placed back in the proper location.
e.g.
1. To save $240.10 into database.
2. Convert it to a pure integral form: 24010 (it's just a shift of the decimal point).
3. Revert it back to the proper decimal form: place the decimal point 2 positions from the right, giving $240.10.
So, while it is in the database it will be in a rigid integer form.
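A minimal C# sketch of that cents-as-integer round trip; the two-decimal scale is the assumption here, and values with more decimals would need explicit rounding first:

using System;

// Store: shift the decimal point two places and keep a whole number of cents.
decimal price = 240.10m;
long cents = (long)(price * 100);   // 24010 goes into the INTEGER column

// Load: shift the decimal point back when reading the value out.
decimal restored = cents / 100m;
Console.WriteLine(restored);        // 240.1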