I have an MSSQL table with records that contain amounts of cryptocurrencies in their smallest units (BTC: Satoshi, ETH: Wei, ...). These amounts can be pretty big (1 ETH is 1,000,000,000,000,000,000 Wei), so I wanted to store them in a NUMERIC(38,0) column.
Then I have my client application, where I am using Entity Framework Core (6), and in my entity class the amount is declared as a BigInteger. The problem is that I don't know how to do the mapping. I can use a ValueConverter<BigInteger, decimal>, but I (of course) get an overflow error when the number in the database is larger than the range of the Decimal type.
Are there any other options, or what are the best practices? The other problem is that I would also like to use aggregate functions (SUM) on these amounts, but I'm not sure whether this is possible with BigInteger and EF Core. I would probably also have problems with plain ADO.NET and SqlDataReader.GetDecimal (since I would get the same overflow error). Any advice?
I (of course) get an error (overflow) when the number in the database is larger than the range of the Decimal type.
If you need numbers larger than a .NET Decimal, you'll have to convert them to strings in SQL Server to read them on the client.
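One way to follow that advice is to store the amount as a character column and convert to BigInteger with a value converter; a minimal sketch, assuming EF Core 6 (entity and column names are illustrative):

```csharp
using System.Numerics;
using Microsoft.EntityFrameworkCore;
using Microsoft.EntityFrameworkCore.Storage.ValueConversion;

public class Wallet
{
    public int Id { get; set; }
    public BigInteger Amount { get; set; }
}

public class WalletContext : DbContext
{
    public DbSet<Wallet> Wallets => Set<Wallet>();

    protected override void OnModelCreating(ModelBuilder modelBuilder)
    {
        // Round-trip the value through its string form so nothing overflows:
        // decimal tops out around 7.9e28, but NUMERIC(38,0) can hold up to 1e38 - 1.
        var bigIntegerConverter = new ValueConverter<BigInteger, string>(
            model => model.ToString(),
            provider => BigInteger.Parse(provider));

        modelBuilder.Entity<Wallet>()
            .Property(w => w.Amount)
            .HasConversion(bigIntegerConverter)
            .HasColumnType("varchar(40)"); // 38 digits plus sign fit comfortably
    }
}
```

The trade-off is that SUM can no longer be translated to SQL over a character column; one option (under these assumptions) is to fetch the string values and aggregate client-side with BigInteger, or to keep a NUMERIC(38,0) column and have SQL Server CONVERT the SUM result to a string in a raw query.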
David
I'm using EF database first. In my database I have a field which I know will always be 10 digits long, so naturally I've opted for a decimal(10,0) type. When I insert values into the table directly, I can insert any number up to 10 digits long; however, when I insert an entity with EF6, it adds a decimal 0 and then throws a "parameter value out of range" error. The type of the field in my C# code is decimal.
Here is the entity immediately before calling context.SaveChanges():
And a sanity check, here is the column in sql server:
EDIT:
Here is the EF mapping:
Just reported it on codeplex.
The error sounds like a bug in EF, but if the value is always an integer, wouldn't it make more sense to use an integer type (bigint, since a 10-digit number can exceed the int range) instead of decimal? I'm thinking both for logic and performance.
This is actually a bug in SqlClient - SqlDecimal code doesn't handle the precision correctly for some edge cases.
I believe the culprit is here:
http://referencesource.microsoft.com/#System.Data/data/System/Data/SQLTypes/SQLDecimal.cs#416
However, given that this has been the behavior for a long time, it is very unlikely it will ever be fixed. I would recommend working around this by using bigint, like user1648371 and Andrew Morton have suggested.
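Since any 10-digit integer fits easily in a 64-bit bigint (max roughly 9.2e18), the workaround is a small change on both sides; a sketch (table and property names are illustrative):

```csharp
// Database side: ALTER TABLE Orders ALTER COLUMN ReferenceNumber bigint NOT NULL;

public class Order
{
    public int Id { get; set; }

    // bigint maps to Int64, so the SqlDecimal precision code path
    // that triggers the bug is never involved.
    public long ReferenceNumber { get; set; }
}
```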
I have looked through MANY MANY posts on SO and come up with any number of different answers, none of which seem to quite work, or which contradict each other depending on code versions, etc.
I would prefer to steer clear of the "AsEnumerable()" fix because, as I understand it, that pulls all rows to the client BEFORE the filter is applied... I would like the filtering to run on the server first so the result set is as small as possible.
For info: the tables I am querying can contain 2+ million rows.
My requirement:
A "Contains" function on an Integer column of SQL Server (Compact or Standard) through Entity Framework. This would allow a user to enter a number to search on, without having the full number available. In conjunction with other predicates, this becomes very powerful in reducing the amount of data returned.
e.g.
f => f.Id.ToString().Contains("202")
This currently fails because "ToString()" cannot be converted to an Entity Store command.
or as a T-SQL equivalent
cast(Id as varchar(9)) LIKE '%202%'
Versions:
EF5
.Net 4.0
SQL Server 2008 Standard OR SQL Compact
You can use SqlFunctions.StringConvert:
f=> SqlFunctions.StringConvert((double) f.Id).Contains("202")
There is no overload for int, so you have to cast it to either double or decimal.
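Put together in a full query (context and set names are illustrative), the filter from the question becomes:

```csharp
// EF5 on .NET 4.0: SqlFunctions lives in System.Data.Objects.SqlClient;
// in EF6 it moved to System.Data.Entity.SqlServer.
using System.Data.Objects.SqlClient;

var matches = context.Records
    // Translated to STR(...) and LIKE '%202%' on the server,
    // so only matching rows come back to the client.
    .Where(f => SqlFunctions.StringConvert((double)f.Id).Contains("202"))
    .ToList();
```

Note that SqlFunctions is specific to the SQL Server provider; this translation is not available when targeting SQL Server Compact.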
First, you can try this function: SqlFunctions.StringConvert
f => SqlFunctions.StringConvert((double)f.id).Contains("202")
There is no overload for int, so you must cast to double. This function is translated to the corresponding function in the database.
Another solution is to create a stored procedure and call it from EF:
objectContext.ExecuteSqlCommand("storedProcedureName", SqlParameters)
or
objectContext.ExecuteStoreQuery<ResultType>("storedProcedureName", SqlParameters)
I have about a dozen fields marked as tinyint, which translates to a byte field. I'm finding that I'm having to do a lot of casting to int in my code to interact with integers.
Because of this, I'm thinking of changing the entity framework to read these as integers instead of bytes. Are there any implications of this, besides the chance that I may pass in an integer that is out of range of the tinyint? Am I just adding additional casts where I may not need them?
(I'm also thinking of just using integer instead in the database, because this isn't going to be a high usage DB.)
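The casting friction comes from C#'s integral promotion rules: arithmetic on byte operands yields int, so assigning the result back requires an explicit narrowing cast. For example (entity and property names are hypothetical):

```csharp
byte quantity = entity.Quantity;   // tinyint column mapped to byte
int doubled = quantity * 2;        // byte arithmetic promotes to int
entity.Quantity = (byte)doubled;   // explicit cast needed; throws no error but
                                   // silently truncates values above 255
```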
Edit
From Zach's comment below / Entity Framework Mapping SQL Server tinyint to Int16 -- it looks like I can't just change the property from byte to int? The EF will throw an error? So, is there even a way to do what I'm thinking?
There shouldn't really be any issue when you use integers, as the compiler actually optimizes integer operations (if I'm not mistaken).
The best bet would be to change your database columns from tinyint to int, IMHO.
I am trying to read Oracle Spatial data with C# using ODP.NET.
In some cases, my Oracle Spatial data has Number values in the SDO_GEOMETRY’s OrdinateArray that are too big for .NET to handle. So, when I try to read the SDO_GEOMETRY values, it throws a “System.OverflowException: Arithmetic operation resulted in an overflow”. In my case, the ordinate values just have too many digits after the decimal point, and I don’t care about losing this information.
My code is based on the sample app here: http://www.orafaq.com/forum/mv/msg/27794/296419/0/#msg_296419
I see there are SafeMapping approaches with DataSets to read Number types that won’t fit into Decimal types, but I don’t see how to apply this to an internal part of the SDO_GEOMETRY type.
Is there a way around this problem?
What is the Oracle datatype of "OrdinateArray"? If it is a user-defined type (e.g. VARRAY), you can create a custom .NET class to accept the data. For more information on this, read up on "User Defined Types".
It's probably too late for you, but maybe someone else can use my solution to the problem.
I ran into this problem while writing a custom shapefile-to-Oracle-Locator importer in C#.
What I did is change the ordinatesArray variable type from decimal[] to double[] in the SdoGeometry class. The same change (decimal to double) was needed for
public class OrdinatesArrayFactory : OracleArrayTypeFactoryBase {}
and
OrdinatesArray = GetValue((int)OracleObjectColumns.SDO_ORDINATES) in the MapToCustomObject method.
In fact, the code worked fine with the decimal type when I imported data using the oracle.spatial.util.SampleShapefileToJGeomFeature tool.
Problems started when I imported data using my own tool, which converts shapefile geometry to WKB and then inserts into Oracle using:
INSERT INTO some_table (GEOM) VALUES (SDO_UTIL.FROM_WKBGEOMETRY())
For some reason the ordinates were too big for decimal, although I handled precision.
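A sketch of the changes described above, assuming the SdoGeometry custom type follows the pattern of the linked sample (member names follow that sample; the factory shape assumes ODP.NET's IOracleArrayTypeFactory):

```csharp
using System;
using Oracle.DataAccess.Types;

public partial class SdoGeometry
{
    // was: private decimal[] ordinatesArray;
    private double[] ordinatesArray;

    public double[] OrdinatesArray
    {
        get { return ordinatesArray; }
        set { ordinatesArray = value; }
    }
}

// The array factory must allocate double[] buffers instead of decimal[],
// so ODP.NET never materializes the high-precision values as decimal.
public class OrdinatesArrayFactory : IOracleArrayTypeFactory
{
    public Array CreateArray(int numElems) { return new double[numElems]; }
    public Array CreateStatusArray(int numElems) { return null; }
}
```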
I'm using System.Data.SQLite to save data to a database. In the table there are 2 columns defined as float. However, when I save data to the DB it stores only the integer part.
12.2345 => 12
11.5324 => 11
(I don't know how exactly it rounds, it just rounds)
Yes, I'm sure I'm using floats in my application, and I'm sure that CommandText contains float numbers, NOT integers.
You need to be using floats in your application, but you also need to make sure the column in the SQLite database has been explicitly declared with the REAL type.
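A sketch of both halves of that advice with System.Data.SQLite (file and table names are illustrative). Passing the value as a parameter also sidesteps a common cause of truncated fractions: formatting the double into the SQL text under a culture that writes 12.2345 as "12,2345".

```csharp
using System.Data.SQLite; // System.Data.SQLite NuGet package

using (var conn = new SQLiteConnection("Data Source=readings.db"))
{
    conn.Open();
    using (var cmd = conn.CreateCommand())
    {
        // Declare the column as REAL so it gets floating-point affinity.
        cmd.CommandText = "CREATE TABLE IF NOT EXISTS readings (value REAL)";
        cmd.ExecuteNonQuery();

        // Bind the double as a parameter instead of string-formatting it
        // into the CommandText.
        cmd.CommandText = "INSERT INTO readings (value) VALUES (@v)";
        cmd.Parameters.AddWithValue("@v", 12.2345);
        cmd.ExecuteNonQuery();
    }
}
```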
There is no support for float as a built-in type. Use TEXT instead.