I have about a dozen fields marked as tinyint, which translates to a byte property. I'm finding that I'm having to do a lot of casting to int in my code to interact with integers.
Because of this, I'm thinking of changing the Entity Framework model to read these as integers instead of bytes. Are there any implications of this, besides the chance that I may pass in an integer that is out of range of the tinyint? Am I just adding additional casts where I may not need them?
(I'm also thinking of just using integer instead in the database, because this isn't going to be a high usage DB.)
Edit
From Zach's comment below / Entity Framework Mapping SQL Server tinyint to Int16 -- it looks like I can't just change the property from byte to int; EF will throw an error. So, is there even a way to do what I'm thinking?
There shouldn't really be any issue when you use integers, as the compiler actually optimizes integer operations (if I'm not mistaken).
The best bet, IMHO, would be to change your database columns from tinyint to int.
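If changing the columns isn't an option, one workaround is to wrap the generated byte property in a partial class (EF database-first generates partial entity classes, so this compiles alongside the generated code). This is only a sketch; the Order entity and Status property names are hypothetical stand-ins for your generated types:

```csharp
// Hypothetical generated entity "Order" with "public byte Status { get; set; }".
// The wrapper below is not part of the EF model, so EF ignores it.
public partial class Order
{
    public int StatusAsInt
    {
        get { return Status; }                 // implicit byte -> int widening, no cast needed
        set { Status = checked((byte)value); } // throws if the int is outside the tinyint range
    }
}
```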
Related
I have an MSSQL table with records that contain amounts of cryptocurrencies in their smallest units (BTC: Satoshi, ETH: Wei, ...). These amounts can be pretty big (1 ETH is 1,000,000,000,000,000,000 Wei), so I wanted to store them in a NUMERIC(38,0) column.
Then I have my client application, where I'm using Entity Framework Core (6), and in my entity class the amount is declared as a BigInteger. The problem is that I don't know how to do the mapping. I can use a ValueConverter<BigInteger, decimal>, but I (of course) get an overflow error when the number in the database is larger than the range of the Decimal type.
Are there any other options, or what are the best practices? The other problem is that I would also want to use aggregate functions (SUM) on these amounts, but I'm not sure if that is possible using BigInteger and EF Core. I would also probably have problems with plain ADO and SqlDataReader.GetDecimal (since I would get the same overflow error). Any advice?
I (of course) get an error (overflow) when the number in the database is larger than the range of the Decimal type.
If you need numbers larger than a .NET Decimal, you'll have to convert them to strings in SQL Server to read them on the client.
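With plain ADO.NET, a minimal sketch of that string round-trip looks like this. The dbo.Balances table and Amount column are assumptions for illustration; Microsoft.Data.SqlClient works the same way as System.Data.SqlClient here:

```csharp
using System.Data.SqlClient;
using System.Numerics;

static class BalanceReader
{
    // CAST the NUMERIC(38,0) to text on the server so the reader never has to
    // materialize it as a .NET decimal (which overflows above roughly 7.9e28).
    public static BigInteger ReadAmount(string connectionString, int id)
    {
        using var connection = new SqlConnection(connectionString);
        using var command = new SqlCommand(
            "SELECT CAST(Amount AS VARCHAR(40)) FROM dbo.Balances WHERE Id = @id",
            connection);
        command.Parameters.AddWithValue("@id", id);
        connection.Open();
        return BigInteger.Parse((string)command.ExecuteScalar());
    }
}
```

For SUM you would likewise have to aggregate on the server and read the result back as a string.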
I have a table that holds constant values. Is it better to keep this table in my database (SQL Server), or to have an enum in my code and delete the table?
My table has only 2 columns and at most 20 rows; the rows are fixed and get filled once, the first time I run the application.
I would suggest creating an enum for your case. Since the values are fixed (and I am assuming the table is not going to change very often), you can use an enum. Creating a table in the database requires an unnecessary hit to the database and a database connection, both of which can be skipped if you use an enum.
Also, a lot may depend on how many operations you are going to do with your values. For example: it's tedious to query your enum values to get distinct values, whereas with the table approach it would be a simple SELECT DISTINCT. So you may have to look into your needs and the operations you will perform on these values.
As far as the performance is concerned you can look at: Enum Fields VS Varchar VS Int + Joined table: What is Faster?
As you can see, ENUM and VARCHAR results are almost the same, but join query performance is 30% lower. Also note the times themselves – traversing about same amount of rows full table scan performs about 25 times better than accessing rows via index (for the case when data fits in memory!)
So, if you have an application and you need to have some table field with a small set of possible values, I'd still suggest you to use ENUM, but now we can see that performance hit may not be as large as you expect. Though again a lot depends on your data and queries.
That depends on your needs.
You may want to translate the enum values (if you are showing them in a GUI) and order a set of records based on the translated values. For example: imagine you have an Employees table and a Position column. If the record set is big and you want to sort or order by the translated position column, then you have to keep the enum values + translations in the database.
Otherwise, KISS and keep it in code. You will spare yourself the time spent asking the database for the values.
It depends on the character of those constants.
If they are low-level system constants that should never change (like pi = 3.1415), then it is better to keep them only in code, in some config file. Also, if performance is a critical parameter and you use them very often (on almost every request), it is better to keep them in code.
If they are constants (maybe business constants) that can change in the future, it is OK to put them in a table - then you have more flexibility to change them (for instance, from an admin panel).
It really depends on what you actually need.
With Enum
It is faster to access
Bound to that particular application (although you can share it via a referenced assembly, it just does not look as good as using the DB).
You can use it in a switch statement.
An enum usually does not care about the value itself, and it is limited to integral underlying types (int by default).
With DB
It is slower, because you have to make a connection and run a query.
The data can be shared widely.
You can set the value to be anything (any type, any value).
So, if you will use it only in one application, an enum is good enough. But if several applications are going to use it, then the DB would be the better option.
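To make the enum route concrete, here is a minimal sketch; the enum name, members, and explicit values are hypothetical stand-ins for the ~20 fixed rows (pinning explicit values keeps them in sync with the IDs that used to live in the table):

```csharp
// Explicit values mirror the hypothetical table's key column.
public enum OrderStatus
{
    Pending = 1,
    Approved = 2,
    Shipped = 3,
    Cancelled = 4
}

public static class OrderStatusDescriber
{
    // Enums drop straight into switch expressions/statements, as noted above.
    public static string Describe(OrderStatus status) => status switch
    {
        OrderStatus.Pending   => "Waiting for approval",
        OrderStatus.Approved  => "Approved",
        OrderStatus.Shipped   => "On its way",
        OrderStatus.Cancelled => "Cancelled",
        _                     => "Unknown"
    };
}
```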
I'm building a C# project configuration system that will store configuration values in a SQL Server db.
I was originally going to set the table up as such:
KeyId int
FieldName varchar
DataType varchar
StringValue varchar
IntValue int
DecimalValue decimal
...
Values would be stored and retrieved with the value in the DataType column determining which Value column to use, but I really don't like that design. So I thought I'd go this route:
KeyId int
FieldName varchar
DataType varchar
Value varbinary
Here the value in DataType would still determine the type of Value brought back, but it would all be in one column and I wouldn't have to write a ton of overloads to accommodate the different types like I would have with the previous solution. I would just pull the Value in as a byte array and use DataType to perform whatever conversion(s) necessary to get my Value.
Is the varbinary approach going to cause any performance issues or is it just bad practice to drop all these different types of data into a varbinary? I've been searching around for about an hour and I can't get to a definitive answer.
Also, if there is a more preferred method anyone can think of to reach the same conclusion, I'm all ears (or eyes).
You could serialize your settings as JSON and just store that as a string. Then you have all the settings within one row and your clients can deserialize as needed. This is also a safe way to add additional settings at any time without any modifications to your database.
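A minimal sketch of that JSON approach with System.Text.Json; the AppSettings shape and property names are made up for illustration, and the serialized string would go into a single NVARCHAR(MAX) column:

```csharp
using System.Text.Json;

// Hypothetical settings object; add properties freely without schema changes.
public class AppSettings
{
    public string Theme { get; set; } = "dark";
    public int PageSize { get; set; } = 25;
    public decimal TaxRate { get; set; } = 0.07m;
}

public static class SettingsSerializer
{
    // Store this string in one row/column; clients deserialize as needed.
    public static string ToJson(AppSettings settings) =>
        JsonSerializer.Serialize(settings);

    public static AppSettings FromJson(string json) =>
        JsonSerializer.Deserialize<AppSettings>(json) ?? new AppSettings();
}
```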
We are using the second solution and it works well. Remember that disk access is orders of magnitude slower than, e.g., a casting operation (it's milliseconds vs. nanoseconds, see ref), so do not look for the bottleneck here.
One solution could be to implement polymorphic association (1, 2), but I don't think there is a need for that, or that you should do it. The second solution is close to a non-SQL DB - you can dump anything as a value, might as well be the entire HTML markup for a page. It should be the caller's responsibility to know what to do with the data.
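As a rough sketch of that caller-side responsibility, the DataType column can drive the conversion of the varbinary payload. The encodings chosen here (little-endian int via BitConverter, UTF-8 text for strings and decimals) are assumptions, not the only possibility:

```csharp
using System;
using System.Globalization;
using System.Text;

public static class ConfigValueCodec
{
    // DataType decides how the varbinary payload is interpreted.
    public static object Decode(string dataType, byte[] value) => dataType switch
    {
        "int"     => BitConverter.ToInt32(value, 0),
        "decimal" => decimal.Parse(Encoding.UTF8.GetString(value), CultureInfo.InvariantCulture),
        "string"  => Encoding.UTF8.GetString(value),
        _         => throw new NotSupportedException($"Unknown DataType '{dataType}'")
    };

    public static byte[] Encode(object value) => value switch
    {
        int i     => BitConverter.GetBytes(i),
        decimal d => Encoding.UTF8.GetBytes(d.ToString(CultureInfo.InvariantCulture)),
        string s  => Encoding.UTF8.GetBytes(s),
        _         => throw new NotSupportedException($"Unsupported CLR type {value.GetType()}")
    };
}
```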
Also, see threads on how to store settings in DB: 1, 2 and 3 for critique.
I'm using EF database first. In my database I have a field which I know will always be 10 digits long, so naturally I've opted for a decimal(10,0) type. When I insert values into the table directly, I can insert any number up to 10 digits long; however, when I insert an entity with EF6, it adds a 0 decimal and then throws a parameter-out-of-range error. The type of the field in my C# code is decimal.
Here is the entity immediately before calling context.SaveChanges():
And a sanity check, here is the column in sql server:
EDIT:
Here is the EF mapping:
Just reported it on codeplex.
The error sounds like a bug in EF, but if the value is always an integer, wouldn't it make more sense to use an int instead of a decimal? I'm thinking for both logical and performance reasons.
This is actually a bug in SqlClient - SqlDecimal code doesn't handle the precision correctly for some edge cases.
I believe the culprit is here:
http://referencesource.microsoft.com/#System.Data/data/System/Data/SQLTypes/SQLDecimal.cs#416
However, given that this has been the behavior for a long time, it is very unlikely it will ever be fixed. I would recommend working around this by using bigint, like user1648371 and Andrew Morton have suggested.
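For what the bigint workaround ends up looking like on the C# side (after changing the column and refreshing the model), a long property holds any 10-digit value comfortably; the entity and property names below are hypothetical:

```csharp
public class Invoice
{
    public int Id { get; set; }

    // decimal(10,0) column changed to bigint; maps to long and sidesteps the
    // SqlDecimal precision edge case entirely.
    public long ReferenceNumber { get; set; }
}
```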
The scenario is that I want to encrypt finance numbers in a column with a data type of int in a SQL Server table.
It is a big app, so it is difficult to change the table column's data type from int to anything else.
I'm using SQL Server 2005 and ASP.NET with C#.
Is there a two-way encryption method for a column with a data type of int?
Could I use a user-defined-function in sql server 2005 or a possibly a C# method?
I'm sorry but I simply can't see the rationale for encrypting numbers in a database. If you want to protect the data from prying eyes, surely SQL Server has security built into it, yes?
In that case, protect the database with its standard security. If not, get a better DBMS (though I'd be surprised if this were necessary).
If you have bits of information from that table that you want to make available (like some columns but not others), use a view, or a trigger to update another table (less secured), or a periodic transfer to that table.
XOR? :)
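Tongue-in-cheek as that is, XOR with a fixed key really is a reversible int-to-int mapping that fits the existing column; to be clear, it is obfuscation only, not encryption. The key below is an arbitrary assumption:

```csharp
public static class XorObfuscator
{
    private const int Key = 0x5A3C96E1;   // any fixed 32-bit key works

    public static int Encode(int value) => value ^ Key;
    public static int Decode(int value) => value ^ Key;   // XOR is its own inverse
}
```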
There are a few two-way encryption schemes available in .NET.
Simple insecure two-way "obfuscation" for C#
You can either convert the integer to its byte array equivalent, or convert it to a base-64 string, and encrypt that.
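A hedged sketch of that byte-array route using AES from System.Security.Cryptography; note the ciphertext is longer than 4 bytes, so it would need a varbinary or varchar column rather than the existing int column, and key/IV management is left out entirely:

```csharp
using System;
using System.Security.Cryptography;

public static class IntCipher
{
    public static string Encrypt(int value, byte[] key, byte[] iv)
    {
        using var aes = Aes.Create();
        aes.Key = key;
        aes.IV = iv;
        using var encryptor = aes.CreateEncryptor();
        byte[] plain = BitConverter.GetBytes(value);          // 4-byte little-endian int
        byte[] cipher = encryptor.TransformFinalBlock(plain, 0, plain.Length);
        return Convert.ToBase64String(cipher);                 // one padded AES block
    }

    public static int Decrypt(string base64, byte[] key, byte[] iv)
    {
        using var aes = Aes.Create();
        aes.Key = key;
        aes.IV = iv;
        using var decryptor = aes.CreateDecryptor();
        byte[] cipher = Convert.FromBase64String(base64);
        byte[] plain = decryptor.TransformFinalBlock(cipher, 0, cipher.Length);
        return BitConverter.ToInt32(plain, 0);
    }
}
```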
Well, every bijective (injective and surjective) function from int to int can be used as a way to "encode" an integer.
You could build such a function by creating a random array with 65536 items and no duplicate entries, and using f(i) = a[i]. To "decode" your int you simply create another array with b[i] = x such that a[x] = i.
As the others have mentioned, this may not be what you REALLY want to do. =)
Edit: Check out Jim Dennis' comment!
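A minimal sketch of that lookup-table idea; as in the answer, the 65536-entry array only covers values 0..65535, so a full 32-bit int range would need a different construction (for example the format-preserving encryption mentioned below):

```csharp
using System;
using System.Linq;

class PermutationCodec
{
    private readonly int[] _encode;   // a[i] : plain -> encoded
    private readonly int[] _decode;   // b[i] : encoded -> plain

    public PermutationCodec(int seed)
    {
        var rng = new Random(seed);
        // Shuffle 0..65535 into a random permutation (good enough for a sketch;
        // both sides must use the same seed to get the same table).
        _encode = Enumerable.Range(0, 65536).OrderBy(_ => rng.Next()).ToArray();
        _decode = new int[65536];
        for (int i = 0; i < _encode.Length; i++)
            _decode[_encode[i]] = i;   // invert: b[a[i]] = i
    }

    public int Encode(int value) => _encode[value];
    public int Decode(int value) => _decode[value];
}
```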
You might want to look at format preserving encryption.