I'm trying to store a value with three decimal places in an MS SQL database, but for some reason the third decimal is always changed to a 0, and I can't figure out why.
The project was created by somebody else with Entity Framework Code First v4.0.30319, which created a column in the database as [Amount] [decimal](18, 2) NULL. I have manually changed the database column to [Amount] [decimal](18, 3) NULL.
In the code:
var _yourbid = form["YourBid"]; // value = "10.123"
decimal yourbid;
if (decimal.TryParse(_yourbid, out yourbid))
{
    Bid b = new Bid();
    b.Amount = yourbid;
    //yourbid = 10.123
    //b.Amount = 10.123
    Db.Bids.Add(b);
    Db.SaveChanges();
    //in the database the value is 10.120
}
Now, I expected that Code First had declared the decimal with a scale of 2 somewhere, but I couldn't find anything. I checked the options listed in Decimal precision and scale in EF Code First, but none of them are used.
There isn't a trigger on the database that might be changing it either, and I can insert the correct value directly via SQL.
I must be missing something obvious, and I hope you can point me in the right direction.
This is likely because your Entity Framework model and the underlying database no longer match. For an Entity Framework Code First project, you should update your model first and then use the migrations feature to propagate the change to the database. Before doing this, you should change the Amount column on the DB table back to a scale of 2 so the difference can be detected.
To update your model, see this SO answer on how to customise the precision of a decimal field:
Decimal precision and scale in EF Code First
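In outline, that model change is a HasPrecision call in your context's OnModelCreating override. Here's a minimal sketch; the Bid entity follows the question, but the context shape is an assumption:
public class BidContext : DbContext
{
    public DbSet<Bid> Bids { get; set; }

    protected override void OnModelCreating(DbModelBuilder modelBuilder)
    {
        // Tell EF that Amount should map to decimal(18, 3)
        modelBuilder.Entity<Bid>()
                    .Property(b => b.Amount)
                    .HasPrecision(18, 3);
    }
}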
(Things are easier in later versions of EF, so you may want to consider upgrading at some point in the future.)
You should then add a migration, which records the SQL actions to apply to the database. Use Add-Migration in the Package Manager Console for this scaffolding step. Finally, run Update-Database to execute the change against the target database.
There's more background information on the web, including this tutorial from Microsoft: https://msdn.microsoft.com/en-gb/data/jj591621.aspx
Edit:
Here's some migration code that will perform the precision change. You may need to create this manually if you're running an older version of EF and can't use HasPrecision:
public partial class PrecisionChange : DbMigration
{
    public override void Up()
    {
        AlterColumn("dbo.SomeTable", "Amount",
            c => c.Decimal(nullable: true, precision: 18, scale: 3));
    }

    public override void Down()
    {
        AlterColumn("dbo.SomeTable", "Amount",
            c => c.Decimal(nullable: true, precision: 18, scale: 2));
    }
}
Related
I have a hard time understanding why Entity Framework skips columns that are defined as decimal. I've tried deleting the model a couple of times and adding it back using the database-first approach, but for some reason a few columns are not mapped.
The ones that are defined as Date, Int or Text have absolutely no problem. The ones giving me a hard time are the decimal ones, and I have defined them as such:
Name: Hours, Datatype: Decimal, Length/Set: 10,2, Unsigned: Checked, Allow Null: Unchecked, Zerofill: Unchecked, Default value: 0.00.
If I create a view with sums based on that same table, EF has no problem identifying the decimal columns. How can I add the missing columns to my model? What am I doing wrong and is there a workaround?
Thank you
After spending hours on this I finally figured it out. If you are facing the same problem, make sure that your columns are not UNSIGNED.
For some reason Entity Framework does not map decimal columns that are unsigned. Just uncheck that option and you should be good.
I am using Entity Framework 5 code first to talk to an Oracle 11g or 12c database (I've verified the problem in both versions). The Oracle field is a FLOAT type while the field in my POCO is a decimal. I'm using decimal because I need my decimal places to be accurate up to 5 digits.
When I save my entity, the resulting record in the database always rounds the value to 2 decimal places.
I have verified through a database tool (Toad) that the column will support a precision of 5. I cannot change the data type in the database due to backwards compatibility. I have found that using a double does not have the same problem, but double is notorious for giving inexact numbers, especially when multiple mathematical operations are performed.
Does anyone know why a decimal value would be truncated? I am using the Oracle data provider.
The link provided by @Grant in the comments above provided the answer; I will paraphrase here. The default mapping for a decimal value is to an Oracle DECIMAL(18,2), which is why it was rounding to two decimal places. To change the default behavior, you have to add a statement in the OnModelCreating override inside the Context class. (In EF6 you can change the convention for all of the decimal fields at once, as noted here.)
Change Decimal Mapping for a Particular Decimal Field
protected override void OnModelCreating(DbModelBuilder modelBuilder)
{
    //...
    modelBuilder.Entity<MyEntity>().Property(x => x.MyProperty).HasPrecision(18, 5);
    //...
}
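For reference, the EF6 all-at-once approach mentioned above looks roughly like this, using EF6's custom conventions API (the precision and scale values here are an assumption):
protected override void OnModelCreating(DbModelBuilder modelBuilder)
{
    // EF6 only: apply one precision/scale to every decimal property in the model
    modelBuilder.Properties<decimal>()
                .Configure(c => c.HasPrecision(18, 5));
}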
I'm fighting through Entity Framework 6 and a MySQL database.
I got everything to work, however I'm confused about dates and non-mandatory values.
In my database, the "Users" table has a column "RegistrationDate" with a default value of CURRENT_TIMESTAMP,
which means that if no value is provided at insertion time, it inserts the default value, i.e. the date and time of the server.
I got my schema reverse-engineered into C# and everything works perfectly; however, when I insert a "User" without setting the "RegistrationDate" property, it inserts "0001-01-01 00:00:00" as the date into the database and ignores CURRENT_TIMESTAMP.
So I'm wondering how to get it to ignore "RegistrationDate" and not insert anything into the db if it wasn't specifically set to some date?
My guess is that the SQL EF generates is setting the field value. Even if you don't set it in code, EF doesn't know that the database has a default value, and doesn't know that it should ignore the column.
This article, from 2011, says that there is a DatabaseGenerated attribute, which you could use like this:
[DatabaseGenerated(DatabaseGeneratedOption.Computed)]
public DateTime RegistrationDate { get; set; }
So, EF now knows that it should retrieve the value when you query the database, but should rely on the database to set it.
However, I don't know what it would do if you explicitly set the value; maybe it will ignore it, which may not be what you really want.
I didn't test it, it's just a guess, but it's a nice solution in my opinion.
[Edit1] Some months ago, I saw this video, and at 49:12 the guy does something like this in his DbContext class (I believe you have one). The video is in Portuguese; I have modified the code, but didn't test it:
//This method will be called for every change you save - performance may be a concern
public override int SaveChanges()
{
    //Every entity that has the particular property
    foreach (var entry in ChangeTracker.Entries()
        .Where(e => e.Entity.GetType().GetProperty("YourDateField") != null))
    {
        var date = entry.Property("YourDateField");
        if (entry.State == EntityState.Added)
        {
            //I guess that if it's 0001-01-01 00:00:00, you want it to be DateTime.Now, right?
            //Of course you may want to verify that the value really is a DateTime - omitted here for brevity.
            if ((DateTime)date.CurrentValue == default(DateTime))
            {
                date.CurrentValue = DateTime.Now;
            }
            //else: keep the value you set yourself - the possibilities are endless!
        }
        else if (entry.State == EntityState.Modified)
        {
            //If it's modified, maybe you want the same default-value check:
            //if the field hasn't been set, exclude it from the UPDATE SQL statement.
            if ((DateTime)date.CurrentValue == default(DateTime))
            {
                date.IsModified = false;
            }
        }
    }
    return base.SaveChanges();
}
I think many of us have been caught out by default database values when dealing with EF - it doesn't take them into account (there are many questions on this - e.g. Entity Framework - default values doesn't set in sql server table )
I'd say if you haven't explicitly set a datetime and want it to be null, you'll need to do it in code.
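For example, "doing it in code" can be as simple as initializing the property yourself in the entity's constructor instead of relying on the database default (a sketch; the shape of the User class here is an assumption):
public class User
{
    public User()
    {
        // Set the timestamp in code instead of relying on CURRENT_TIMESTAMP
        RegistrationDate = DateTime.Now;
    }

    public DateTime RegistrationDate { get; set; }
}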
Scenario:
I have an application (C#) that expects a SQL database and login, which are set by a user. Once connected, it checks for the existence of several tables and creates them if they're not found.
I'd like to expand on this by having the program be capable of adding columns to those tables if I release a new version of the program that relies upon the new columns.
Question:
What is the best way to programmatically check the structure of an existing SQL table and create or update it to match an expected structure?
I am planning to iterate through the list of required columns and alter the existing table whenever it does not contain the new column, as sketched below. I can't help but wonder if there's a different or better approach.
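A minimal sketch of that plan using INFORMATION_SCHEMA; the connection string, table name, column type, and requiredColumns list are all placeholders:
using System.Data.SqlClient;

// Adds any missing columns from requiredColumns to the readings table.
static void EnsureColumns(string connectionString, string[] requiredColumns)
{
    using (var conn = new SqlConnection(connectionString))
    {
        conn.Open();
        foreach (var col in requiredColumns) // e.g. new[] { "x5" }
        {
            var check = new SqlCommand(
                "SELECT COUNT(*) FROM INFORMATION_SCHEMA.COLUMNS " +
                "WHERE TABLE_NAME = 'readings' AND COLUMN_NAME = @col", conn);
            check.Parameters.AddWithValue("@col", col);
            if ((int)check.ExecuteScalar() == 0)
            {
                // Column names come from our own list, not user input, so
                // concatenation is acceptable here. New columns allow NULL,
                // so existing rows simply default to NULL.
                var alter = new SqlCommand(
                    "ALTER TABLE readings ADD [" + col + "] decimal(18, 3) NULL", conn);
                alter.ExecuteNonQuery();
            }
        }
    }
}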
Criteria:
Here are some of my expectations and self-imposed rules:
Newer versions of the program might no longer use certain columns, but they would be retained for data logging purposes. In other words, no columns will be removed.
Existing data in the table must be preserved, so the table cannot simply be dropped and recreated.
In all cases, newly added columns would allow null data, so the population of old records is taken care of by having default null values.
Example:
Here is a sample table (because visual examples help!):
id  datetime         sensor_name  sensor_status  x1    x2    x3    x4
1   20100513T151907  na019        OK             0.01  0.21  1.41  1.22
2   20100513T152907  na019        OK             0.02  0.23  1.45  1.52
Then, in a new version, I may want to add the column x5. The "x-columns" are all data-storage columns that accept null.
Edit:
I updated the sample table above. It is more of a log and not a parent table. So the sensors will repeatedly show up in this logging table with the values logged. A separate parent table contains the geographic and other logistical information about the sensor, making the table I wish to modify a child table.
This is a very troublesome feature you're thinking about implementing. I would advise against it and instead consider scripting changes using a 3rd-party tool such as Red Gate's SQL Compare: http://www.red-gate.com/products/SQL_Compare/index.htm
If you're in doubt, consider downloading the trial version of the software and generating a structure diff script for two databases with some non-trivial differences. You'll see from the result that the considerations for such operations are far from simple.
The other way around this type of issue is to redesign your database using the EAV model: http://en.wikipedia.org/wiki/Entity-attribute-value_model (It pivots the data so that you dynamically add rows instead of columns, thus never changing the structure. It has its own issues, but it's very flexible.)
(To utilize a diff tool you would have to keep a copy of all of your db versions and create diff scripts, which would go out and get executed with new releases and upgrades. That's a huge mess of its own to maintain. EAV is the way to go for a thing like this. It wrongfully gets a lot of flak for not being as performant as a traditional db structure, but I've used it a number of times with great success. In fact, I have a HIPAA-compliant EAV db (SQL Server 2000) that's been in production for over six years, with several of the EAV tables containing tens of millions of rows, and it's still going strong with no big slowdown. Of course we don't do heavy reporting against that db; for reports we have an export that flattens the data into a relational structure.)
The common solution I see would be to store version information somewhere in your database, maybe in a really small table:
CREATE TABLE DB_PROPERTIES ([key] varchar(100), [value] varchar(100));
then you could add a row:
key | value
version | 12
Then you could just create a SQL update script (or set of scripts) which updates the db from version 12 to version 13.
declare @v varchar(100)
select @v = [value] from DB_PROPERTIES where [key] = 'version'
if @v = '12'
    -- do upgrade from 12 to 13
else if @v = '11'
    -- do upgrade from 11 to 13
-- ...and so on
Depending on which upgrade paths you want to support, you could add more cases. You could also obviously move this upgrade logic into C#, or whatever design works for you (see the sketch below). But having the db version information stored in the database will make it much easier to figure out what is already there, rather than querying for all the db structures individually.
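A sketch of that upgrade logic in C#; the script file names are placeholders, and the scripts are assumed to contain no GO batch separators (ExecuteNonQuery runs a single batch):
using System.Data.SqlClient;
using System.IO;

// Reads the stored schema version and applies the matching upgrade script.
static void UpgradeDatabase(string connectionString)
{
    using (var conn = new SqlConnection(connectionString))
    {
        conn.Open();
        var version = (string)new SqlCommand(
            "SELECT [value] FROM DB_PROPERTIES WHERE [key] = 'version'", conn)
            .ExecuteScalar();

        if (version == "12")
        {
            // Run the 12 -> 13 script, then bump the stored version
            new SqlCommand(File.ReadAllText("upgrade_12_to_13.sql"), conn)
                .ExecuteNonQuery();
            new SqlCommand(
                "UPDATE DB_PROPERTIES SET [value] = '13' WHERE [key] = 'version'",
                conn).ExecuteNonQuery();
        }
        // else if (version == "11") { ...run the 11 -> 13 path... }
    }
}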
If you have to build something in such a way as to rely on the application making table changes, your design is flawed. You should have a related table for the sensor values (x1, x2, etc.); then you can just add another record rather than having to create a new column.
Suggested child table structure:
READINGS
    ID             int
    Reading_type   varchar(10)
    Reading_Value  int
Then data in the table would read:
ID  Reading_type  Reading_value
1   x1            2
1   x2            3
1   x3            1
2   x1            7
Try Microsoft.SqlServer.Management.Smo
These are a set of C# classes that provide an API to SQL Server database objects.
The Microsoft.SqlServer.Management.Smo.Table class has a Columns collection that will allow you to query and manipulate the columns.
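For example, checking for a missing column and adding it could look something like this (a sketch; the server, database, table, and column names are placeholders, and note that SMO's DataType.Decimal takes scale before precision):
using Microsoft.SqlServer.Management.Smo;

var server = new Server("localhost");
var table = server.Databases["SensorDb"].Tables["readings", "dbo"];

if (table.Columns["x5"] == null)
{
    // DataType.Decimal(scale, precision)
    var column = new Column(table, "x5", DataType.Decimal(3, 18)) { Nullable = true };
    table.Columns.Add(column);
    table.Alter(); // emits and runs the ALTER TABLE ... ADD statement
}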
Have fun.
Let's say there's an application which should create its own tables in the main database if they are missing (for example, when the application is run for the very first time). What way of doing this is more flexible, more scalable and, let's say, more suitable for a commercial product?
If I code it all, no additional files (scripts) are needed. Users won't be able to do something stupid with them and then complain that the application doesn't work. But when something changes in the db structure, I have to code the patching part, and users will have to install a new binary (or just replace the old one).
A scripting solution would be a few lines of code that just run all scripts from some directory, plus the bunch of scripts themselves. The binary could stay the same, and patching would be applied automatically. But new scripts also have to be deployed to the user at some point.
So, what would you recommend?
The application will be coded in C#, and the database for now will be SQL Server 2005, but it may change in the future. Of course, the drawing application and the database-handling part can be separated into two binaries/assemblies, but that doesn't solve my code vs. scripts dilemma.
Check Wizardby: it provides a special language (somewhat close to SQL DDL) to express changes to your database schema:
migration "Blog" revision => 1:
type-aliases:
type-alias N type => String, length => 200, nullable => false, default => ""
defaults:
default-primary-key ID type => Int32, nullable => false, identity => true
version 20090226100407:
add table Author: /* Primary Key is added automatically */
FirstName type => N /* “add” can be omitted */
LastName type => N
EmailAddress type => N, unique => true /* "unique => true" will create UQ_EmailAddress index */
Login type => N, unique => true
Password type => Binary, length => 64, nullable => true
index UQ_LoginEmailAddress unique => true, columns => [[Login, asc], EmailAddress]
add table Tag:
Name type => N
add table Blog:
Name type => N
Description type => String, nullable => false
add table BlogPost:
Title type => N
Slug type => N
BlogID references => Blog /* Column type is inferred automatically */
AuthorID:
reference pk-table => Author
These version blocks are basically changes you want applied to your database schema.
Wizardby can also be integrated into your build process, as well as into your application: on each startup it can attempt to upgrade the database to the most recent version. Thus your application will always work with the most up-to-date schema version.
It can also be integrated into your setup process: Wizardby can generate SQL scripts to alter the database schema, and these can be run as part of your setup.
I would usually want to keep my installation code separate from my application code. You will definitely want your application to perform some sort of version check against the database to ensure it has the proper structure before running, though. The basic setup I would follow is this:
1. Use scripts with each released version to make schema changes to the deployed database.
2. Have something in your database to track the current version of the database, perhaps a simple version table that tracks which scripts have been run against it. It's simpler to look for the version marker than to check the schema every time and search for all the tables and fields you need.
3. Have your application check the database version marker to ensure it meets the application's version. Log an error that will let the user know they have to update their database with the database scripts.
That should keep your application code clean, but make sure the database and app are in sync.
A versioning tool like DBGhost or DVC would then help you maintain database updates and could be modified to fit seamlessly into your application.