I am developing a C# app that has to read from and write to an existing MS SQL database. I decided to use object classes for the database, but the table columns can change during runtime, and that causes an exception on an attempt to write a new row (in the case of a new NOT NULL column).
Is there any recommendation on how to preserve the object approach to the database while dealing with variable database tables? It is not necessary to have the objects updated at runtime, just to handle the new columns - fill them with a valid default value.
More details to my solution:
I used the Data Source Configuration Wizard in VS2015, which generates objects for the database, and everything is fine. When a table gets a new column I have to run the wizard again to update the objects and define an appropriate new value.
I can't modify anything in the database structure (it's an existing ERP system). The database is huge (hundreds of tables, each with around 60+ columns), so I am looking for automated ways to generate the database objects.
I hope I just overlooked (as a newbie) some obvious solution.
Thanks for all suggestions in advance.
Petr
I would recommend doing the following:
Create a set of import tables with the needed columns and leave those tables fixed
Let your application copy data to the import tables
Update the production tables on the database from the import tables with a stored procedure
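A minimal sketch of that flow, assuming an import table dbo.ImportOrders and a stored procedure dbo.usp_MergeImportOrders (both names, and the matching DataTable layout, are placeholders):

    using System.Data;
    using System.Data.SqlClient;

    // Copy the application's rows into a fixed import table, then let a stored
    // procedure fill new/NOT NULL columns with defaults and move the data into
    // the real production table. All names are placeholders.
    public static void PushThroughImportTable(string connectionString, DataTable rows)
    {
        using (var connection = new SqlConnection(connectionString))
        {
            connection.Open();

            // 1) Bulk-copy into the fixed import table (its columns must match "rows").
            using (var bulk = new SqlBulkCopy(connection))
            {
                bulk.DestinationTableName = "dbo.ImportOrders";
                bulk.WriteToServer(rows);
            }

            // 2) The stored procedure owns the mapping to the changing production table.
            using (var merge = new SqlCommand("dbo.usp_MergeImportOrders", connection))
            {
                merge.CommandType = CommandType.StoredProcedure;
                merge.ExecuteNonQuery();
            }
        }
    }

This keeps the generated objects pointing at tables whose structure never changes; only the stored procedure has to know about new columns.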
Related
I am creating an application using Visual Studio that uses a database, of course. I don't get why we create a DataSet, as I tried some queries without creating a DataSet and it worked perfectly. The queries I will be using are UPDATE, DELETE, INSERT and SELECT (simple and complex ones).
So should I use the DataSet, and why?
Note: the database is a big one, and as I understood it, creating a DataSet will create a copy of the database, so will this cause a storage (memory) problem?
You are looking for ways without a DataSet.
For INSERT, UPDATE and DELETE you can use System.Data.Sql and System.Data.SqlClient: open your own SqlConnection and proceed (https://learn.microsoft.com/en-us/dotnet/api/system.data.sqlclient.sqlconnection?view=dotnet-plat-ext-3.1). For reads (SELECT), however, it is practical to use a DataSet. At this level (below Entity Framework!) a DataAdapter's Fill() method can fill a DataSet with any data you want. The only class I know of that can read without a DataSet is the DataReader; see https://learn.microsoft.com/en-us/dotnet/framework/data/adonet/retrieving-data-using-a-datareader
NOTE: as mentioned in the comments above, these low-level methods of accessing a database are inherently unsafe if you build the SQL text yourself, because explicit SQL passed to ADO.NET can be manipulated through user input. This can be avoided by using a parameterized DataAdapter + DataSet, or Entity Framework; either way you can avoid SQL injection.
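For what it's worth, a small hedged example of that low-level route with parameters (the connection string, table and column names are all made up):

    using System.Data.SqlClient;

    // Read rows without a DataSet, using a parameterized command so that user
    // input never becomes part of the SQL text. Table/column names are examples.
    public static void PrintCustomers(string connectionString, string userSuppliedCity)
    {
        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand(
            "SELECT Id, Name FROM dbo.Customers WHERE City = @city", connection))
        {
            command.Parameters.AddWithValue("@city", userSuppliedCity);

            connection.Open();
            using (SqlDataReader reader = command.ExecuteReader())
            {
                while (reader.Read())
                {
                    System.Console.WriteLine("{0}: {1}", reader.GetInt32(0), reader.GetString(1));
                }
            }
        }
    }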
If you have a big database you should consider using an ORM like Entity Framework. With plain old ADO.NET, a DataSet helps you handle your data. Inside it you will have a DataTable that represents your data, so whenever you run a SELECT the selected rows are stored temporarily in your DataSet/DataTable. That said, you don't need your whole database there, only the rows you need to work on.
Every time you add a new record to the DataTable, it is flagged as new. When you modify or delete a row, it is flagged as modified or deleted accordingly. So after all the handling you can save your DataSet back to the database.
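A hedged sketch of that round trip with a SqlDataAdapter (connection string, table and column names are examples, and Id is assumed not to be an identity column):

    using System.Data;
    using System.Data.SqlClient;

    public static void EditCustomers(string connectionString)
    {
        using (var adapter = new SqlDataAdapter(
            "SELECT Id, Name FROM dbo.Customers WHERE Name LIKE 'A%'", connectionString))
        using (var builder = new SqlCommandBuilder(adapter)) // builds INSERT/UPDATE/DELETE commands
        {
            var table = new DataTable();
            adapter.Fill(table); // only the selected rows end up in memory, not the whole database

            // Modify an existing row: its RowState becomes Modified.
            if (table.Rows.Count > 0)
                table.Rows[0]["Name"] = "Alice";

            // Add a new row: its RowState becomes Added.
            DataRow added = table.NewRow();
            added["Id"] = 12345;   // assumes Id is supplied by the application
            added["Name"] = "Bob";
            table.Rows.Add(added);

            // Update() sends INSERT/UPDATE/DELETE statements based on each RowState.
            adapter.Update(table);
        }
    }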
I've hit a wall when it comes to adding a new entity object (a regular SQL table) to the Data Context using LINQ-to-SQL. This isn't regarding the drag-and-drop method that is cited regularly across many other threads. This method has worked repeatedly without issue.
The end goal is relatively simple. I need to find a way to add a table that gets created during runtime via stored procedure to the current Data Context of the LINQ-to-SQL dbml file. I'll then need to be able to use the regular LINQ query methods/extension methods (InsertOnSubmit(), DeleteOnSubmit(), Where(), Contains(), FirstOrDefault(), etc...) on this new table object through the existing Data Context. Essentially, I need to find a way to procedurally create the code that would otherwise be automatically generated when you do use the drag-and-drop method during development (when the application isn't running), but have it generate this same code while the application is running via command and/or event trigger.
More Detail
There's one table that gets used a lot and, over the course of an entire year, collects many thousands of rows. Each row contains a timestamp and this table needs to be divided into multiple tables based on the year that the row was added.
Current Solution (using one table)
Single table with tens of thousands of rows which are constantly queried against.
Table is added to Data Context during development using drag-and-drop, so there are no additional coding issues
Significant performance decrease over time
Goals (using multiple tables)
(Complete) While the application is running, use C# code to check if a table for the current year already exists. If it does, no action is taken. If not, a new table gets created using a stored procedure with the current year as a prefix on the table name (2017_TableName, 2018_TableName, 2019_TableName, and so on...).
(Incomplete) While the application is still running, add the newly created table to the active LINQ-to-SQL Data Context (the same code that would otherwise be added using drag-and-drop during development).
(Incomplete) Run regular LINQ queries against the newly added table.
Final Thoughts
Other than the above, my only other concern is how to write the C# code that references a table that may or may not already exist. Is it possible to use a variable in place of the standard 'DB_DataContext.2019_TableName' methodology in order to actually get the table's data into a UI control? Is there a way to simply create an Enumerable of all the tables where the name is prefixed with a year and then select the most current table?
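(As a side note, one hedged way to query a table whose name is only known at run time is DataContext.ExecuteQuery, mapping the result onto a plain class; YearlyRecord and the column names below are made up.)

    using System;
    using System.Collections.Generic;
    using System.Data.Linq;

    // Plain class whose property names match the yearly table's columns (assumed).
    public class YearlyRecord
    {
        public int Id { get; set; }
        public DateTime Timestamp { get; set; }
    }

    public static class YearlyTableReader
    {
        public static List<YearlyRecord> ReadCurrentYear(DataContext db)
        {
            // Table names cannot be passed as parameters, so the year is built into
            // the SQL text; actual values still go through the {0} placeholders.
            string tableName = DateTime.Now.Year + "_TableName";
            string sql = "SELECT Id, Timestamp FROM [" + tableName + "] WHERE Timestamp >= {0}";

            return new List<YearlyRecord>(
                db.ExecuteQuery<YearlyRecord>(sql, new DateTime(DateTime.Now.Year, 1, 1)));
        }
    }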
From what I've read so far, the most likely solution seems to involve the use of a SQL add-on like SQLMetal or Huagati which (based solely on what I've read) will generate the code I need during runtime and update the corresponding dbml file. I have no experience using these types of add-ons, so any additional insight into these would be appreciated.
Lastly, I've seen some references to LINQ-to-Entities and/or LINQ-to-Objects. Would these be the components I'm looking for?
Thanks for reading through a rather lengthy first post. Any comments/criticisms are welcome.
The simplest way to achieve what you want is to redirect in SQL Server and leave your client code alone. At design time, create your L2S DataContext or EF DbContext referencing a database with only a single table. Then at run time, substitute a view or synonym for that table that points to the "current year" table.
HOWEVER, this should not be necessary in the first place. SQL Server supports partitioning, so you can store the data in physically separate structures but still have a single logical table. SQL Server also supports columnstore indexes, which can compress and store many millions of rows with excellent performance.
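A hedged sketch of the synonym redirect, run at startup or once a year (all object names are placeholders):

    using System;
    using System.Data.SqlClient;

    // Point the fixed name that the DataContext is mapped against (dbo.TableName)
    // at the physical table for the current year (e.g. dbo.2019_TableName).
    // All object names here are placeholders.
    public static void RedirectSynonymToCurrentYear(string connectionString)
    {
        string yearTable = "[dbo].[" + DateTime.Now.Year + "_TableName]";

        using (var connection = new SqlConnection(connectionString))
        using (var command = connection.CreateCommand())
        {
            connection.Open();

            command.CommandText =
                "IF OBJECT_ID('dbo.TableName', 'SN') IS NOT NULL DROP SYNONYM dbo.TableName;";
            command.ExecuteNonQuery();

            command.CommandText = "CREATE SYNONYM dbo.TableName FOR " + yearTable + ";";
            command.ExecuteNonQuery();
        }
    }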
I am currently in the process of creating a SQL Server CE database application in C# and I am having some logic issues that I thought maybe someone could help with.
Objective: to be able to supply an XML file to the end user, which tells the program to create a new set of tables using the supplied structure (new tables with tmp_ prefix). Existing data then needs to be moved from the old tables to the new ones (with new structure), then the old tables need to be dropped.
I've written too much code to be able to paste it here, so I'm going to break it down into logical steps (as it is a logical issue, not a compiler issue).
Get new database structure from supplied XML file, read into datatable [DONE]
Dynamically concatenate a SQL query to create new table with tmp_ prefix [DONE]
Compare new structure with old structure, move relevant data across [NOT DONE]
I am having problems with the logical approach to step 3. Basically I need to move data from the old structure to the new structure - ignoring old columns which do not appear in the new set of columns, and entering blank data for new columns which do not appear in the list of old columns. I also need to adhere to the new column schema, such as data type, max length, etc. This is seriously making my head hurt as I'm very new to C#. Does anyone have ideas on the best way to approach this?
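For what it's worth, a hedged sketch of one way to do step 3, assuming the old and new column names have already been read into lists and that new NOT NULL columns carry a DEFAULT in the tmp_ table definition (all names below are placeholders):

    using System;
    using System.Collections.Generic;
    using System.Data.SqlServerCe;

    // Copy data from the old table into the tmp_ table: columns present in both
    // are copied, dropped columns are ignored, and brand-new columns fall back
    // to the DEFAULT declared in the new schema (or NULL if nullable).
    public static void CopyMatchingColumns(SqlCeConnection connection,
                                           string oldTable, string newTable,
                                           IEnumerable<string> oldColumns,
                                           IEnumerable<string> newColumns)
    {
        var oldSet = new HashSet<string>(oldColumns, StringComparer.OrdinalIgnoreCase);
        var shared = new List<string>();
        foreach (string column in newColumns)
            if (oldSet.Contains(column))
                shared.Add(column);

        string columnList = "[" + string.Join("], [", shared.ToArray()) + "]";

        string sql = "INSERT INTO [" + newTable + "] (" + columnList + ") " +
                     "SELECT " + columnList + " FROM [" + oldTable + "]";

        using (var command = new SqlCeCommand(sql, connection))
            command.ExecuteNonQuery();
    }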
Thanks in advance!
I need to create a tool that is able to merge clients' production databases.
Usually these databases will have the same schema (I'll do some checks later on, but for now we'll assume they do). Filtering out duplicate data is also something for later.
This needs to be done automatically (so no script generation via SSMS, etc.).
I've already had to start over a couple of times because I kept running into problems I hadn't thought of, so this time I wanted to ask you guys for advice before I begin all over again.
My current plan of action is:
Copy the schema from database 1 (later on I'll add some checks here for when the schemas differ)
Loop over all tables, set all foreign key updates to cascade, and determine the order in which the table data needs to be inserted (the tables containing the PKs first, then the tables holding the FKs)
Loop over every table in the correct order:
Check the database 2 table for an identity column; if it has one, retrieve the current seed value from the corresponding table in database 1, drop the identity property on the database 2 table and update each ID to newID = currentID + seed (to avoid duplicate primary keys later on)
Generate an insert script (SMO's Table.EnumScript) for the database 1 table
Generate an insert script (SMO's Table.EnumScript) for the database 2 table
Execute every line of the database 1 insert script on the new database
Execute every line of the database 2 insert script (which now has primary key/identity field data that follows on from database 1) on the new database
Go to the next table
Everything was working when testing manually (disabling the identity property in SSMS, creating a T-SQL script to update every row with the given seed, ...).
But the problem now is automating this in C#, more specifically the disabling of the identity property. There doesn't seem to be a clean solution for this. Creating a new table and rebuilding every constraint seems like the wrong way to go, because the only reason I need it is to cascade every FK so everything still points to the correct place.
Another way would be to delay the updating of the identity-column data and change it after script generation but before insertion into the new database. But then I'd need to know which data points to which other data, while everything is still in strings (the insert script)?
Any suggestions, thoughts or techniques on how to handle this?
I know about Red Gate's SQL Compare, and it is indeed wonderful, but I need to program this myself.
Using: SMO, SQL Server 2005 - 2008 R2 (no Developer or Enterprise edition on client servers), ADO.NET, C#, .NET Framework 2.0, Visual Studio 2008
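For what it's worth, one hedged alternative to dropping the identity property on the target is to keep it and insert the offset IDs explicitly with SET IDENTITY_INSERT; a minimal sketch, assuming both databases sit on the same server and all names are placeholders:

    using System.Data.SqlClient;

    // Insert rows from database 2 with their IDs shifted by the seed taken from
    // database 1, without touching the identity property on the target table.
    // FK columns that reference Id need the same offset (or ON UPDATE CASCADE).
    // Assumes the connection is already open; all object names are placeholders.
    public static void InsertWithOffset(SqlConnection openTargetConnection, int seed)
    {
        using (var command = openTargetConnection.CreateCommand())
        {
            command.CommandText =
                "SET IDENTITY_INSERT dbo.Customers ON; " +
                "INSERT INTO dbo.Customers (Id, Name) " +
                "SELECT Id + @seed, Name FROM Database2.dbo.Customers; " +
                "SET IDENTITY_INSERT dbo.Customers OFF;";
            command.Parameters.AddWithValue("@seed", seed);
            command.ExecuteNonQuery();
        }
    }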
I am not sure exactly what you are trying to accomplish with your process here, but managing Database versions is something that I have a keen interest in.
Have a look at DBSourceTools ( http://dbsourcetools.codeplex.com ).
It is a utility to script an entire database to disk, including all foreign key constraints and data.
Using Deployment Targets, you will then be able to re-create these databases on another database server (usually local machine).
The tool will handle dependencies and large database tables using Sql Bulk insert - trying to generate a script with 50,000 insert statements will be a nightmare.
Have fun.
Disclaimer: I am involved in the http://dbsourcetools.codeplex.com project.
I have several installations of my LINQ-to-SQL app running in the field. Now I've created a new version, which adds a new column to a certain table. I've added this column in the dbml file, but when updating an installation I want to preserve the existing database. How do I handle this? LINQ-to-SQL doesn't seem to like this inconsistency.
Is there an easy way to update the existing database using my new dbml file?
You need to manage your database schema explicitly - that is to say, you should script the creation and update of the database schema so that it's repeatable. For the scenario you're describing, I think your application should (ideally) create and then update the database schema as required. The initial work to set this up isn't too hard, and once you have the system in place, making schema changes is straightforward.
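A minimal sketch of the "update as required" step, assuming the new column is called Notes on a table called Orders (both placeholders):

    using System.Data.SqlClient;

    // Idempotent schema upgrade: add the new column only if it is missing, so the
    // same code can run against both old and new installations unchanged.
    public static void EnsureNotesColumn(string connectionString)
    {
        const string sql =
            "IF COL_LENGTH('dbo.Orders', 'Notes') IS NULL " +
            "ALTER TABLE dbo.Orders ADD Notes NVARCHAR(255) NULL;";

        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand(sql, connection))
        {
            connection.Open();
            command.ExecuteNonQuery();
        }
    }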
I wrote this up (in terms of what has worked for me for a lot of years now) at some length here:
How to create "embedded" SQL 2008 database file if it doesn't exist?
That write-up probably ought to be modified to take advantage of this answer, which talks about using database extended properties:
SQL Server Database schema versioning and update