We have an SQL Server with a lot of tables. In an effort to clean up the server we are trying to find which tables are actually being used. We have a list of tables on the server and now we would like to create a list of tables being used by our code.
We are using C# in Visual Studio 2013. Is there a way to compile a list of unused tables using Visual Studio? Or a list of tables used in the code; then we could compare it to our list of tables.
I noticed in the Data Model (.edmx), when you right click on the diagram field, then go to Update Model from Database, a list of Tables, Views, and Stored Procedures is displayed (in the Update Wizard). Unless I'm mistaken, this is a list of all the unused tables from our connected database. Is it possible to export this list somehow?
You probably want to do two things:
Set up a SQL Server Audit on the table(s) to see if any program does anything against the table, especially selects (which triggers cannot watch for). You may have to run this for a year to really be sure that it's totally unused.
Run sp_depends, or better, use sys.dm_sql_referencing_entities (table_name, 'OBJECT'), which will search through the entire database to see if any object refers to this table (a C# sketch that turns this into an exportable list follows the snippet below).
-- sp_msforeachtable substitutes ? with each table name, so this lists the referencing objects for every table in the database.
exec sp_msforeachtable 'SELECT referencing_schema_name, referencing_entity_name, referencing_id, referencing_class_desc, is_caller_dependent
FROM sys.dm_sql_referencing_entities (''?'', ''OBJECT'');'
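If you want something you can export and compare against your table inventory, a rough C# sketch along these lines should work (the connection string is a placeholder, and note that this only finds references inside the database itself; queries issued directly from application code will only show up in the audit):

using System;
using System.Collections.Generic;
using System.Data.SqlClient;

class UnreferencedTableFinder
{
    static void Main()
    {
        // Placeholder connection string; point it at the database you are cleaning up.
        string connectionString = "Server=.;Database=YourDatabase;Integrated Security=true";
        List<string> tables = new List<string>();
        List<string> unreferenced = new List<string>();

        using (SqlConnection conn = new SqlConnection(connectionString))
        {
            conn.Open();

            // Collect every user table, schema-qualified.
            using (SqlCommand cmd = new SqlCommand(
                "SELECT SCHEMA_NAME(schema_id) + '.' + name FROM sys.tables", conn))
            using (SqlDataReader reader = cmd.ExecuteReader())
            {
                while (reader.Read())
                    tables.Add(reader.GetString(0));
            }

            // A table with zero referencing entities is unused by views, procedures, functions, etc.
            foreach (string table in tables)
            {
                using (SqlCommand cmd = new SqlCommand(
                    "SELECT COUNT(*) FROM sys.dm_sql_referencing_entities(@name, 'OBJECT')", conn))
                {
                    cmd.Parameters.AddWithValue("@name", table);
                    if ((int)cmd.ExecuteScalar() == 0)
                        unreferenced.Add(table);
                }
            }
        }

        // Dump the candidates so they can be compared with the list of tables used by the C# code.
        foreach (string table in unreferenced)
            Console.WriteLine(table);
    }
}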
I've hit a wall when it comes to adding a new entity object (a regular SQL table) to the Data Context using LINQ-to-SQL. This isn't regarding the drag-and-drop method that is cited regularly across many other threads. This method has worked repeatedly without issue.
The end goal is relatively simple. I need to find a way to add a table that gets created during runtime via a stored procedure to the current Data Context of the LINQ-to-SQL dbml file. I'll then need to be able to use the regular LINQ query methods/extension methods (InsertOnSubmit(), DeleteOnSubmit(), Where(), Contains(), FirstOrDefault(), etc...) on this new table object through the existing Data Context. Essentially, I need to find a way to procedurally create the code that would otherwise be automatically generated when you use the drag-and-drop method during development (when the application isn't running), but have it generate this same code while the application is running, via command and/or event trigger.
More Detail
There's one table that gets used a lot and, over the course of an entire year, collects many thousands of rows. Each row contains a timestamp and this table needs to be divided into multiple tables based on the year that the row was added.
Current Solution (using one table)
Single table with tens of thousands of rows which are constantly queried against.
Table is added to Data Context during development using drag-and-drop, so there are no additional coding issues
Significant performance decrease over time
Goals (using multiple tables)
(Complete) While the application is running, use C# code to check if a table for the current year already exists. If it does, no action is taken. If not, a new table gets created using a stored procedure with the current year as a prefix on the table name (2017_TableName, 2018_TableName, 2019_TableName, and so on...).
(Incomplete) While the application is still running, add the newly created table to the active LINQ-to-SQL Data Context (the same code that would otherwise be added using drag-and-drop during development).
(Incomplete) Run regular LINQ queries against the newly added table.
Final Thoughts
Other than the above, my only remaining concern is how to write the C# code that references a table that may or may not already exist. Is it possible to use a variable in place of the standard 'DB_DataContext.2019_TableName' methodology in order to actually get the table's data into a UI control? Is there a way to simply create an Enumerable of all the tables whose names are prefixed with a year and then select the most current table?
From what I've read so far, the most likely solution seems to involve the use of a SQL add-on like SQLMetal or Huagati which (based solely on what I've read) will generate the code I need during runtime and update the corresponding dbml file. I have no experience using these types of add-ons, so any additional insight into these would be appreciated.
Lastly, I've seen some references to LINQ-to-Entities and/or LINQ-to-Objects. Would these be the components I'm looking for?
Thanks for reading through a rather lengthy first post. Any comments/criticisms are welcome.
The simplest way to achieve what you want is to redirect in SQL Server and leave your client code alone. At design time, create your L2S Data Context or EF DbContext referencing a database with only a single table. Then at run time, substitute a view or synonym for that table that points to the "current year" table.
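Here is a minimal sketch of the synonym variant, assuming the model is mapped to a synonym named dbo.YearlyData and that the stored procedure that creates a year's table is called dbo.CreateYearTable (both names are made up for illustration); run something like this at application startup, before the Data Context is used:

using System;
using System.Data.SqlClient;

static class YearlyTableRedirect
{
    public static void PointSynonymAtCurrentYear(string connectionString)
    {
        int year = DateTime.Now.Year;

        // Create the physical table for this year if it doesn't exist yet,
        // then re-point the synonym the LINQ-to-SQL model is mapped to.
        string sql = string.Format(@"
IF OBJECT_ID('dbo.[{0}_TableName]', 'U') IS NULL
    EXEC dbo.CreateYearTable @Year = {0};

IF OBJECT_ID('dbo.YearlyData', 'SN') IS NOT NULL
    DROP SYNONYM dbo.YearlyData;

CREATE SYNONYM dbo.YearlyData FOR dbo.[{0}_TableName];", year);

        using (SqlConnection conn = new SqlConnection(connectionString))
        using (SqlCommand cmd = new SqlCommand(sql, conn))
        {
            conn.Open();
            cmd.ExecuteNonQuery();
        }
    }
}

Because the Data Context still only knows about dbo.YearlyData, the existing LINQ queries, InsertOnSubmit() calls, and so on keep working unchanged; only the synonym's target moves from year to year.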
HOWEVER, this should not be necessary in the first place. SQL Server supports partitioning, so you can store all the data in physically separate data structures but have a single logical table. And SQL Server supports columnstore tables, which can compress and store many millions of rows with excellent performance.
I have an MVC4 + Entity Framework Database First application. I have made some changes to my local database (added a table and columns in a couple of tables). After this I updated my .edmx file and ran the custom tool. This updated the models of the tables whose schema I changed. Everything is working fine.
I want to know how to reflect those local database changes on my Test database.
Within VS you can compare the two databases and generate a change script that will bring them on par with each other.
In your case you want to do a schema compare.
Tools -> SQL Server -> New Schema Comparison.
You select your local database and then select the test database. It will compare the schemas of both and show you the differences. You can select which changes you want to apply and either apply them directly or generate a change script and execute it when you want from SSMS.
Compare and Synchronize Database Schemas
In SQL Server Management Studio you can export the database script to drop and recreate the tables, indexes, relationships, etc... You can also use Visual Studio to export the SQL script to create all the tables. The other way is to manually add the updated columns. I'm unsure of any other ways to do this in a database first approach.
Try SQL Server Data Tools for Visual Studio:
https://msdn.microsoft.com/en-us/mt186501.aspx
Very easy and intuitive to use for schema comparison and generating delta scripts.
I suggest you have a look at xSQL Schema Compare. It's a great tool for comparing and synchronizing database schemas. It should do the trick in your case. Also, this comes bundled with xSQL Data Compare which you can use to compare and synchronize the data as well, should the need to do so arise.
I'm writing a program in C# that will grab data from a staging table, and then insert that same data back into their new respective locations in a SQL Server database. The program will do the following steps sequentially:
Select columns from first row of staging table
Store each column as unique variable
Insert data into new respective locations in the database (each value is going to multiple different tables in the DB, and the values are duplicated between many of the tables)
Move to the next Record
Repeat from step 1 until all records have been processed
So is there a way to iterate through the entire record set, storing each result from a column as a unique variable without having to write separate queries for each value that you want to store? There are 51 columns that all have to go somewhere, and I didn't think it would be very efficient to hardcode 51 variables each with a custom query to the database.
I thought about doing this with a multidimensional array, but then that would just be one string with a ton of values. Any advice would be greatly appreciated.
Although you can do this through a .NET application, really this would be much easier to achieve with a SQL statement. SQL has good syntax for moving data between tables:
INSERT INTO [Destination] (Column1, Column2, ...)
SELECT Column1, Column2, ...
FROM [Source]
If you're moving data between databases on different servers, you just need to link one server to the other and then run the query. If you're using SQL Server Management Studio, you can follow this article to set up linked servers. Otherwise, you can use the sp_addlinkedserver procedure to register the linked server.
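If the orchestration still has to live in your C# program, you can have it issue that one set-based statement instead of looping over rows. A rough sketch, where the table and column names are placeholders for your real schema:

using System.Data.SqlClient;

static class StagingMover
{
    public static void MoveStagedRows(string connectionString)
    {
        // One set-based statement moves every column for every row at once; no per-row variables needed.
        // Use a separate column list for each destination table that needs the data.
        const string sql = @"
INSERT INTO dbo.DestinationTable (Column1, Column2, Column3)
SELECT Column1, Column2, Column3
FROM dbo.StagingTable;";

        using (SqlConnection conn = new SqlConnection(connectionString))
        using (SqlCommand cmd = new SqlCommand(sql, conn))
        {
            conn.Open();
            cmd.ExecuteNonQuery();
        }
    }
}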
You can create a class that contains a property for each column in your table and use a micro ORM like Dapper to populate a list of instances of those classes from your database. You can then iterate over the list and do your inserts to other tables.
You could even create other classes for your individual inserts and use AutoMapper to create instances of those from your source class.
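A rough sketch of that approach with Dapper (the class, table, and column names here are invented; substitute your real 51 columns):

using System;
using System.Collections.Generic;
using System.Data.SqlClient;
using System.Linq;
using Dapper;

public class StagingRow
{
    public int Id { get; set; }
    public string CustomerName { get; set; }
    public DateTime CreatedOn { get; set; }
    // ...one property per staging column, 51 in total
}

public static class StagingCopier
{
    public static void CopyStagingRows(string connectionString)
    {
        using (SqlConnection conn = new SqlConnection(connectionString))
        {
            conn.Open();

            // Dapper maps each row to a StagingRow by matching column names to property names.
            List<StagingRow> rows = conn.Query<StagingRow>("SELECT * FROM dbo.StagingTable").ToList();

            foreach (StagingRow row in rows)
            {
                // The row object feeds each destination insert; Dapper binds @CustomerName, @CreatedOn, etc. from its properties.
                conn.Execute(
                    "INSERT INTO dbo.Customers (Name, CreatedOn) VALUES (@CustomerName, @CreatedOn)",
                    row);
                // ...repeat for the other destination tables that need values from this row.
            }
        }
    }
}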
But... this might all be overkill for what you are trying to achieve.
I need to create a tool that is able to merge clients' production databases.
Usually these databases will have the same schema (I'll do some checks later on, but for now we'll assume they do). Filtering out duplicate data is something for later on too.
This needs to be done automatically (so no script generation via SSMS, etc.).
I've already had to start over a couple of times because I kept running into problems with things I didn't think of, so this time I wanted to ask you guys for advice before I begin all over again.
My current plan of action is:
Copy the schema from database 1 (later on I'll add some checks here for when the schemas are different).
Loop over all tables and set all foreign key updates to cascade, and set the order in which the table data needs to be inserted (so the tables containing the PKs first, then the tables holding the FKs).
Loop over every table in the correct order:
Check the database 2 table for an identity column; if there is one, retrieve the current seed value from the corresponding table in database 1, drop the identity property on the database 2 table, and update each ID to newID = currentID + seed (to avoid duplicate primary keys later on).
Generate the insert script (SMO's Table.EnumScript) for the database 1 table.
Generate the insert script (SMO's Table.EnumScript) for the database 2 table.
Execute every line of the database 1 insert script on the new database.
Execute every line of the database 2 insert script (which now has primary key/identity field data that will follow the ones in database 1) on the new database.
Go to the next table.
Everything was working when testing (disabling the identity property in SSMS, creating a T-SQL script to update every row with the given seed, etc.).
But the problem now is automating this in C#, more specifically the disabling of the identity property. There doesn't seem to be a clean solution for this. Creating a new table and rebuilding every constraint etc. seems like the wrong way, because the only reason I need it is to cascade every FK so everything still points to the correct place.
Another way would be to delay the updating of the identity-column data and change it after script generation and before insertion into the new database. But then I'd need to know which data points to which other data, while everything is still in strings (the insert script)?
Any suggestions, thoughts, or techniques on how to handle this?
I know about Red Gate's SQL Compare, and it is indeed wonderful, but I need to program this myself.
Using: SMO, SQL Server 2005-2008 R2 (no Developer or Enterprise edition on client servers), ADO.NET, C#, .NET Framework 2.0, Visual Studio 2008.
I am not sure exactly what you are trying to accomplish with your process here, but managing Database versions is something that I have a keen interest in.
Have a look at DBSourceTools ( http://dbsourcetools.codeplex.com ).
It is a utility to script an entire database to disk, including all foreign key constraints and data.
Using Deployment Targets, you will then be able to re-create these databases on another database server (usually local machine).
The tool will handle dependencies and large database tables using SQL bulk insert; trying to generate a script with 50,000 insert statements will be a nightmare.
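If you do end up writing the data-move part yourself in C#, SqlBulkCopy (in ADO.NET since .NET 2.0) is the programmatic equivalent of that bulk insert. A rough sketch, with placeholder connection strings:

using System.Data.SqlClient;

static class TableCopier
{
    public static void CopyTable(string sourceConnectionString, string targetConnectionString, string tableName)
    {
        using (SqlConnection source = new SqlConnection(sourceConnectionString))
        using (SqlConnection target = new SqlConnection(targetConnectionString))
        {
            source.Open();
            target.Open();

            SqlCommand select = new SqlCommand("SELECT * FROM " + tableName, source);
            using (SqlDataReader reader = select.ExecuteReader())
            // KeepIdentity writes the explicit identity values from the source instead of generating new ones.
            using (SqlBulkCopy bulk = new SqlBulkCopy(target, SqlBulkCopyOptions.KeepIdentity, null))
            {
                bulk.DestinationTableName = tableName;
                bulk.BulkCopyTimeout = 0; // no timeout for large tables
                bulk.WriteToServer(reader);
            }
        }
    }
}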
Have fun.
Disclaimer: I am involved in the http://dbsourcetools.codeplex.com project.
Hopefully I can explain what I am trying to do.
I am writing a system to take data stored in SharePoint lists and push it into SQL tables. This is being done so the data from the lists can be joined with other data and reported on.
I need the system to be quite flexible, so I want to store the mapping between the lists and SQL and then create any of the SQL that is missing.
So I would first have to check if the SQL table I want exists and, if not, create it. Then check all the columns I expect, create any missing ones, and then populate the table with the list data.
Getting the list data is no problem for me, and it isn't a problem for me to store my configuration information.
My issue is I'm not sure what .NET features to use when talking to the database. I was looking into the Entity Framework and LINQ, but these seem to need fixed tables, which I don't have.
I am also looking at using the Enterprise Library (4.1), as I use it for event logging.
Ideally what I want to be able to do is build a DataTable and then "compare" it to a SQL table and have the system update the table as required.
Does anything like this exist, and what approach would you use?
These may help get you started:
Codeplex - SPListSync
Synchronize information with other lists or SQL Server tables based on a linked column. This can be helpful when you have a list with companies and another list with contacts. The company information (e.g. business phone and address) can be copied to the linked contacts.
Exporting Data from SharePoint 2007 Lists to SQL Server via SSIS
SO - Easiest way to extract SharePoint list data to a separate SQL Server table?
Commercial
Simego - Data Synchronisation Studio
AxioWorks SQList
You need to study SQL Server Management Objects (SMO) a bit; through it you can interact with SQL Server directly and very easily. You can create a new table, a stored procedure, etc., and also check for the pre-existence of any object.
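For example, here is a rough SMO sketch (the server, database, table, and column names are made up; you would drive them from your stored mapping) that creates a missing table and adds a missing column:

using Microsoft.SqlServer.Management.Common;
using Microsoft.SqlServer.Management.Smo;

static class ListTableBuilder
{
    public static void EnsureTable()
    {
        Server server = new Server(new ServerConnection("localhost"));
        Database db = server.Databases["ReportingDb"];

        Table table = db.Tables["SharePointContacts"];
        if (table == null)
        {
            // Table doesn't exist yet: define its columns and create it.
            table = new Table(db, "SharePointContacts");
            table.Columns.Add(new Column(table, "Id", DataType.Int));
            table.Columns.Add(new Column(table, "Title", DataType.NVarChar(255)));
            table.Create();
        }
        else if (table.Columns["Title"] == null)
        {
            // Table exists but is missing an expected column: add it and push the change.
            table.Columns.Add(new Column(table, "Title", DataType.NVarChar(255)));
            table.Alter();
        }
    }
}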
Talking to the database like this was never so easy...