Are WAMS tables decoratable? - c#

With local embedded SQLite table classes, one can decorate columns with attributes such as PrimaryKey, AutoInc, MaxLength(NN), and Indexed. It would seem that the more big-metal-ish WAMS (née SQL Azure) tables, being, as they are, basically MS SQL Server tables, would provide that capability too.
But the only thing that I see that's possible from the WAMS portal is to "Set Index" on the selected column.
Is it possible now to decorate columns with additional attributes (something I'm unaware of), or is this a feature planned for the future?

Better late than never. I'm not sure if it's a coming feature or not; however, with the database in SQL Azure (or SQL Database), you can certainly manipulate the underlying table. I'd be cautious in doing so to make sure you don't make breaking changes, but you could certainly add indexes and constraints to a column.
Also, something I've done is make use of stored procedures and functions, and then call them from the WAMS script.
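For example, connecting to the underlying database with SSMS, plain T-SQL works against the table backing the service. A minimal sketch, assuming a hypothetical todoitem table with userId and externalId columns:

-- Add a non-clustered index (roughly what "Set Index" does in the portal).
CREATE NONCLUSTERED INDEX IX_todoitem_userId ON todoitem (userId);

-- Add a uniqueness constraint, which the portal does not expose.
ALTER TABLE todoitem
    ADD CONSTRAINT UQ_todoitem_externalId UNIQUE (externalId);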

Related

Vendor-agnostic way of retrieving the schema of a database table

I'm currently building an application at work where users get to define various ways in which pieces of data are routed to various storage technologies. Those include traditional relational database systems.
We'd like to give feedback to users if the way they've configured this does not work with the defined database schema, i.e. if the column types don't match.
I've been looking for a solid vendor-agnostic way of retrieving the datatypes of a database table, ideally including the CLR types they map to.
So far I've been struggling to find anything even remotely decent. Many of the solutions I stumbled upon are not vendor-agnostic, and much of the tooling regarding database technologies included in .NET (Core) is specific to SQL Server.
The most popular way seems to be via the GetSchema method on a DbConnection object, but that one is also riddled with implementation-specific details and does not give a very pleasant result to work with. I've been able to retrieve textual representations of each of the types, and for Postgres, for example, the closest I've come is actual human-readable descriptions of the types: VARCHAR was displayed as "Varying length character string", which is hard to parse.
Most database interaction libraries for .NET (Core) abstract away the primitives like DataSet, DataTable, DataReader, etc., and usually map directly to objects, thereby removing any use I could have had for them.
What is the easiest way to get an overview of a table schema?
For clarity's sake, we're looking to support the following database technologies for now:
SQL Server
PostgreSQL
MySQL / MariaDB
SQLite
Oracle RDBMS
Thanks!
This sounds like something you would have to pay for, if it even exists, because it is such a narrow use case. I have a hard time believing this would be a maintained open-source project.
That said, maybe you can get around it by querying the database directly using something like this:
select *
from INFORMATION_SCHEMA.COLUMNS
where TABLE_NAME='tableName'
Taken from https://stackoverflow.com/a/18298685/1387545
I checked, and it seems to work for at least the first two databases. I think finding some kind of SQL query is your best bet for a generic solution, since SQL is the technology they share.
But then again, I think you will obtain a better result by building your own specific parser for the database tables of each database. It all depends on time and budget, of course.
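For what it's worth, the INFORMATION_SCHEMA route covers SQL Server, PostgreSQL and MySQL/MariaDB from the list above; the other two engines have close equivalents. A sketch (identifier casing follows each engine's convention):

-- SQLite: the same column information comes from a pragma.
PRAGMA table_info('tableName');

-- Oracle: use the data dictionary views instead.
SELECT COLUMN_NAME, DATA_TYPE, DATA_LENGTH
FROM ALL_TAB_COLUMNS
WHERE TABLE_NAME = 'TABLENAME';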

How to find any missing columns, constraints, indexes on a database as compared to another one

I have a C#.NET Windows-based application that uses a database in Microsoft SQL Server 2008. During deployment for the very first time to our clients, we create a copy of our database and deploy it on the client's remote server along with the UI application. The client database can be on SQL Server 2005 and higher.
Over time the UI application and the associated database have gone through lots of changes. Since this is a thick-client application, the clients' databases are not in sync with our latest database, and unfortunately no one ever made notes of all the changes done. So my challenges are as follows:
How to find any missing columns on a database table in the client's database as compared to my database, if any?
How to find any missing primary/unique constraints on a database table in the client's database as compared to my database, if any?
How to find any missing indexes on a database table in the client's database as compared to my database, if any?
Please keep in mind that the clients' database sizes may range from 10-100 GB, so I cannot just drop all client tables and recreate them.
You can use Data-tier applications. It's a built-in feature of SQL Server, so you don't need to use any extra tools.
You can extract data-tier application from your database (in SSMS right-click -> Tasks -> Extract data-tier application) to a DACPAC file, copy the file to the client's server and use it to upgrade the DB there (or generate update script).
It also integrates nicely with SQL Server Data Tools.
For this task, you need software that compares SQL databases. Just as there is a lot of software to compare text, there is a lot to compare databases.
Personally, I use AdeptSQLDiff, but there are a bunch. RedGate has developed one as well, and I know others exist. Just type "SQL database compare" into Google to find them. You can probably get the job done with the trial period.
These tools show you which tables were added, deleted or changed. They do the same for views, indexes, triggers, stored procedures, user-defined functions, and constraints. More importantly, they generate scripts to push the modifications into the target database. Very handy, but have a look at the generated script; it sometimes messes things up by deleting data, though that can be fixed very easily.
There is also the option to compare data in a specific table if you need to.
With SQL Server Management Studio, you can try selecting a database and then Tasks -> Generate Scripts, selecting the appropriate options.
Do the same thing for the two DBs you want to compare. You will get two text files you can compare with a text-comparison tool.
The comparison will highlight differences in the DB structure.
Not the best way to do it, of course, but it can be a start. If the two DBs are not too different, you should be able to handle the differences.
A better option is to use some DB-comparer software. They are meant to compare DB structure, constraints, indexes and so on. I've never used any of them, so I cannot give any advice on that.
If it is a one-time thing, use any diff tool for DBs; VS2010+ has a built-in one that lets you get the differences for schema and data in two different files.
If you want to solve the problem in your development process, you have a wide range of options for implementing database versioning.
If you are using EF, use Migrations; can't beat that.
If you are only on SQL Server and never looking at other RDBMSs, check out DAC (Data-tier Applications, mentioned by Jakub).
Otherwise take a look at more generic solutions; among them I would recommend DbUp, and if Python code is good for you, check Alembic, which allows you to write your migrations using a really nice Python API.
If nothing works for you, create a snapshot of the current DB schema and start writing differential scripts that you can use with a self-written tool or DbUp.
I am not sure if this can help, but who knows.
So is there any way to restore the server database in your local environment? If the answer is yes, you can try joining the system views for each database and comparing them.
I propose something like this (it was a quick solution, so please excuse the formatting and other common stuff).
USE [master]
GO

-- Gather column metadata from the local database.
-- Note: 'LocalTable' and 'ServerTable' below are database names (db.sys.columns).
SELECT
    LocalDataBaseTable.name AS TableName,
    LocalDataBaseTableColumns.name AS [Column],
    LocalDataBaseTypes.name AS DataType,
    LocalDataBaseTableColumns.max_length,
    LocalDataBaseTableColumns.[precision]
INTO #tmpLocalInfo
FROM LocalTable.sys.columns AS LocalDataBaseTableColumns
INNER JOIN LocalTable.sys.tables AS LocalDataBaseTable
    ON LocalDataBaseTableColumns.object_id = LocalDataBaseTable.object_id
INNER JOIN LocalTable.sys.types AS LocalDataBaseTypes
    ON LocalDataBaseTypes.user_type_id = LocalDataBaseTableColumns.user_type_id

-- Gather the same metadata from the server database.
SELECT
    ServerDataBaseTable.name AS TableName,
    ServerDataBaseTableColumns.name AS [Column],
    ServerDataBaseTypes.name AS DataType,
    ServerDataBaseTableColumns.max_length,
    ServerDataBaseTableColumns.[precision]
INTO #tmpServerInfo
FROM ServerTable.sys.columns AS ServerDataBaseTableColumns
INNER JOIN ServerTable.sys.tables AS ServerDataBaseTable
    ON ServerDataBaseTableColumns.object_id = ServerDataBaseTable.object_id
INNER JOIN ServerTable.sys.types AS ServerDataBaseTypes
    ON ServerDataBaseTypes.user_type_id = ServerDataBaseTableColumns.user_type_id

-- Columns that exist on the server but are missing locally.
SELECT #tmpServerInfo.*
FROM #tmpLocalInfo
RIGHT OUTER JOIN #tmpServerInfo
    ON #tmpLocalInfo.TableName = #tmpServerInfo.TableName COLLATE DATABASE_DEFAULT
    AND #tmpLocalInfo.[Column] = #tmpServerInfo.[Column] COLLATE DATABASE_DEFAULT
WHERE #tmpLocalInfo.[Column] IS NULL

DROP TABLE #tmpLocalInfo
DROP TABLE #tmpServerInfo
This will return all the information about missing columns in your local database. The idea is to investigate the 'sys' views and find out if there is a suitable solution for you.
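The same pattern extends to the other sys views. For example, a sketch comparing indexes by table and index name only, reusing the 'ServerTable'/'LocalTable' database names from the script above:

-- Indexes that exist in the server database but not in the local one.
SELECT t.name COLLATE DATABASE_DEFAULT AS TableName,
       i.name COLLATE DATABASE_DEFAULT AS IndexName
FROM ServerTable.sys.indexes AS i
INNER JOIN ServerTable.sys.tables AS t ON i.object_id = t.object_id
WHERE i.name IS NOT NULL  -- heaps have a NULL index name
EXCEPT
SELECT t.name COLLATE DATABASE_DEFAULT,
       i.name COLLATE DATABASE_DEFAULT
FROM LocalTable.sys.indexes AS i
INNER JOIN LocalTable.sys.tables AS t ON i.object_id = t.object_id
WHERE i.name IS NOT NULL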
You can use this simple script, which shows you the differences between tables, views, indexes etc.
Compalex is a free lightweight script to compare two database schemas. It supports MySQL, MS SQL Server and PostgreSQL.
Or look at this question: Compare two MySQL databases. That question is about comparing two MySQL schemas, but some of the listed tools support MSSQL or have an MSSQL version (for example http://www.liquibase.org/).
Another answer: What is the best tool to compare two SQL Server databases (schema and data)?

upsert data for a list of objects to database

Oftentimes, I find myself needing to send a user-updated collection of records to a stored procedure. For example, let's say there is a contacts table in the database. On the front end, I display, let's say, 10 contact records for the user to edit. The user makes his changes and hits save.
At that point, I can either call my upsertContact stored procedure 10 times in a loop with the user-modified data, or send XML formatted as <contact><firstname>name</firstname><lastname>lname</lastname></contact> with all 10 together to the stored procedure. I always end up doing XML.
Is there any better way to accomplish this? Is the XML method going to break if there is a large number of records, due to size? If so, how do people achieve this kind of functionality?
FYI, it is usually not just a direct table update, so I have not looked into SqlDataSource.
Edit: Based on the request, the version so far has been SQL Server 2005, but we are upgrading to 2008 now. So any new features are welcome. Thanks.
Update: Based on this article and the feedback below, I think table-valued parameters are the best approach to choose. Also, the new MERGE functionality of SQL 2008 is really cool with TVPs.
What version of SQL Server? You can use table-valued parameters in SQL Server 2008+ ... they are very powerful even though they are read-only, and they are going to be less hassle than XML and less trouble than converting to an ORM (IMHO). Hit up the following resources:
MSDN : Table-Valued Parameters:
http://msdn.microsoft.com/en-us/library/bb510489%28SQL.100%29.aspx
Erland Sommarskog's Arrays and Lists in SQL Server 2008 / Table-Valued Parameters:
http://www.sommarskog.se/arrays-in-sql-2008.html#TVP_in_TSQL
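To make that concrete, here is a minimal sketch of the T-SQL side. The type and procedure names are hypothetical, and it assumes a dbo.Contacts table with an identity ContactId:

CREATE TYPE dbo.ContactType AS TABLE
(
    ContactId INT NULL,          -- NULL for new rows
    FirstName NVARCHAR(50),
    LastName  NVARCHAR(50)
);
GO
CREATE PROCEDURE dbo.UpsertContacts
    @Contacts dbo.ContactType READONLY
AS
BEGIN
    -- One set-based statement upserts the whole batch.
    MERGE dbo.Contacts AS target
    USING @Contacts AS source
        ON target.ContactId = source.ContactId
    WHEN MATCHED THEN
        UPDATE SET FirstName = source.FirstName,
                   LastName  = source.LastName
    WHEN NOT MATCHED THEN
        INSERT (FirstName, LastName)
        VALUES (source.FirstName, source.LastName);
END

From C#, you pass a DataTable (or an IEnumerable<SqlDataRecord>) as a SqlParameter with SqlDbType.Structured and TypeName set to "dbo.ContactType".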
I would think directly manipulating XML in the database would be more trouble than it is worth; I would suggest instead making each call separately, as you suggest: 10 calls to save each contact.
There are benefits and drawbacks to that approach; obviously, you're having to create the database connection. However, you could simply queue up a bunch of commands to send on one connection.
The SQL Server XML datatype has the same 2 GB size limit as VARCHAR(MAX), so it would take a really large changeset to cause it to break.
I have used a similar method in the past when saving XML requests and responses and found no issues with it. Not sure if it's the "best" solution, but "best" is always relative.
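For reference, shredding such an XML parameter inside the procedure is straightforward. A sketch, assuming the contacts are wrapped in a <contacts> root element; the procedure name is hypothetical:

CREATE PROCEDURE dbo.UpsertContactsXml
    @Contacts XML
AS
BEGIN
    -- Turn the XML into a rowset; a normal UPDATE/INSERT (or MERGE) can then be applied.
    SELECT c.value('(firstname)[1]', 'NVARCHAR(50)') AS FirstName,
           c.value('(lastname)[1]',  'NVARCHAR(50)') AS LastName
    FROM @Contacts.nodes('/contacts/contact') AS T(c);
END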
It sounds like you could use an object-relational mapping (ORM) solution like NHibernate or the Entity Framework. These solutions give you the ability to make changes to objects and have the changes propagated to the database by the ORM provider. This makes them much more flexible than issuing your own SQL statements to the database. They also make optimizations like sending all changes in a single transaction over a single connection.

Why creating Tables in run-time (code behind) is bad?

People suggest that creating database tables dynamically (or at run time) should be avoided, saying it is bad practice and will be hard to maintain.
I don't see why, and I don't see any difference between creating a table and any other SQL query/statement such as SELECT or INSERT. I wrote apps that create, delete and modify databases and tables at run time, and so far I have not seen any performance issues.
Can anyone explain the cons of creating databases and tables at run time?
Tables are much more complex entities than rows, and managing table creation is much more complex than an insert, which has to abide by an existing model: the table. True, a table create statement is a standard SQL operation, but depending on creating tables dynamically smacks of a bad design decision.
Now, if you just create one or two and that's it, or an entire database dynamically, or from a script once, that might be ok. But if you depend on having to create more and more tables to handle your data you will also need to join more and more and query more and more. One very serious issue I encountered with an app that made use of dynamic table creation is that a single SQL Server query can only involve 255 tables. It's a built-in constraint. (And that's SQL Server, not CE.) It only took a few weeks in production for this limit to be reached resulting in a nonfunctioning application.
And if you get into editing the tables, e.g. adding/dropping columns, then your maintenance headache gets even worse. There's also the matter of binding your db data to your app's logic. Another issue is upgrading production databases. This would really be a challenge if a db had been growing with objects dynamically and you suddenly needed to update the model.
When you need to store data in such a dynamic manner the standard practice is to make use of EAV models. You have fixed tables and your data is added dynamically as rows so your schema does not have to change. There are drawbacks of course but it's generally thought of as better practice.
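A minimal EAV sketch (all table and column names here are illustrative):

CREATE TABLE Entity
(
    EntityId   INT IDENTITY PRIMARY KEY,
    EntityType VARCHAR(50) NOT NULL
);

CREATE TABLE EntityAttribute
(
    EntityId       INT NOT NULL REFERENCES Entity (EntityId),
    AttributeName  VARCHAR(100) NOT NULL,
    AttributeValue NVARCHAR(MAX) NULL,
    PRIMARY KEY (EntityId, AttributeName)
);

-- "Adding a column" is now just inserting a row; the schema never changes.
INSERT INTO EntityAttribute (EntityId, AttributeName, AttributeValue)
VALUES (1, 'Color', 'Red');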
KMC, remember the following points:
What if you want to add or remove a column? You may need to change the code and compile it again.
What if the database location changes?
Developers who are not very good at databases can make changes; if you create the schema on the back end, DBAs can take care of it.
If you get any performance issues, it may be tough to debug.
You will need to be a little clearer about what you mean by "creating tables".
One reason to not allow the application to control table creation and deletion is that this is a task that should be handled only by an administrator. You don't want normal users to have the ability to delete whole tables.
Temporary tables are a different story, and you may need to create temporary tables as part of your queries, but your basic database structure should be managed only by someone with the rights to do so.
Sometimes creating tables dynamically is not the best option security-wise (Google "SQL injection"), and it would be better to use stored procedures and have your insert or update operations occur at the database level by executing the stored procedures in code.

Keeping a history of data changes in database

Every change to the data in some row of the database should save the previous row data in some kind of history, so the user can roll back to a previous state of the row. Is there any good practice for that approach? I tried DataContract serialization and deserialization of data objects, but it becomes a little messy with complex objects.
So to be more clear:
I am using NHibernate for data access and want to stay free of database dependency (for testing I'm using SQL Server 2005).
My intention is to provide a data history so the user can roll back to a previous version at any time.
An example of usage would be the following:
I have a news article
Somebody makes some changes to that article
The main editor sees that the article has some typos
He decides to roll back to the previous valid version (until the newest version is corrected)
I hope I gave you valid info.
Tables that store changes when the main table changes are called audit tables. You can do this multiple ways:
In the database using triggers: I would recommend this approach because then there is no way that data can change without a record being made. You have to account for 3 types of changes when you do this: Add, Delete, Update. Therefore you need trigger functionality that will work on all three.
Also remember that a transaction can modify multiple records at the same time, so you should work with the full set of modified records, not just the last record (as most people belatedly realize).
Control will not be returned to the calling program until the trigger execution is completed, so you should keep the code as light and as fast as possible. A minimal trigger sketch follows the table structures below.
In the middle layer using code: This approach will let you save changes to a different database and possibly take some load off the database. However, a SQL programmer running an UPDATE statement will completely bypass your middle layer and you will not have an audit trail.
Structure of the Audit Table
You will have the following columns:
Autonumber PK, TimeStamp, ActionType + All columns from your original table
and I have done this in the following ways in the past:
Table Structure:
Autonumber PK, TimeStamp, ActionType, TableName, OriginalTableStructureColumns
This structure will mean that you create one audit table per data table saved. The data save and reconstruction is fairly easy to do. I would recommend this approach.
Name Value Pair:
Autonumber PK, TimeStamp, ActionType, TableName, PKColumns, ColumnName, OldValue, NewValue
This structure will let you save any table, but you will have to create name value pairs for each column in your trigger. This is very generic, but expensive. You will also need to write some views to recreate the actual rows by unpivoting the data. This gets to be tedious and is not generally the method followed.
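Putting the trigger approach together with the first (recommended) table structure, a minimal sketch for one table might look like this; dbo.Articles and dbo.Articles_Audit are hypothetical names:

CREATE TRIGGER trg_Articles_Audit
ON dbo.Articles
AFTER INSERT, UPDATE, DELETE
AS
BEGIN
    SET NOCOUNT ON;

    -- Rows in 'inserted' come from an INSERT ('I') or the new side of an UPDATE ('U').
    INSERT INTO dbo.Articles_Audit (AuditTime, ActionType, ArticleId, Title, Body)
    SELECT GETDATE(),
           CASE WHEN EXISTS (SELECT 1 FROM deleted) THEN 'U' ELSE 'I' END,
           i.ArticleId, i.Title, i.Body
    FROM inserted AS i;

    -- Rows only in 'deleted' come from a DELETE ('D').
    INSERT INTO dbo.Articles_Audit (AuditTime, ActionType, ArticleId, Title, Body)
    SELECT GETDATE(), 'D', d.ArticleId, d.Title, d.Body
    FROM deleted AS d
    WHERE NOT EXISTS (SELECT 1 FROM inserted);
END

Note that it works on the full inserted/deleted sets, per the point about multi-row transactions above.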
Microsoft has introduced new auditing capabilities in SQL Server 2008. Here's an article describing some of the capabilities and design goals, which might help with whichever approach you choose.
MSDN - Auditing in SQL Server 2008
You can use triggers for that.
Here is one example.
AutoAudit is a SQL Server (2005, 2008) code-gen utility that creates audit trail triggers with:
* Created, Modified, and RowVersion (incrementing INT) columns on the table
* a view to reconstruct deleted rows
* a UDF to reconstruct row history
* a schema audit trigger to track schema changes
* re-code-gen of triggers when ALTER TABLE changes the table
http://autoaudit.codeplex.com/
Saving serialized data always gets messy in the end; you're right to stay away from that. The best thing to do is to create a parallel "version" table with the same columns as your main table.
For instance, if you have a table named "book", with columns "id", "name", "author", you could add a table named "book_version" with columns "id", "name", "author", "version_date", "version_user"
Each time you insert or update a record in the table "book", your application will also insert a row into "book_version", as sketched below.
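For example, the update path might first snapshot the current row. A sketch, with @id, @name, @author and @user as placeholders:

-- Copy the current row into the version table before overwriting it.
INSERT INTO book_version (id, name, author, version_date, version_user)
SELECT id, name, author, GETDATE(), @user
FROM book
WHERE id = @id;

UPDATE book
SET name = @name, author = @author
WHERE id = @id;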
Depending on your database system and the way you access the database from your application, you may be able to completely automate this (cf. the Versionable plugin in Doctrine).
One way is to use a DB which supports this natively, like HBase. I wouldn't normally suggest "Change your DB server to get this one feature," but since you don't specify a DB server in your question I'm presuming you mean this as open-ended, and native support in the server is one of the best implementations of this feature.
What database system are you using? If you're using an ACID (atomicity, consistency, isolation, durability) compliant database, can't you just use the inbuilt rollback facility to go back to a previous transaction?
I solved this problem very nicely by using NHibernate.Envers.
For those interested, read this:
http://nhforge.org/blogs/nhibernate/archive/2010/07/05/nhibernate-auditing-v3-poor-man-s-envers.aspx
