We have two systems: an Identity Management System (IAM) that handles authentication, and an application (say UserApp, a website) that users access. When a user registers, the user account is created in both the Identity Management System and the UserApp database, and the data must stay in sync between these two systems. So the current code does the following when a user registers:
The data is inserted into the database (using Entity Framework).
An account is created in the IAM using an API call.
Scenarios:
If the database insert fails, the API is not called.
If the database insert succeeds but the API call fails, we delete the record. The question is: what needs to be done if the delete also fails? Then the data is out of sync.
What is the best way to handle this? The application is developed in C# with SQL Server.
You could make use of database transactions. You could create a database connection and open it. The first statement should be BEGIN TRANSACTION. This means any subsequent SQL INSERTs/UPDATEs you execute won't be committed until you run the statement COMMIT TRANSACTION. If you want to roll back the transaction, you would call ROLLBACK TRANSACTION.
So you could:
Step 01: BEGIN TRANSACTION
Step 02: Perform the INSERT statement.
If the SQL statement succeeds, you know the database is up and accessible and this step has succeeded. It's just that the row has not been committed to the database yet.
Step 03: On success of the INSERT statement, call the API.
Step 04: If the API succeeds, then COMMIT TRANSACTION.
Step 05: If the API fails or there is an exception, then ROLLBACK TRANSACTION.
That way:
If the SQL statement fails in any way (DB down, T-SQL error, etc.), you exit early.
If the API call fails in any way, you exit early.
You only commit the SQL statement when both the INSERT and the API call succeed.
If the COMMIT fails
Now there might be a slim chance the COMMIT fails, due to power loss or a network outage at that second, etc. In that case you would need to call the API to remove/deactivate the user you just created.
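As a rough C# sketch of that flow (CreateIamUserAsync stands in for whatever wrapper you have around the IAM API; names and connection handling are placeholders, not your actual code):

using System;
using System.Data.SqlClient;
using System.Threading.Tasks;

public class RegistrationService
{
    private readonly string _connectionString;

    public RegistrationService(string connectionString)
    {
        _connectionString = connectionString;
    }

    public async Task RegisterAsync(string email)
    {
        using (var conn = new SqlConnection(_connectionString))
        {
            await conn.OpenAsync();

            using (var tran = conn.BeginTransaction()) // BEGIN TRANSACTION
            {
                try
                {
                    using (var cmd = new SqlCommand(
                        "INSERT INTO Users (Email) VALUES (@email)", conn, tran))
                    {
                        cmd.Parameters.AddWithValue("@email", email);
                        await cmd.ExecuteNonQueryAsync(); // row is pending, not yet committed
                    }

                    await CreateIamUserAsync(email); // hypothetical IAM API wrapper

                    tran.Commit(); // COMMIT TRANSACTION
                }
                catch
                {
                    // ROLLBACK TRANSACTION; guarded, since the connection may already be gone
                    try { tran.Rollback(); } catch { }

                    // If the COMMIT itself failed after the API call succeeded, a
                    // compensating IAM call to remove/deactivate the user is still needed.
                    throw;
                }
            }
        }
    }

    private Task CreateIamUserAsync(string email)
    {
        // Placeholder for the real IAM API call.
        throw new NotImplementedException();
    }
}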
I'm trying to shrink a LocalDB database with Visual Studio 2017 Community. I have a Win7 client Windows Forms application with a small database (~10 MB of data) that results in a 150 MB database file due to LocalDB free-space allocation.
I found this answer (Executing Shrink on SQL Server database using command from linq-to-sql) that suggests using the following code:
context.Database.ExecuteSqlCommand(
    "DBCC SHRINKDATABASE(@file)",
    new SqlParameter("@file", DatabaseTools.Instance.DatabasePathName)
);
DatabaseTools.Instance.DatabasePathName returns the filesystem location of my database from a singleton DatabaseTools class instance.
The code runs, but I keep getting this exception:
System.Data.SqlClient.SqlException: 'Cannot perform a shrinkdatabase operation inside a user transaction. Terminate the transaction and reissue the statement.'
I tried COMMIT before, but no success at all. Any idea on how to effectively shrink the database from C# code?
Thanks!
As the docs for ExecuteSqlCommand say, "If there isn't an existing local or ambient transaction a new transaction will be used to execute the command.".
This is what's causing your problem, as you cannot call DBCC SHRINKDATABASE in a transaction. Which isn't really surprising, given what it does.
Use the overload that allows you to pass a TransactionalBehavior and specify TransactionalBehavior.DoNotEnsureTransaction:
context.Database.ExecuteSqlCommand(
    TransactionalBehavior.DoNotEnsureTransaction,
    "DBCC SHRINKDATABASE(@file)",
    new SqlParameter("@file", DatabaseTools.Instance.DatabasePathName)
);
I am trying to use SqlDependency in my Windows application and have followed the steps described in How can I notify my program when the database has been updated? and http://dotnet.dzone.com/articles/c-sqldependency-monitoring
I have enabled Service Broker, set up the queue, and created a service on the queue:
ALTER DATABASE [Company] SET ENABLE_BROKER;
CREATE QUEUE ContactChangeMessages;
CREATE SERVICE ContactChangeNotifications
ON QUEUE ContactChangeMessages
([http://schemas.microsoft.com/SQL/Notifications/PostQueryNotification]);
The next step is to let the SQL user subscribe to query notifications. I understand I can provide the login user, which is sa (verified using the query SELECT * FROM sys.server_principals):
GRANT SUBSCRIBE QUERY NOTIFICATIONS TO sa;
But I am getting "Cannot find the user 'sa', because it does not exist or you do not have permission."
I have tried other users such as sysadmin to grant the permission to, but every time I got the same error. Then I read (http://ask.sqlservercentral.com/questions/7803/msg-15151-level-16-state-1-line-1-cannot-find-the.html) that the permission needs to be granted to a user and not to a login, which I did. So now I have granted the permission to 'public' and 'guest' and the SQL query executes successfully, but not to dbo ("Cannot grant, deny, or revoke permissions to sa, dbo, entity owner, information_schema, sys, or yourself.").
The application code in C# is not too complicated and I have followed the links provided at the beginning, so I am only including a simplified sketch of it below (I did of course change the queue name etc. in line with the SQL commands above). But the SqlDependency does not seem to be working when I change the table records (insert/delete).
Where am I going wrong? Is there any step I am missing?
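The wiring looks roughly like this (connection string, table, and column names are placeholders):

using System;
using System.Data.SqlClient;

class ContactWatcher
{
    // Placeholder connection string pointing at the Company database.
    const string Cs = @"Data Source=.;Initial Catalog=Company;Integrated Security=True";

    public static void Run()
    {
        // Uses the custom queue created above; the parameterless overload would
        // instead create a temporary queue and service of its own.
        SqlDependency.Start(Cs, "ContactChangeMessages");
        Subscribe();
        Console.ReadLine();
        SqlDependency.Stop(Cs, "ContactChangeMessages");
    }

    static void Subscribe()
    {
        using (var conn = new SqlConnection(Cs))
        // Notification queries need two-part names and an explicit column list (no SELECT *).
        using (var cmd = new SqlCommand("SELECT ContactId, Name FROM dbo.Contacts", conn))
        {
            var dep = new SqlDependency(cmd, "service=ContactChangeNotifications", 0);
            dep.OnChange += (s, e) =>
            {
                Console.WriteLine("Change: {0}, {1}, {2}", e.Info, e.Source, e.Type);
                Subscribe(); // a subscription fires only once; re-register after each event
            };

            conn.Open();
            using (cmd.ExecuteReader()) { } // executing the command registers the subscription
        }
    }
}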
Try making sa the owner of the database; that maps the sa login to dbo inside the database (so no separate GRANT is needed) and ensures the database has a valid owner, which Service Broker requires:
use [master]
go
alter authorization on database::[YourDatabaseName] to [sa]
go
I'm coding an M2M data capture system using SQL Server 2012 and .NET 4.5. The scenario is:
I have a remote data capture app, a web service, and a DB.
The app captures data and invokes the web service to upload the data to the DB.
The web service calls an "insert" stored proc to write the raw data directly into Table A; the web service then returns a value telling whether the insert was successful.
Now, a post-process stored proc needs to run after the insert process to update another table (Table B).
Previously I used a SQL Server Agent job, but since the required polling interval changed to less than 5 minutes, for efficiency and real-time reasons I want to avoid polling.
Ideally, I want the app to call the web service and get the return message/value; after that, the DB fires a stored proc to do the post-process work. That work may take longer, so the app shouldn't have to wait for all the processing to finish.
Can I fire the post-process stored proc from the DB side? The DB knows when the insert is done, and this saves communication from outside the DB.
Any suggestions?
You might think of using a trigger plus Service Broker. This way, the trigger sends a message to a queue, and Service Broker activation fires to process the message. That decouples your Table A update from the Table B update. If you use a trigger alone to update Table B, it will hold up your Table A insert until the Table B update finishes.
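As a rough T-SQL sketch of those moving parts (all object names are made up; poison-message handling, error handling, and the actual Table B logic are omitted, and Table A is assumed to have an Id key column):

-- Message plumbing: type, contract, queue, and service
CREATE MESSAGE TYPE PostProcessMsg VALIDATION = WELL_FORMED_XML;
CREATE CONTRACT PostProcessContract (PostProcessMsg SENT BY INITIATOR);
CREATE QUEUE dbo.PostProcessQueue;
CREATE SERVICE PostProcessService ON QUEUE dbo.PostProcessQueue (PostProcessContract);
GO

-- Activation proc: drains the queue and does the Table B work asynchronously
CREATE PROCEDURE dbo.ProcessPostQueue
AS
BEGIN
    DECLARE @h UNIQUEIDENTIFIER, @body XML;

    WHILE 1 = 1
    BEGIN
        SET @h = NULL;

        WAITFOR (RECEIVE TOP (1)
                     @h = conversation_handle,
                     @body = CAST(message_body AS XML)
                 FROM dbo.PostProcessQueue), TIMEOUT 1000;

        IF @h IS NULL BREAK;

        -- ... update Table B using the ids carried in @body ...

        END CONVERSATION @h;
    END;
END;
GO

ALTER QUEUE dbo.PostProcessQueue
    WITH ACTIVATION (STATUS = ON,
                     PROCEDURE_NAME = dbo.ProcessPostQueue,
                     MAX_QUEUE_READERS = 1,
                     EXECUTE AS OWNER);
GO

-- Trigger on Table A: only sends a message, so the insert returns immediately
CREATE TRIGGER dbo.trgTableA_AfterInsert ON dbo.TableA AFTER INSERT
AS
BEGIN
    IF NOT EXISTS (SELECT 1 FROM inserted) RETURN;

    DECLARE @h UNIQUEIDENTIFIER,
            @body XML = (SELECT Id FROM inserted FOR XML PATH('row'), ROOT('rows'));

    BEGIN DIALOG CONVERSATION @h
        FROM SERVICE PostProcessService
        TO SERVICE 'PostProcessService'
        ON CONTRACT PostProcessContract
        WITH ENCRYPTION = OFF;

    SEND ON CONVERSATION @h MESSAGE TYPE PostProcessMsg (@body);
END;
GO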
I'm updating a current program that is working and in use in a live environment. It saves Customers and Orders, then exports them to an old database as well. All of the reporting is still done in the old system while the reporting in the new system is in development, which is why these all need to be exported.
This program has a built-in C# TransactionManager that is used to group multiple calls from C# to SQL within one transaction. Whenever I try to duplicate this I get errors and can't get it working.
Here's the code that is in place, working:
using (ITransactionInfo trx = this.TransactionManager.BeginTransaction())
{
    //
    // Update the customer. If the customer doesn't exist, then create a new one.
    //
    this.SaveCustomer(Order);

    //
    // Save the Order.
    //
    this.Store.SaveCorporateOrder(Order, ServiceContext.UserId);

    //
    // Save the Order notes and the customer notes.
    //
    this.NotesService.AppendNotes(NoteObjectTypes.CorporateOrder, Order.Id, Order.OrderNotes);
    this.NotesService.AppendNotes(NoteObjectTypes.Customer, Order.Customer.Id, Order.CustomerNotes);

    //
    // Export the Order if it's new.
    //
    this.ExportOrder(Order, lastSavedVersion);

    //
    // Commit the transaction.
    //
    trx.Commit();
}
All of these functions just format the data and send parameters to Stored Procedures in the DB that perform the Select / Insert / Update operations on the DB.
The SaveCustomer stored procedure saves the customer to the new database.
The SaveCorporateOrder stored procedure gets information that was written by the SaveCustomer stored procedure and uses it to save the Order to the new database.
The ExportOrder stored procedure gets information that was written by both of the previous ones and exports the Order to the old database.
Each of these stored procedures contains code that starts a new transaction if @@TRANCOUNT = 0 and has a COMMIT statement at the end. It appears that none of these are used because of the transaction in C#, but there is no code that passes transaction or connection information to the stored procedures that I can see. This is working and in use on a SQL Server 2005 server.
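In outline, the pattern in each proc looks like this (a simplified sketch, not the actual procedures):

CREATE PROCEDURE dbo.SaveCustomer /* parameters elided */
AS
BEGIN
    DECLARE @startedTran BIT;
    SET @startedTran = 0;

    IF @@TRANCOUNT = 0
    BEGIN
        BEGIN TRANSACTION;   -- only open a transaction if the caller hasn't already
        SET @startedTran = 1;
    END;

    -- ... SELECT / INSERT / UPDATE work ...

    IF @startedTran = 1
        COMMIT TRANSACTION;  -- only commit a transaction this proc opened itself
END;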
When I try to build this and use it on my development environment that uses SQL 2008R2, I get errors like
"Uncommittable transaction is detected at the end of the batch"
and
"The server failed to resume the transaction"
It appears that each one is starting its own transaction and is unable to read the data from the previous, uncommitted transaction, instead of seeing that it is in the same transaction. I don't know whether the different SQL Server version could be causing this to behave differently, but the exact same code works in the live install and not in my dev environment.
Any ideas, or even direction on where to look next, would be wonderful!
Thanks!
-Jacob
I think the problem is that the transaction fails and is not rolled back. You don't have a rollback call for the situation where any of the SQL queries fail. Have you checked those queries?
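If the custom TransactionManager does not roll back automatically when disposed without a commit, an explicit guard along these lines might help (Rollback here is a hypothetical member of ITransactionInfo; check what your interface actually exposes):

using (ITransactionInfo trx = this.TransactionManager.BeginTransaction())
{
    try
    {
        this.SaveCustomer(Order);
        this.Store.SaveCorporateOrder(Order, ServiceContext.UserId);
        this.NotesService.AppendNotes(NoteObjectTypes.CorporateOrder, Order.Id, Order.OrderNotes);
        this.NotesService.AppendNotes(NoteObjectTypes.Customer, Order.Customer.Id, Order.CustomerNotes);
        this.ExportOrder(Order, lastSavedVersion);

        trx.Commit();
    }
    catch
    {
        trx.Rollback(); // hypothetical: roll back explicitly so a failed step
                        // doesn't leave an uncommittable transaction open
        throw;
    }
}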
I am trying to investigate a problem related to .NET ActiveRecord on SQL Server 2008.
We have a base repository class where we have methods defined for Saving and Updating entities, these methods generally call directly onto the ActiveRecordMediator.
We have a particular instance where, if we call ActiveRecordMediator.SaveAndFlush on one of our entities and then try to execute a stored proc that reads from the table we just saved to, the sproc will hang.
Looking at SQL Server, the table is locked, which is why it cannot be read. So my questions are:
Why is my SaveAndFlush locking the table?
How can I ensure the locking doesn't occur?
This application is running as an ASP.NET web site so I assume it is maintaining sessions on a request basis, but I cannot be sure.
I believe I have figured out why this was occurring.
NHibernate, when used in our environment, holds a transaction open for the entire request and only commits it when the session is disposed at the end.
Our sproc was not using the same transaction as NHibernate, which is why the locking occurred.
I have partially fixed the problem by wrapping the server-side save of the entity in a using block:
using (var ts = new TransactionScope(TransactionMode.New))
{
    ActiveRecordMediator.SaveAndFlush(value);
    ts.VoteCommit();
}
This way the entity will be saved and committed immediately.