I want to implement the following interface in a 2-tier application running against MS SQL Server 2008 R2 (i.e. no app server in between):
interface ILicense {
    void Acquire(string license);
    void Release(string license);
}
However, I want to release the license even if the application is killed or bombs out without calling the Release method. I also want to avoid using a timer which refreshes the license every minute or so.
So I thought: use a dedicated SqlConnection together with the sp_getapplock and sp_releaseapplock stored procedures, because that's what they seem to be made for. Then I found out that these SPs only work from within a transaction, so I would need to keep the transaction open the whole time (i.e. while the application is running). Anyway, it works that way. The application starts, opens the connection, starts the transaction, and locks the license.
When the application terminates, the connection is closed, everything is rolled back and the license is released. Super.
Whenever the running app needs to switch licenses (e.g. for another module), it calls Release on the old license and then Acquire on the new one. Cool.
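For reference, a minimal sketch of that flow, assuming a dedicated connection that lives as long as the license is held (class and member names are illustrative, not from my actual code; connectionString is assumed):

using System;
using System.Data;
using System.Data.SqlClient;

class SqlApplockLicense
{
    private readonly SqlConnection _connection;
    private SqlTransaction _transaction;

    public SqlApplockLicense(string connectionString)
    {
        _connection = new SqlConnection(connectionString);
        _connection.Open();
    }

    public void Acquire(string license)
    {
        // The transaction owns the applock; it stays open until Release (or until the process dies).
        _transaction = _connection.BeginTransaction();
        using (var cmd = _connection.CreateCommand())
        {
            cmd.Transaction = _transaction;
            cmd.CommandType = CommandType.StoredProcedure;
            cmd.CommandText = "sp_getapplock";
            cmd.Parameters.AddWithValue("@Resource", license);
            cmd.Parameters.AddWithValue("@LockMode", "Exclusive");
            cmd.Parameters.AddWithValue("@LockOwner", "Transaction");
            cmd.Parameters.AddWithValue("@LockTimeout", 0);
            var result = cmd.Parameters.Add("@ReturnValue", SqlDbType.Int);
            result.Direction = ParameterDirection.ReturnValue;
            cmd.ExecuteNonQuery();
            if ((int)result.Value < 0)
                throw new InvalidOperationException("License already in use: " + license);
        }
    }

    public void Release(string license)
    {
        // Rolling back (or losing the connection) ends the owning transaction and releases the applock.
        if (_transaction != null)
        {
            _transaction.Rollback();
            _transaction = null;
        }
    }
}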
Now to my question(s):
Is it acceptable to keep an open (uncommitted) transaction on a separate connection for a long time?
Are there better ways to implement such a 'lock' mechanism? The problem is that the license must be released even if the application terminates unexpectedly. I thought of some sort of 'logout' trigger, but that does not exist in SQL Server 2008 R2.
I am by no means the SQL or DB guru that some of the members of this site are, but your setup brings up a few concerns and things to consider.
This could really limit the number of concurrent users your application can have, especially in a 2-tier architecture. In a 3-tier approach the app server would manage and pool these connections/transactions, but then you would lose the ability to use those stored procs to implement your licensing mechanism, I believe.
With the transaction being open for an indeterminate period of time, I would worry about the possibility of tempdb growing too big or exceeding the space allocated to it. I don't know what is going on in the app and whether there is anything else happening in that transaction; my guess is no, but I thought I would mention it.
I hope I am not getting my SQL versions mixed up here, but transaction wraparound could cause the DB to shut down.
This limits your app significantly, as the data touched in the transaction holds locks that won't be released until you commit or roll back.
There must be a more elegant way to implement a licensing model that doesn't rely on leaving a transaction open for the life of the app or app module. A two-tier app implies that the client always has some kind of connectivity, so maybe generate some kind of unique id for the client and either add a call-home method, or, if you really are set on instantaneous verification, have the client check that it is properly licensed every time it performs an action that queries the DB (see the rough sketch at the end of this answer).
Lastly, in all of the SQL teachings I have received from DB guys who actually really know their stuff, this kind of setup (a long-running open transaction) was never recommended unless there was a very specific need that could not be solved otherwise.
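If you go the per-call verification route, the check itself can be trivial. A hedged sketch (the table and column names here are made up purely for illustration, and an open SqlConnection is assumed):

static bool IsLicensed(System.Data.SqlClient.SqlConnection connection, Guid clientId, string module)
{
    using (var cmd = connection.CreateCommand())
    {
        // Returns true if the client currently holds a non-expired license for the module.
        cmd.CommandText =
            "SELECT COUNT(*) FROM dbo.ActiveLicenses " +
            "WHERE ClientId = @clientId AND Module = @module AND ExpiresAt > SYSUTCDATETIME()";
        cmd.Parameters.AddWithValue("@clientId", clientId);
        cmd.Parameters.AddWithValue("@module", module);
        return (int)cmd.ExecuteScalar() > 0;
    }
}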
Related
I want to rename a database file, and even though I use a using block with every connection, I still have to call:
FirebirdSql.Data.FirebirdClient.FbConnection.ClearAllPools();
The problem is that this method doesn't block the thread, and I don't know how to check whether all connections have been cleared, because if I read the value of:
FirebirdSql.Data.FirebirdClient.FbConnection.ConnectionPoolsCount
It is zero immediately after the method call, but I am still not able to rename the database file. If I add some delay after the method (I tried 1 s), then the file is not locked and I can rename it. The problem is that this delay could well be different on different machines.
AFAIK the only other way to check whether the file is unlocked is to attempt the rename in a loop with some timeout, but then I cannot be sure whether the lock is held by connections from my application or by something else.
So is there a better way to wait until this method has actually cleared the connections?
Making it an answer for the sake of formatting lists.
@Artholl, you cannot safely rely upon your own disconnection, for a bunch of reasons.
There may be other programs connected, not only your running program. And unless you connect as SYSDBA, the database creator, or with the RDB$ADMIN role, you cannot query whether there are other connections right now. You can, however, query MON$ATTACHMENTS for the connections made with the same user as your CURRENT_CONNECTION. This might help you check the state of your own application's pool, though there is little practical value in it.
In Firebird 3, in SuperServer mode, there is the LINGER parameter: the server keeps the database open for some time after the last client disconnects, expecting that if a new client decides to connect again, the page cache for the DB file is already in place. Much like moderately loaded WWW servers.
Even in Firebird 2 every open database has some caches, and how large those caches are is installation-specific (firebird.conf) and database-specific (gfix/gstat). After the engine sees that all clients have disconnected and decides the database should be closed, it starts by flushing its caches and asking the OS to flush its caches too (there is no general hardware-independent way to make RAID controllers and the disks themselves flush their caches, or Firebird would try that as well). By default Firebird's caches are small and pushing them down to the hardware layer should be fast, but it is still not instant.
Even if you checked that all other clients have disconnected, then disconnected yourself, and then correctly guessed how long to wait for LINGER and the caches, even then you are still not safe: you are subject to race conditions. At the very moment you start doing something that requires exclusive ownership of the DB, some new client may concurrently open a new connection.
So the correct approach would be not merely proving there is no database connection right NOW, but also ensuring there CAN NOT be any new connection in future, until you re-enable it.
So, as Mark said above, you have to use the shutdown methods to bring the database into a no-connections-allowed state, and after you are done with the file renaming and other manipulations, switch it back to normal mode (a rough sketch follows after the link below).
https://www.firebirdsql.org/file/documentation/reference_manuals/user_manuals/html/gfix-dbstartstop.html
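A rough sketch of that shutdown/online dance from C#, assuming the provider's services API (FirebirdSql.Data.Services.FbConfiguration) is available in your version; the connection strings, paths and exact method/enum names are placeholders and may differ between provider versions, so treat this as an illustration rather than a verified call sequence:

using FirebirdSql.Data.Services;

// Shut down the database at its current path so no new connections are allowed.
var shutdown = new FbConfiguration { ConnectionString = oldDatabaseConnectionString };
shutdown.DatabaseShutdown(FbShutdownMode.Forced, 30); // wait up to 30 seconds, then force

// With the database offline, the file can be renamed safely.
System.IO.File.Move(oldDatabasePath, newDatabasePath);

// Bring the renamed database back online via a connection string pointing at the new path.
var online = new FbConfiguration { ConnectionString = newDatabaseConnectionString };
online.DatabaseOnline();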
If I was responsible for maintaining the firebird provider, I wouldn't want users to rely on such functionality.
Other applications could have the file open (you're only in control of connection pools in the current AppDomain), and the server might be running some kind of maintenance on the database.
So even if you can wait for the pools to be cleared, I'd argue that if you really, really have to mess with these files, a more robust solution is to stop the Firebird service instead (and wait for it to have fully stopped).
I am using the latest version of Entity Framework in my application (but I don't think EF is the issue here, just stating which ORM we are using) and have a multi-tenant architecture. I was doing some stress tests, built in C#, in which X number of tasks run in parallel to do some work. At the beginning of the whole process, each task (each tenant in this case) creates a new database and then continues with the bulk of the operation. But on some tasks it throws one of two SQL exceptions at the exact part of my code where it tries to create the new database.
Exception #1:
Could not obtain exclusive lock on database 'model'. Retry the
operation later. CREATE DATABASE failed. Some file names listed could
not be created. Check related errors.
Exception #2:
Timeout expired. The timeout period elapsed prior to completion of
the operation or the server is not responding.
It's either of those two, thrown on the same line of my code (where EF creates the database). Apparently SQL Server creates databases one at a time and locks the 'model' database while doing so (see here), so some of the waiting tasks throw either the timeout or the lock-on-'model' error.
Those tests were done on our development SQL Server 2014 instance (12.0.4213), and if I execute, say, 100 parallel tasks there is bound to be an error thrown on some tasks, sometimes even on nearly half of the tasks I executed.
BUT here's the most disturbing part of all this: when testing it on my other SQL Server instance (12.0.2000), which I have installed locally on my PC, no such error is thrown and all the tasks I executed complete successfully (even 1000 tasks in parallel!).
Solutions I've tried so far but didn't work:
Changed the timeout of the Object context in EF to infinite
Tried adding a longer or infinite timeout on the connection string
Tried adding a Retry strategy on EF and made it longer and run more often
Currently trying to install a virtual machine with an environment similar to our dev server (which uses Windows Server 2014 R2) and test against specific versions of SQL Server, to see whether the versions have anything to do with it (yeah, I'm that desperate :))
Anyway, here is a simple C# console application you can download and use to try to replicate the issue. This test app executes the number of tasks you input; each one simply creates a database and cleans up right afterwards.
2 observations:
Since the underlying issue has something to do with concurrency, and access to a "resource" which at a key point only allows a single, but not a concurrent, accessor, it's unsurprising that you might be getting differing results on two different machines when executing highly concurrent scenarios under load. Further, SQL Server Engine differences might be involved. All of this is just par for the course for trying to figure out and debug concurrency issues, especially with an engine involved that has its own very strong notions of concurrency.
Rather than going against the grain of the situation by trying to make something work or fully explain a situation, when things are empirically not working, why not change approach by designing for cleaner handling of the problem?
One option: acknowledge the reality of SQL Server's need to take an exclusive lock on the model database by regulating access via some kind of concurrency synchronization mechanism. A System.Threading.Monitor sounds about right for what is happening here, and it would allow you to control what happens on a timeout, with a timeout of your choosing. This will help prevent the kind of locked-up scenario that may be happening on the SQL Server end, which would explain the current "timeouts" symptom (although stress load alone might be the explanation).
Another option: see if you can design it so that you don't need to synchronize at all, i.e. get to a point where you never request more than one database create simultaneously. Some kind of queue of the create requests, guaranteed to be serviced by, say, only one thread, with the requesting tasks awaiting the result of the creates (a rough sketch follows).
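For illustration, a rough sketch of the client-side serialization, using SemaphoreSlim rather than Monitor so the wait can be awaited; the class name, timeout and delegate are made up:

using System;
using System.Threading;
using System.Threading.Tasks;

static class TenantDatabaseCreator
{
    // Only one CREATE DATABASE in flight at a time, mirroring SQL Server's lock on 'model'.
    private static readonly SemaphoreSlim CreateGate = new SemaphoreSlim(1, 1);

    public static async Task CreateSerializedAsync(Func<Task> createDatabase)
    {
        // Pick a timeout you handle explicitly instead of letting SQL Server time out on the 'model' lock.
        if (!await CreateGate.WaitAsync(TimeSpan.FromMinutes(2)))
            throw new TimeoutException("Timed out waiting for a previous database create to finish.");
        try
        {
            await createDatabase(); // e.g. the EF call that creates the tenant database
        }
        finally
        {
            CreateGate.Release();
        }
    }
}

Note that this only serializes creates within a single process; if several app instances create databases against the same SQL Server instance, you would still need something server-side or the queue approach.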
Either way, you are going to have situations where this slows down to a crawl under stress testing, with super stressed loads causing failure. The key questions are:
Can your design handle some multiple of the likely worst case load and still show acceptable performance?
If failure does occur, is your response to the failure "controlled" in a way that you have designed for?
Probably you have different LockTimeoutSeconds and QueryTimeoutSeconds set on the development and local instances for SSDT (DacFx Deploy), which is deploying the databases.
For example LockTimeoutSeconds is used to set lock_timeout. If you have a small number here, this is the reason for
Could not obtain exclusive lock on database 'model'. Retry the operation later. CREATE DATABASE failed. Some file names listed could not be created. Check related errors.
You can use the query below to identify what timeout is set by SSDT
select session_id, lock_timeout, * from sys.dm_exec_sessions where login_name = 'username'
To increase the default timeout, find the identifier of the user who is deploying the database here:
HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\ProfileList
Then find the following registry key
HKEY_USERS\your user identifier\Microsoft\VisualStudio\your version\SQLDB\Database
and change the values for LockTimeoutSeconds and QueryTimeoutSeconds
We are building a WinForms desktop application which talks to SQL Server through NHibernate. After extensive research we settled on a session-per-form strategy, using Ninject to inject a new ISession into each form (or the backing controller, to be precise). So far it is working decently.
Unfortunately the main form holds a lot of data (mostly read-only) which gets stale after some time. To prevent this we implemented a background service (really just a separate class) which polls the DB for changes and raises an event that lets the main form selectively update the changed rows.
This background service also gets a separate session to minimize interference with the other forms. Our understanding was that it is possible to have one open transaction per session in parallel, as long as they are not nested.
Sadly this doesn't seem to be the case: we either get an ObjectDisposedException in one of the forms or in the service (because the service session used an existing transaction from one of the forms and committed it, which then fails the commit in the form, or the other way round), or we get an InvalidOperationException stating that "Parallel transactions are not supported by SQL Server".
Is there really no way to open more than one transaction in parallel (across separate sessions)?
And alternatively is there a better way to update stale data in a long running form?
Thanks in advance!
I'm pretty sure you have messed something up, and are sharing either session or connection instances in ways you did not intend.
It can depend a bit on which sort of transactions you use:
If you use only NHibernate transactions (session.BeginTransaction()), each session acts independently. Unless you do something special to supply your own underlying database connections (and made an error there), each session will have its own connection and transaction.
If you use TransactionScope from System.Transactions in addition to the NHibernate transactions, you need to be careful about thread handling and the TransactionScopeOption. Otherwise different parts of your code may unexpectedly share the same transaction if a single thread runs through both parts and you haven't used TransactionScopeOption.RequiresNew.
Perhaps you are not properly disposing your transactions (and sessions)?
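For comparison, properly scoped independent sessions with their own NHibernate transactions typically look like this (sessionFactory stands in for your application's ISessionFactory):

using (var session1 = sessionFactory.OpenSession())
using (var tx1 = session1.BeginTransaction())
using (var session2 = sessionFactory.OpenSession())
using (var tx2 = session2.BeginTransaction())
{
    // Each session has its own connection; the two transactions do not interfere.
    // ... work on session1 ...
    tx1.Commit();
    // ... work on session2 ...
    tx2.Commit();
}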
Can anyone tell me what the disadvantages are of using MicrosoftApplicationsDataBlock.dll (the SqlHelper class)?
How can we sustain the maximum number of connection requests in a .NET application?
If we have lakhs of requests at a time, is it OK to use MicrosoftApplicationsDataBlock.dll (the SqlHelper class)?
More "modern" dataaccess libraries are generally preferable, they provide better performance, flexibility and usability. I would generally avoid the old SQLHelper class if possible. :) I worked on an old project where a dependency on the SQLHelper class kept us from upgrading from .NET 1.1 to .NET 4.
For awesome performance, you may want to take a look at Dapper; it's used here at Stack Overflow and is very fast and easy to use.
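A minimal Dapper example (the User type, query and connectionString are made up for illustration):

using Dapper;
using System.Data.SqlClient;

class User { public int Id { get; set; } public string Name { get; set; } }

// Dapper adds Query<T> as an extension method on the connection and maps columns to properties.
using (var connection = new SqlConnection(connectionString))
{
    var users = connection.Query<User>(
        "select Id, Name from Users where CompanyId = @companyId",
        new { companyId = 42 });
}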
But if you're looking at 100k simultaneous requests (per second, minute, day??) you probably want to avoid the database altogether. Look at caching, either ASP.NET's own built-in cache or maybe something like the Windows Server AppFabric Cache.
Disadvantages of SqlHelper? I don't think there are any. You get a lot of code for free to open and close connections, handle transactions, etc. Nothing you couldn't write yourself, but the number of connections you can open from your app is not a function of SqlHelper or any other DB helper you use. In any scenario you call System.Data.SqlClient, which is the API to connect to and work with SQL Server.
When you launch N connections they all go to the SQL Server scheduler services. If all the CPUs are busy working on available SPIDs (processes), the new ones go into a queue. You can see them using sp_who2, or select * from sys.sysprocesses.
The waiting SPIDs are offered CPU cycles at intervals (based on some kind of algorithm that I am not sure of). This is called SOS scheduler yield, where one process yields the scheduler to another. This works fine until you reach the maximum number of concurrent connections the server can hold, which differs between SQL Server editions (Developer/Enterprise etc.). When you reach that maximum, SQL Server has no more threads left in its thread pool to give your app new connections, and you will get a SqlException saying the connection timed out.
Long story short: you can open as many connections as you want and keep them open as long as you want, using SqlHelper or a traditional connection.Open(). Good practice is to open a connection, do an atomic unit of work, and close the connection (see the sketch below), and not to keep too many connections open, because your SQL box will run out of connection handles to give your app.
SqlHelper is just a helper; the best practices of ADO.NET programming still apply whether you use it or not.
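For example, the "open, do one atomic unit of work, close" pattern looks like this in plain ADO.NET (the table, parameters and connectionString are illustrative):

using (var connection = new System.Data.SqlClient.SqlConnection(connectionString))
{
    connection.Open();
    using (var transaction = connection.BeginTransaction())
    using (var command = connection.CreateCommand())
    {
        command.Transaction = transaction;
        command.CommandText = "UPDATE dbo.Accounts SET Balance = Balance - @amount WHERE Id = @id";
        command.Parameters.AddWithValue("@amount", 100m);
        command.Parameters.AddWithValue("@id", 1);
        command.ExecuteNonQuery();
        transaction.Commit();
    }
} // Dispose returns the underlying connection to the pool.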
I have a table with a heavy load (many inserts/updates/deletes) in a SQL Server 2005 database. I'd like to do some post-processing for all these changes in as close to real time as possible (asynchronously, so as not to lock the table in any way). I've looked at a number of possible solutions but just can't seem to find that one neat solution that feels right.
The post-processing is fairly heavy as well, so much so that the Windows listener service is actually going to pass the processing over to a number of machines. However, this part of the application is already up and running, completely asynchronous, and not what I need help with. I just wanted to mention it because it affects the design decision in that we couldn't simply load up some CLR object in the DB to complete the processing.
So the simple problem remains: data changes in a table, and I want to do some processing in C# code on a remote server.
At present we've come up with using a SQL trigger which executes xp_cmdshell to launch an exe, which raises an event that the Windows service is listening for. This just feels bad.
However, other solutions I've looked at online feel rather convoluted too. For instance, setting up SqlCacheDependency also involves setting up Service Broker. Another possible solution is to use a CLR trigger which can call a web service, but there are so many warnings online about this being a bad way to go about it, especially when performance is critical.
Ideally we wouldn't depend on the table changes but would rather intercept the call inside our application and notify the service from there; unfortunately we have some legacy applications making changes to the data too, and monitoring the table is the only centralised place at the moment.
Any help would be most appreciated.
Summary:
Need to respond to table data changes in real time
Performance is critical
High volume of traffic is expected
Polling and scheduled tasks are not an option (nor real time)
Implementing Service Broker seems too big (but might be the only solution?)
CLR code is not yet ruled out, but needs to be performant if suggested
Listener / monitor may be a remote machine (likely on the same physical network)
You really don't have that many ways to detect changes in SQL 2005. You already listed most of them.
Query Notifications. This is the technology that powers SqlDependency and its derivatives; you can read more details in The Mysterious Notification. But QN is designed to invalidate results, not to proactively notify about change content. You will only know that the table has changes, without knowing what changed. On a busy system this will not work, as the notifications will come pretty much continuously.
Log reading. This is what transactional replication uses and is the least intrusive way to detect changes. Unfortunately it is only available to internal components. Even if you managed to understand the log format, the problem is that you need support from the engine to mark the log as 'in use' until you read it, or it may be overwritten. Only transactional replication can do this sort of special marking.
Data compare. Rely on timestamp columns to detect changes. This is also pull-based, quite intrusive, and has problems detecting deletes.
Application layer. This is the best option in theory, unless there are changes occurring to the data outside the scope of the application, in which case it crumbles. In practice there are always changes occurring outside the scope of the application.
Triggers. Ultimately, this is the only viable option. All change mechanisms based on triggers work the same way, they queue up the change notification to a component that monitors the queue.
There are always suggestions to do a tightly coupled, synchronous notification (via xp_cmdshell, xp_olecreate, CLR, notify with WCF, you name it), but all these schemes fail in practice because they are fundamentally flawed:
- they do not account for transaction consistency and rollbacks
- they introduce availability dependencies (the OLTP system cannot proceed unless the notified component is online)
- they perform horribly as each DML operation has to wait for an RPC call of some form to complete
If the triggers do not actually actively notify the listeners, but only queue up the notifications, there is the problem of monitoring the notification queue (when I say 'queue', I mean any table that acts as a queue). Monitoring implies polling for new entries in the queue, which means balancing the frequency of checks against the load of changes, and reacting to load spikes. This is not trivial at all; actually it is very difficult. However, there is one statement in SQL Server that has the semantics to block, without polling, until changes become available: WAITFOR(RECEIVE). That means Service Broker. You mentioned SSB several times in your post, but you are, rightfully so, scared of deploying it because of the big unknown. But the reality is that it is, by far, the best fit for the task you described.
You do not have to deploy a full SSB architecture, where the notification is delivered all the way to the remote service (that would require a remote SQL instance anyway, even an Express one). All you need to accomplish is to decouple the moment when the change is detected (the DML trigger) from the moment when the notification is delivered (after the change is committed). For this, all you need is a local SSB queue and service. In the trigger, you SEND a change notification to the local service. After the original DML transaction commits, the service procedure activates and delivers the notification, using CLR for instance. You can see an example of something similar to this at Asynchronous T-SQL.
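As a variation on the above, if the listening Windows service reads the local SSB queue itself instead of having an activated procedure push the notification out, the blocking receive from C# might look roughly like this (the queue name, message format and connectionString are made up):

using (var connection = new System.Data.SqlClient.SqlConnection(connectionString))
{
    connection.Open();
    using (var command = connection.CreateCommand())
    {
        // Blocks server-side until a message arrives or 60 seconds pass; no client-side polling loop.
        command.CommandText =
            "WAITFOR (RECEIVE TOP (1) CAST(message_body AS NVARCHAR(MAX)) AS Body " +
            "FROM dbo.ChangeNotificationQueue), TIMEOUT 60000;";
        command.CommandTimeout = 0; // let the WAITFOR timeout govern
        using (var reader = command.ExecuteReader())
        {
            if (reader.Read() && !reader.IsDBNull(0))
            {
                string notification = reader.GetString(0);
                // hand the notification over to the processing machines here
            }
        }
    }
}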
If you go down that path there are some tricks you'll need to learn to achieve high throughput, and you must understand the concept of ordered delivery of messages in SSB. I recommend you read these links:
Reusing Conversations
Writing Service Broker Procedures
SQL Connections 2007 Demo
About the means to detect changes, SQL 2008 apparently adds new options: Change Data Capture and Change Tracking. I emphasize 'apparently', since they are not really new technologies. CDC uses a log reader and is based on the existing transactional replication mechanisms. CT uses triggers and is very similar to the existing merge replication mechanisms. They are both intended for occasionally connected systems that need to sync up, and hence not appropriate for real-time change notification. They can populate the change tables, but you are left with the task of monitoring those tables for changes, which is exactly where you started.
This could be done in many ways. The method below is simple, since you don't want to use CLR triggers or the xp_cmdshell option.
Instead of a CLR trigger, you can create a normal insert trigger which updates a dedicated tracking table on each insert.
Then develop a dedicated Windows service which actively polls the tracking table, notifies the remote service whenever the data has changed, and sets the status in the tracking table to done (so the same rows won't be picked up again). A rough sketch of such a service loop follows.
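A hedged sketch of that polling loop (table, column and status values are made up; the UPDATE ... OUTPUT claims a batch of rows and marks it done in one statement):

using System;
using System.Data.SqlClient;
using System.Threading;
using System.Threading.Tasks;

static async Task PollTrackingTableAsync(string connectionString, CancellationToken token)
{
    while (!token.IsCancellationRequested)
    {
        using (var connection = new SqlConnection(connectionString))
        {
            connection.Open();
            using (var command = connection.CreateCommand())
            {
                command.CommandText =
                    "UPDATE TOP (100) dbo.ChangeTracking " +
                    "SET Status = 'Done' " +
                    "OUTPUT inserted.RowId " +
                    "WHERE Status = 'New';";
                using (var reader = command.ExecuteReader())
                {
                    while (reader.Read())
                    {
                        long rowId = reader.GetInt64(0);
                        // notify the remote service about rowId here
                    }
                }
            }
        }
        await Task.Delay(TimeSpan.FromSeconds(1), token); // polling interval
    }
}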
EDIT:
I think Microsoft Sync Services for ADO.NET could work for you. Check out the links below; they may help you:
How to: Use SQL Server Change Tracking - sql server 2008
Use a Custom Change Tracking System - below sql server 2008
In similar circumstances we are using a CLR trigger that writes messages to a queue (MSMQ). A service written in C# monitors the queue and does the post-processing.
In our case it is all done on the same server, but you can send those messages directly to a remote queue on a different machine, totally bypassing the "local listener".
The code called from the trigger looks like this (it needs a reference to System.Messaging; LogException is our own logging helper, stubbed out here):
using System;
using System.Messaging;
using System.Threading;

public static void SendMsmqMessage(string queueName, string data)
{
    // Define the queue path based on the input parameter.
    string QueuePath = String.Format(".\\private$\\{0}", queueName);
    try
    {
        if (!MessageQueue.Exists(QueuePath))
            MessageQueue.Create(QueuePath);

        // Open the queue with the Send access mode.
        MessageQueue MSMQueue = new MessageQueue(QueuePath, QueueAccessMode.Send);

        // Define the queue message formatting and create the message.
        BinaryMessageFormatter MessageFormatter = new BinaryMessageFormatter();
        Message MSMQMessage = new Message(data, MessageFormatter);

        MSMQueue.Send(MSMQMessage);
    }
    catch (Exception x)
    {
        // Async logging: gotta return from the trigger ASAP.
        ThreadPool.QueueUserWorkItem(new WaitCallback(LogException), x);
    }
}

private static void LogException(object state)
{
    // Stub: persist the exception somewhere that does not block the trigger.
}
Since you said there are many inserts running on that table, batch processing might fit better.
Why not just create a scheduled job which handles new data identified by a flag column and processes it in large chunks?
Use the typical trigger to fire a CLR stored procedure on the database. This CLR proc then simply starts a program on a remote computer using the Win32_Process class:
http://motevich.blogspot.com/2007/11/execute-program-on-remote-computer.html