Connection object - C#

Can anyone tell me the disadvantages of using MicrosoftApplicationsDataBlock.dll (the SQLHelper class)?
How can we sustain the maximum number of connection requests in a .NET application?
If we have hundreds of thousands (lakhs) of requests at a time, is it OK to use MicrosoftApplicationsDataBlock.dll (the SQLHelper class)?

More "modern" dataaccess libraries are generally preferable, they provide better performance, flexibility and usability. I would generally avoid the old SQLHelper class if possible. :) I worked on an old project where a dependency on the SQLHelper class kept us from upgrading from .NET 1.1 to .NET 4.
For awesome performance, you may want to take a look at Dapper; it's used here at Stack Overflow and is very fast and easy to use.
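For illustration only, a minimal Dapper query might look like this (the connection string, table, and POCO are made up):

using System.Linq;
using System.Data.SqlClient;
using Dapper; // adds Query<T> and Execute extension methods to IDbConnection

public class Product
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public static class ProductRepository
{
    public static Product GetProduct(string connectionString, int id)
    {
        using (var connection = new SqlConnection(connectionString))
        {
            // Dapper opens the connection if needed and maps rows to Product.
            return connection.Query<Product>(
                "SELECT Id, Name FROM Products WHERE Id = @id",
                new { id }).FirstOrDefault();
        }
    }
}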
But if you're looking at 100k simultaneous requests (per second, minute, day?), you probably want to avoid the database altogether. Look at caching, either ASP.NET's own built-in cache or something like the Windows Server AppFabric Cache.
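As a rough sketch of the caching idea, here is the framework's System.Runtime.Caching cache standing in for whichever cache you pick; the key, lifetime, and loader are illustrative:

using System;
using System.Runtime.Caching;

public static class ReportCache
{
    // Serve repeated requests from memory so that at most one query
    // per 30 seconds actually reaches the database.
    public static object GetDailyReport(Func<object> loadFromDatabase)
    {
        var cache = MemoryCache.Default;
        var report = cache.Get("daily-report");
        if (report == null)
        {
            report = loadFromDatabase(); // hypothetical loader
            cache.Set("daily-report", report,
                DateTimeOffset.Now.AddSeconds(30)); // absolute expiration
        }
        return report;
    }
}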

Disadvantages of SqlHelper? I don't think there are any. You get a lot of code for free to open and close connections, handle transactions, and so on; nothing you couldn't write yourself. But the number of connections your app can open is not a factor of SqlHelper or any other DB helper you use. In any scenario you end up calling System.Data.SqlClient, which is the API for connecting to and working with SQL Server.
When you launch N connections, they all go to the SQL Server scheduler services. If all the CPUs are busy working on the available SPIDs (processes), the new ones go into a queue. You can see them using sp_who2, or SELECT * FROM sys.sysprocesses.
The waiting SPIDs are offered CPU cycles at intervals (based on some kind of algorithm I am not sure of). This is called an SOS scheduler yield, where one process yields the scheduler to another. This works fine until you reach the maximum number of concurrent connections the server can hold, which differs across SQL Server editions (Developer, Enterprise, etc.). When you reach that maximum, SQL Server has no more threads left in its thread pool to give your app new connections, and you will get a SqlException saying the connection timed out.
Long story short: you can open as many connections as you like and keep them open as long as you want, whether through SqlHelper or a traditional Connection.Open. But good practice is to open a connection, do an atomic unit of work, and close the connection, and not to open too many connections, because your SQL box will run out of connection handles to give your app.
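A sketch of that practice in plain ADO.NET (the connection string and SQL are illustrative):

using System.Data.SqlClient;

public static class Transfer
{
    public static void Debit(string connectionString, int accountId, decimal amount)
    {
        // Open late, do one atomic unit of work, close early.
        using (var connection = new SqlConnection(connectionString))
        {
            connection.Open();
            using (var transaction = connection.BeginTransaction())
            using (var command = new SqlCommand(
                "UPDATE Accounts SET Balance = Balance - @amount WHERE Id = @id",
                connection, transaction))
            {
                command.Parameters.AddWithValue("@amount", amount);
                command.Parameters.AddWithValue("@id", accountId);
                command.ExecuteNonQuery();
                transaction.Commit();
            }
        } // Dispose closes the connection (returns it to the pool).
    }
}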
SqlHelper is just a helper; the best practices of ADO.NET programming still apply whether you use it or not.

Related

Create a 'Licensing' feature with SQL-Server

I want to implement the following interface in a 2-tier application with an MS SQL Server 2008 R2 backend (i.e. no app server in between):
interface ILicense {
    void Acquire(string license);
    void Release(string license);
}
However, I want to release the license even if the application is killed or bombs out without calling the Release method. I also want to avoid using a timer which refreshes the license every minute or so.
So I thought: use a dedicated SqlConnection together with the sp_getapplock and sp_releaseapplock stored procedures, because that's what they seem to be made for. Then I found out that the SPs only work from within a transaction, so I would need to keep the transaction open the whole time (i.e. while the application is running). Anyway, it works that way: the application starts, opens the connection, starts the transaction, and locks the license.
When the application terminates, the connection is closed, everything is rolled back and the license is released. Super.
Whenever the running app needs to switch licenses (e.g. for another module), it calls Release on the old license and then Acquire on the new one. Cool.
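In code, what I have looks roughly like this (a simplified sketch; error handling and the Release path are trimmed):

using System;
using System.Data;
using System.Data.SqlClient;

public class SqlLicense : IDisposable
{
    private readonly SqlConnection _connection;   // held for the app's lifetime
    private readonly SqlTransaction _transaction; // keeps the app lock alive

    public SqlLicense(string connectionString)
    {
        _connection = new SqlConnection(connectionString);
        _connection.Open();
        _transaction = _connection.BeginTransaction();
    }

    public void Acquire(string license)
    {
        using (var command = new SqlCommand("sp_getapplock", _connection, _transaction))
        {
            command.CommandType = CommandType.StoredProcedure;
            command.Parameters.AddWithValue("@Resource", license);
            command.Parameters.AddWithValue("@LockMode", "Exclusive");
            command.Parameters.AddWithValue("@LockTimeout", 0);
            var result = command.Parameters.Add("@Result", SqlDbType.Int);
            result.Direction = ParameterDirection.ReturnValue;
            command.ExecuteNonQuery();
            if ((int)result.Value < 0) // sp_getapplock returns < 0 if not granted
                throw new InvalidOperationException("License already in use.");
        }
    }

    // If the process dies, the connection drops, the transaction rolls back,
    // and SQL Server releases the lock automatically.
    public void Dispose() { _connection.Dispose(); }
}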
Now to my question(s):
Is it acceptable to have an open (uncommitted) transaction on a separate connection for a long time?
Are there any better ways to implement such a 'lock' mechanism? The problem is that the license must be released even if the application terminates unexpectedly. I thought of some sort of 'logout' trigger, but that does not exist in SQL Server 2008 R2.
I am by no means the SQL or DB guru that some of the members of this site are, but your setup raises a few concerns:
This could really limit the number of concurrent users your application can support, especially in a 2-tier architecture. In a 3-tier approach the app server would manage and pool these connections/transactions, but then you would lose the ability to use those stored procs to implement your licensing mechanism, I believe.
With the transaction open for some indeterminate period of time, I would worry about the possibility of tempdb growing too big or exceeding the space allocated to it. I don't know what is going on in the app and whether there is anything else happening in that transaction; my guess is no, but I thought I would mention it.
I hope I am not getting my SQL versions mixed up here, but transaction wraparound could cause the DB to shut down.
This limits your app significantly, as the data touched in the transaction holds locks that won't be released until you commit or roll back.
There must be a more elegant way to implement a licensing model that doesn't rely on leaving a transaction open for the life of the app or app module. A 2-tier app implies that the client always has some kind of connectivity, so maybe generate some kind of unique id for the client and either add a call-home method or, if you really are set on instantaneous verification, have every client action that queries the DB also check that the client is properly licensed (see the sketch after this answer).
Lastly, in all of the SQL teachings I have received from DB guys who really know their stuff, this kind of setup (a long-running open transaction) was never recommended unless there was a very specific need that could not be solved otherwise.
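A rough sketch of the per-action check suggested above (the table, columns, and id scheme are made up):

using System;
using System.Data.SqlClient;

public static class LicenseCheck
{
    // Called alongside every database action the client performs.
    public static bool IsLicensed(SqlConnection connection, Guid clientId)
    {
        using (var command = new SqlCommand(
            "SELECT COUNT(*) FROM Licenses " +
            "WHERE ClientId = @id AND ExpiresUtc > GETUTCDATE()", connection))
        {
            command.Parameters.AddWithValue("@id", clientId);
            return (int)command.ExecuteScalar() > 0;
        }
    }
}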

Closing connection to DB

I have a question about closing connections in C#. My company has an application where data flows automatically online from the app to the DB. I would like to create my own ASP + C# application which will SELECT from that data (the DB table the company app fills) as the source for an independent report. My question: can closing the connection in my app influence the second (company, very important) app? Could a record go missing in the DB because I closed my connection, or could there be any other problems?
No, everything will be safe if you close it properly. I recommend you always use the using construct. It is transformed into try/finally and closes the resources automatically.
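For example (the query and connection string are illustrative); the trailing comment shows roughly what the compiler generates:

using System.Data.SqlClient;

public static class OrderStats
{
    public static int CountOrders(string connectionString)
    {
        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand("SELECT COUNT(*) FROM Orders", connection))
        {
            connection.Open();
            return (int)command.ExecuteScalar();
        }
        // The using statement expands to roughly:
        //   var connection = new SqlConnection(connectionString);
        //   try { /* work */ }
        //   finally { if (connection != null) connection.Dispose(); }
    }
}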
That totally depends on your use case: if you open, and leave open, hundreds upon hundreds or thousands upon thousands of idle connections, the SQL Server will slowly begin to suffer performance degradation.
Think of it as asking your boss for something by saying, "Boss, I need to ask you a question," and then reminding him hundreds or thousands of times a second, "I need to ask you a question." Anything else he tries to do will slowly lose performance, because he has to keep processing the fact that you are going to ask him a question. It is the same with SQL Server. Mind you, at this point you haven't even asked your question yet.
If your DBMS is Microsoft SQL Server, see this article: https://msdn.microsoft.com/en-us/library/ms187030.aspx
SQL Server allows a maximum of 32,767 user connections.
If you open 32k connections to the server, two things will likely happen:
Your DBA will come to you and say "wtf, mate?" by the time you get close. An argument will likely ensue, in which case you and the DBA will probably end up yelling and creating a scene.
Your DBMS will reach the maximum connection limit and all the other connections will fail.
I'm not saying any of this will happen, since it requires you to open 32,767 concurrent connections, but it goes to further prove that you should open and close connections as required. Also, if your application uses a pool of connections and you open n connections while the pool limit (separate from SQL Server, mind you) is n, you have just stopped your app from opening more.
Generally speaking, you should open your connections as late as possible, and close them as early as possible.

C# SQL Server Connectivity - Constant Connection (client - server)

I have written many desktop applications, and all have gone great using a MySQL connection to a database and SQL to query it. I now want to start a larger project, and it feels "wrong" to make many, many database connections when I could connect to the server in a client-server relationship and just query without having to keep opening and closing connections.
I have done a fair bit of digging around on Google, but to no avail. I think it's a case of knowing what I want to search for, but not what terms to search with.
Any gentle nudge in the right direction would be greatly appreciated!
Generally it is accepted best practice to open a database connection, perform some actions and then close the connection.
It is best not to worry about the efficiency of using lots of connections in this fashion; SQL Server deals with this quite nicely using connection pooling: http://msdn.microsoft.com/en-us/library/8xx3tyca(v=vs.110).aspx
If you decide to keep connections open throughout the use of your application, you run the risk of having lots of idle connections sitting around, which is more "wrong" than opening and closing.
Obviously there are exceptions to these rules (such as if you find yourself opening hundreds of connections in very short succession)... but this is general advice.
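A quick sketch of why open/close is cheap in practice (the pool sizes shown are illustrative; pooling is on by default):

using System.Data.SqlClient;

public static class PoolingDemo
{
    public static void Run()
    {
        // Connections with an identical connection string share a pool;
        // Min/Max Pool Size are standard SqlConnection keywords.
        const string connectionString =
            "Server=.;Database=Shop;Integrated Security=true;" +
            "Min Pool Size=5;Max Pool Size=100";

        for (int i = 0; i < 1000; i++)
        {
            using (var connection = new SqlConnection(connectionString))
            {
                connection.Open(); // usually just borrows a pooled connection
            }                      // Dispose returns it to the pool
        }
    }
}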

Force simultaneous threads/tasks for C# load testing app?

Question:
Is there a way to force the Task Parallel Library to run multiple tasks simultaneously? Even if it means making the whole process run slower with all the added context switching on each core?
Background:
I'm fairly new to multithreading, so I could use some assistance. My initial research hasn't turned up much, but I also doubt I know what exactly to search for. Perhaps someone more experienced with multithreading can help me better understand TPL and/or find a better solution.
Our company is planning on deploying a piece of software to all users' machines that will connect to a central server a few times a day, and synchronize some files and MS Access data back to the user's machine. We would like to load-test this concept first and see how the Access DB holds up to lots of simultaneous connections.
I've been tasked with writing a .NET application that behaves like the client app (connecting & syncing with a network location), but does this on multiple threads simultaneously.
I've been getting familiar with the Task Parallel Library (TPL), as this seems like the best (newest) way to handle multithreading and to get return values back from each thread easily. However, as I understand it, the TPL decides how to run each "task" for the fastest execution possible, splitting the work among the available cores. So let's say I want to run 30 sync jobs on a 2-core machine... the TPL would run 15 on each core, sequentially. This would mean my load test would only be hitting the Access DB with at most 2 connections at the same time. I want to hit the database with lots of simultaneous connections.
You can force the TPL to do this by specifying TaskCreationOptions.LongRunning. According to Reflector (not according to the docs, though), this always creates a new thread. I consider relying on this safe for production use.
Normal tasks will not do, because they don't guarantee immediate execution. Setting the thread pool's minimum thread count is a horrible solution (for production) because you are changing a process-global setting to solve a local problem. And even then, you are not guaranteed success.
Of course, you can also start threads directly. Tasks are more convenient, though, because of their error handling. There is nothing wrong with using threads for this use case.
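A sketch of forcing 30 simultaneous workers with that option (SyncWithServer is a placeholder for your real sync job):

using System.Threading.Tasks;

public static class LoadTest
{
    public static void Run()
    {
        var tasks = new Task[30];
        for (int i = 0; i < tasks.Length; i++)
        {
            int clientId = i; // capture a stable copy for the closure
            // LongRunning hints the scheduler to give each task its own
            // thread instead of a slot on the shared thread pool.
            tasks[i] = Task.Factory.StartNew(
                () => SyncWithServer(clientId),
                TaskCreationOptions.LongRunning);
        }
        Task.WaitAll(tasks);
    }

    private static void SyncWithServer(int clientId)
    {
        // placeholder: connect to the network share / Access DB and sync
    }
}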
Based on your comment, I think you should reconsider using Access in the first place. It doesn't scale well and has problems once the database grows to a certain size, especially if it is simply served off some file share on your network.
You can try to simulate load from your single machine, but I don't think that would be very representative of what you are trying to accomplish.
Have you considered using SQL Server Express? It's basically a de-tuned version of the full-blown SQL Server which might suit your needs better.

Having lots of open connections vs repeatedly opening and closing them

I'm investigating some performance issues in our product. We have several threads (on the order of tens) that each do some form of polling every couple of seconds, and every time we do that we open a connection to the DB, query, then dispose.
So I'm wondering, would it be more efficient for each thread to keep an open connection to the DB so we don't have to go through the overhead of opening a connection every 2 seconds? Or is it better to stick with the textbook using blocks?
The first thing to learn about is connection pooling. You're already using it; don't change your code.
The question becomes: how many connections to claim in my config file?
And that's easy to change and measure.
As mentioned, connection pooling should take care of it, but if you are beating on the database with messaging or something like that, checking on the status of things every few seconds, you could be filling up the connection pool very quickly. If you are on SQL Server, run sp_who2 in a query window and you'll see a lot of information: the number of SPIDs (open connections), blocking, etc.
In general, connection setup and teardown is expensive; doing it multiple times across tens of threads might be crippling. Note, however, that the stack you use might already be pooling connections for you (even if you're not aware of it), so this may not be necessary (see your manuals and configuration).
On the other hand, if you decide to implement some sort of connection pooling yourself, check that your DB server can handle the tens of extra connections (it probably can).
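One quick way to check the server-side ceiling, assuming SQL Server (@@MAX_CONNECTIONS is standard T-SQL; the connection string is up to you):

using System.Data.SqlClient;

public static class ServerLimits
{
    public static int MaxConnections(string connectionString)
    {
        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand("SELECT @@MAX_CONNECTIONS", connection))
        {
            connection.Open();
            return (int)command.ExecuteScalar(); // configured server maximum
        }
    }
}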
