One long connection or multiple short connections to the database? - c#

I'm currently writing a Windows Service that will run on a Windows Server 2008.
My colleague and I were discussing one point in particular: the connection to the database.
We each have a different approach in mind and we would like to know your opinions on it.
Basically, the service starts a thread that sends a query to the database to check for rows that have a certain status (for example ST005). All rows with that status are returned, the data we receive is processed, and the rows are updated at the end.
So basically we execute one query and then an update for each row. Multiple threads can be running at the same time. There is no problem with the coding; it's the structure we can't seem to agree on.
The classes we have are a controller, a DAO and a database class.
Way #1:
The controller creates a DAO class to process a query. That DAO class builds the SQL statement with its parameters and then creates a database class, which opens the connection, executes the query, returns the result set, and then closes the connection.
This way there will be a new connection each time a query or update is requested.
Way #2:
The controller creates a database class (the database class now has two new methods, connect() and disconnect()). The controller then calls connect(), creates a DAO class, and passes the database class as a parameter to the DAO's constructor. The DAO class builds the SQL statement with its parameters and then processes the data.
This way there is only one database class during the whole lifetime of the thread. The same connection stays open for the entire thread and is only closed near the end of the thread's lifetime.
Which way is best here? Having multiple connections seems like bad practice, or are we wrong about that? Any insight on this will be appreciated.
Regards,
Floris

Use a connection pool, which is almost certainly provided by your DBMS vendor, and let it figure out the best strategy. That ends the discussion.
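To illustrate, here is a minimal sketch of way #1 sitting on top of the pool, assuming SQL Server and System.Data.SqlClient (the table and column names are invented for the example). Each call "creates" a connection, but with ADO.NET pooling enabled (the default) that is just a cheap borrow/return:

```csharp
using System.Data.SqlClient;

public class StatusDao
{
    private readonly string _connectionString;

    public StatusDao(string connectionString)
    {
        _connectionString = connectionString;
    }

    public void UpdateStatus(int id, string newStatus)
    {
        // "new SqlConnection" does not open a physical connection;
        // Open() borrows one from the pool and Dispose() returns it.
        using (var conn = new SqlConnection(_connectionString))
        using (var cmd = new SqlCommand(
            "UPDATE Orders SET Status = @status WHERE Id = @id", conn))
        {
            cmd.Parameters.AddWithValue("@status", newStatus);
            cmd.Parameters.AddWithValue("@id", id);
            conn.Open();
            cmd.ExecuteNonQuery();
        }
    }
}
```

Because each thread borrows its own connection from the pool, this pattern is also safe with multiple worker threads running at once.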

Related

Service modification to per-session

I have a system in which the service already set up for a specific process used to run in single-instance mode. The service ran a long process that could serve only one client. The architecture is as follows:
Now I am trying to make this WCF service per-session, so that it can run the long operation for two or more clients simultaneously, since the process usually takes time. I am also sending the percentage of completion back to the client using a callback channel. This is what the architecture looks like now:
The major differences between the two architectures are:
- Previously only one user could run the process, for multiple objects. Now each user can run the long process, but for different objects.
- We have added a callback facility to the new architecture with the per-session service.
- We also plan on giving the user the ability to terminate the process, if he wishes to or if the client connection is closed.
But while trying to achieve the above we are facing the following issues.
The long-running operation happens in the database through multiple stored procedures, called one by one from a static data-manager class.
Each SP is responsible for adding around 500k rows in multiple tables.
Terminating the connection from the client removes the instance of the service, but since the database operations are done in the static class, control gets stuck there and everything stops responding.
I know there is a DbCommand.Cancel() method which stops the operation associated with the DbCommand, but since the class is static, cancelling is also not possible.
Please suggest the architectural changes needed to solve this issue. I am ready to share more details.
From what I understand, you want multiple clients at the same time, and that doesn't fit with the static behavior that effectively gives you a singleton.
I would correct that.
Regards

c# wrapper class for MySQL, is it a good idea to share "MySqlConnection()" object between the methods?

I am writing a C# wrapper class for MySQL, and I need to know whether I can create a new MySqlConnection(connstr) in the constructor and then share that same object between the methods that manipulate the database, or whether I should create a new connection for every operation (select, insert, delete, update) and destroy it when done. Any ideas or better approaches for writing a MySQL wrapper?
I'd recommend not sharing that connection.
It won't scale as well as getting that connection out from a pool when it's needed, performing the SQL operation, closing resources, and then returning the connection to the pool.
I consider it a best practice to restrict SQL objects to the data access layer and to use them in the narrowest scope possible. Don't pass them around - the responsibility for cleaning them up isn't clear if you do.
Connections ought to be owned by the service that knows about use cases, units of work, and transactions. Let it check out the connection, make it available to all DAOs that need it, then commit or rollback and close the connection.
DAOs should be given database connections. They should not acquire them on their own, because they can never know if they're part of a larger transaction or not.
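A sketch of that ownership split, using SQL Server types for illustration (the service, DAO, and table names are all invented): the service owns the unit of work, and the DAO only receives the connection and transaction it should participate in:

```csharp
using System.Data.SqlClient;

public class OrderDao
{
    // The DAO never opens or closes anything; it works with what it is given,
    // so it composes naturally into a larger transaction.
    public void MarkProcessed(SqlConnection conn, SqlTransaction tx, int orderId)
    {
        using (var cmd = new SqlCommand(
            "UPDATE Orders SET Status = 'DONE' WHERE Id = @id", conn, tx))
        {
            cmd.Parameters.AddWithValue("@id", orderId);
            cmd.ExecuteNonQuery();
        }
    }
}

public class OrderService
{
    private readonly string _connectionString;
    private readonly OrderDao _dao = new OrderDao();

    public OrderService(string connectionString)
    {
        _connectionString = connectionString;
    }

    // The service owns the unit of work: connection, transaction, commit/rollback.
    public void ProcessOrder(int orderId)
    {
        using (var conn = new SqlConnection(_connectionString))
        {
            conn.Open();
            using (var tx = conn.BeginTransaction())
            {
                try
                {
                    _dao.MarkProcessed(conn, tx, orderId);
                    tx.Commit();
                }
                catch
                {
                    tx.Rollback();
                    throw;
                }
            }
        }
    }
}
```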
For database connections the rule should be: open as late as possible and close as early as possible.
So open the connection right before executing the query and close it right after, rather than sharing a single connection object across different methods.
There is a MySqlHelper class that should do all your connection pooling, opening, closing, disposing, etc. for you: "These take a connection string as an argument, and they fully support connection pooling." That only applies if you are using the Oracle-provided MySQL connector, though.
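If you go that route, usage looks roughly like this, assuming the Oracle MySql.Data connector (the table and column names are made up). Each call borrows a pooled connection internally and disposes it when done:

```csharp
using MySql.Data.MySqlClient;

public static class EmployeeQueries
{
    // MySqlHelper opens, uses, and disposes a pooled connection per call,
    // so there is no connection object to manage or share.
    public static long CountEmployees(string connectionString)
    {
        object result = MySqlHelper.ExecuteScalar(
            connectionString, "SELECT COUNT(*) FROM employee");
        return (long)result;
    }

    public static int Deactivate(string connectionString, int id)
    {
        return MySqlHelper.ExecuteNonQuery(
            connectionString,
            "UPDATE employee SET active = 0 WHERE id = @id",
            new MySqlParameter("@id", id));
    }
}
```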

How do I queue database access?

I have a GUI where different parts of the information shown is extracted from a database. In order for the GUI not to freeze up I've tried putting the database queries in BackgroundWorkers. Because these access the database asynchronously I get an exception telling me the database connection is already open and used by another.
Is it possible to create a queue for database access?
I've looked into Task and ContinueWith, but since I code against .NET Framework 3.5 this is not an option.
What is the DB engine you're using? Most modern databases are optimized for concurrent operations, so there's no need to queue anything.
The thing you're apparently doing wrong is reusing the same IDbConnection instance across different threads. That's a no-no: each thread has to have its own instance.
I think your problem is in the way you get a connection to the database. If you want to fire separate queries you could use separate connections for separate requests. If you enable connection pooling this does not add a lot of overhead.
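A sketch of that, assuming SQL Server and a WinForms-style GUI (the query and class names are invented): each BackgroundWorker opens its own pooled connection inside DoWork instead of sharing a single instance:

```csharp
using System.ComponentModel;
using System.Data.SqlClient;

public class CustomerLoader
{
    private readonly string _connectionString;

    public CustomerLoader(string connectionString)
    {
        _connectionString = connectionString;
    }

    public void LoadCountAsync()
    {
        var worker = new BackgroundWorker();
        worker.DoWork += (sender, e) =>
        {
            // One connection per worker: the pool hands each thread its own
            // physical connection, so nothing is shared across threads.
            using (var conn = new SqlConnection(_connectionString))
            using (var cmd = new SqlCommand("SELECT COUNT(*) FROM Customers", conn))
            {
                conn.Open();
                e.Result = (int)cmd.ExecuteScalar();
            }
        };
        worker.RunWorkerCompleted += (sender, e) =>
        {
            // Back on the UI thread: safe to update controls with e.Result here.
        };
        worker.RunWorkerAsync();
    }
}
```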
Try to use pooled objects. Also, per your description, you're trying to open a connection on an unclosed connection object.

C# DataSource Class and Thread Safety

What would be a good way to write a thread-safe DataSource class in C# on .NET 3.5? The class would connect to SQL Server, and every method would execute a stored procedure.
The code worked fine single-threaded: I had a singleton DataSource class with a private SqlConnection member, and each method opened and closed that connection. When running this with multiple threads, it causes trouble in certain scenarios where the connection has already been opened by another thread. What would be the best way to rewrite this class?
Note: by DataSource I don't mean any built-in .NET class, but a Model class that provides data to the controller.
Why don't you just use the built-in ADO.NET pooling: create and open the connection right before the operation and dispose of it as soon as possible - in every method.
The problem seems to come from the singleton design. That can still work; just make sure you do not store the connection as a field. Use only a local variable in each method to hold the connection. That is thread-safe by design.
And make it Exception-safe as well:
using (var conn = new SqlConnection(...))
{
    conn.Open();
    // call the SP; Dispose() closes the connection even if an exception is thrown
}
SQL Server already coordinates threads through very complex mechanisms. You don't need to do anything specific to achieve thread safety simply to execute a stored procedure.
You'd have to elaborate more about what your DataSource class should do. You don't need to implement any thread safety code if each of your Create/Read/Update/Delete methods do not alter any state.
UPDATE: In your case, I recommend just creating a new SqlConnection instance in every method of your DataSource class, because ADO.NET already handles the pooling for you.

Simple query regarding WCF service

I have a WCF service which has two methods exposed:
Note: The wcf service and sql server is deployed in same machine.
Sql server has one table called employee which maintains employee information.
Read() This method retrieves all employees from sql server.
Write() This method writes (add,update,delete) employee info in employee table into sql server.
Now I have developed a desktop based application through which any client can query, add,update and delete employee information by consuming a web service.
Question:
How can I handle the scenario where multiple clients want to update employee information at the same time? Does SQL Server handle this itself using database locks?
Please suggest the best approach!
Generally, in a disconnected environment optimistic concurrency with a rowversion/timestamp is the preferred approach. WCF does support distributed transactions, but that is a great way to introduce lengthy blocking into the system. Most ORM tools will support rowversion/timestamp out-of-the-box.
Of course, at the server you might want to use transactions (either connection-based or TransactionScope) to make individual repository methods "ACID", but I would try to avoid transactions on the wire as far as possible.
Re comments; sorry about that, I honestly didn't see those comments; sometimes stackoverflow doesn't make this easy if you get a lot of comments at once. There are two different concepts here; the waiting is a symptom of blocking, but if you have 100 clients updating the same record it is entirely appropriate to block during each transaction. To keep things simple: unless I can demonstrate a bottleneck (requiring extra work), I would start with a serializable transaction around the update operations (TransactionScope uses this by default). That way yes: you get appropriate blocking (ACID etc) for most scenarios.
However; the second issue is concurrency: if you get 100 updates for the same record, how do you know which to trust? Most systems will let the first update in, and discard the rest as they are operating on stale assumptions about the data. This is where the timestamp/rowversion come in. By enforcing "the timestamp/rowversion must match" on the UPDATE statement, you ensure that people can only update data that hasn't changed since they took their snapshot. For this purpose, it is common to keep the rowversion alongside any interesting data you are updating.
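A sketch of that check, assuming SQL Server and a rowversion column named RowVersion on the employee table (all names invented for the example): the UPDATE only succeeds if the row has not changed since it was read:

```csharp
using System.Data.SqlClient;

public class EmployeeRepository
{
    private readonly string _connectionString;

    public EmployeeRepository(string connectionString)
    {
        _connectionString = connectionString;
    }

    // Returns false when someone else updated the row first,
    // i.e. the rowversion the client read no longer matches.
    public bool UpdateName(int id, string newName, byte[] originalRowVersion)
    {
        using (var conn = new SqlConnection(_connectionString))
        using (var cmd = new SqlCommand(
            @"UPDATE employee SET Name = @name
              WHERE Id = @id AND RowVersion = @rowversion", conn))
        {
            cmd.Parameters.AddWithValue("@name", newName);
            cmd.Parameters.AddWithValue("@id", id);
            cmd.Parameters.AddWithValue("@rowversion", originalRowVersion);
            conn.Open();
            return cmd.ExecuteNonQuery() == 1; // 0 rows affected = stale snapshot
        }
    }
}
```

On a zero-row update, the service can tell the client its data is stale and have it re-read the record before retrying.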
Another alternative is to run the WCF service as a singleton (InstanceContextMode.Single), which means there is only ever one instance of it running. You could then keep a simple object in memory for update locking, and lock in your update method on that object. When update calls come in from other sessions, they will have to wait until the lock is released.
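A sketch of that locking approach (the service and method names are invented); note that it serializes all updates through one gate, trading throughput for simplicity:

```csharp
using System.ServiceModel;

[ServiceBehavior(InstanceContextMode = InstanceContextMode.Single,
                 ConcurrencyMode = ConcurrencyMode.Multiple)]
public class EmployeeService
{
    // One shared gate for the single service instance:
    // only one Write() executes at a time.
    private readonly object _updateLock = new object();

    public void Write(int employeeId, string name)
    {
        lock (_updateLock)
        {
            // Perform the add/update/delete against SQL Server here.
            // Calls from other sessions block on the lock until this finishes.
        }
    }
}
```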
Regards,
Steve
