Connection pooling in a small application - C#

I've got a few simple questions about connection pooling and best practices.
I'm planning and writing a small application which relies on a MySQL database. In this application I use modules and plug-ins which can create connections. The application has direct access to the MySQL database and it will probably be the only client connecting to the database.
Here are my first questions: Will connection pooling make sense? Is it irrelevant or should I disable it? What are your experiences?
On the other hand, my company develops another piece of software which has one MySQL database server and many clients. Every client can open multiple windows, each of which can have multiple active connections. There is a good chance that this software will use the basic concept of my new application. The clients connect directly to the database, so I guess it would make a lot of sense to write a server application which handles the pooling and organizes the connections, am I right? How much sense would it make to let every client use its own connection pool? We're talking about 1-50 clients with 1-10 connections each.
Do you think it's the best to write a small server application to handle the connection pooling?
I'm asking because I don't really know when connection pooling makes sense and when it doesn't, or how to handle it in small and medium-sized client applications. I'm looking for some input from your experiences. :) I hope the question is not too awkward. ^^
Greetings,
Simon
P.S.: It's a windows based application. Not a web service.

Connection pooling will give you extra performance; in fact, without it performance may be an issue even for a small application (depending on the number of calls, the amount of data, etc.).
Consider properly handling your connections in order to avoid 'max pool size reached' errors and timeouts. A good practice is to handle your connection like this:
using (SqlConnection conn = new SqlConnection(myConnectionString))
{
    conn.Open();
    doSomething(conn);
}
The using statement guarantees that the connection will be properly closed/disposed. Check this article, which provides some tips that apply to both MSSQL and MySQL.
Also consider the use of stored procedures. Hope this helps you get started.
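To make the pooling side concrete for MySQL, here is a minimal sketch using Connector/NET (the connection string values are placeholders; pooling is on by default, and the pool-size settings are just there to show where they go):

using System;
using MySql.Data.MySqlClient;

class PoolingDemo
{
    // Placeholder connection string; Pooling is enabled by default in
    // Connector/NET, and Min/Max Pool Size tune the pool for this string.
    const string ConnStr =
        "Server=localhost;Database=mydb;Uid=myuser;Pwd=mypassword;" +
        "Pooling=true;Min Pool Size=0;Max Pool Size=20;";

    static void Main()
    {
        // Dispose returns the connection to the pool instead of closing
        // the physical socket, so repeated opens are cheap.
        using (var conn = new MySqlConnection(ConnStr))
        {
            conn.Open();
            using (var cmd = new MySqlCommand("SELECT 1", conn))
            {
                Console.WriteLine(cmd.ExecuteScalar());
            }
        }
    }
}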

Related

SQL Connection Over Internet: good practice?

I'm working on a WPF/C# desktop application, and I need to connect to the database server over the internet at a static IP address, but in some cases an SQL connection exception occurs because of poor internet ping and latency.
I'm also aware of some of the security risks involved. I've already parametrized all queries and I connect with encryption.
So I started to wonder: is this good practice? And what can I do to increase security and performance?
No, exposing your SQL server over the internet is not a good idea. That doesn't stop you from doing it, but: I wouldn't recommend it, unless the data is freely available and public domain (so it doesn't matter if someone gets access to more than you expected), and is trivial to replace (so it doesn't matter if it gets damaged). And even then I'd probably suggest using a service tier and keeping your database server strictly on the "inside".
Re encryption: that prevents intermediaries from snooping, but it doesn't limit what the user can do. A malicious user with genuine access could simply connect up and do whatever they want, bypassing whatever rules and filters you have in place.
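To sketch the service-tier idea from the client side (assuming a hypothetical HTTPS service at api.example.com that runs the queries server-side), the desktop app would call the service instead of opening a raw SQL connection:

using System;
using System.Net.Http;
using System.Threading.Tasks;

class ProductClient
{
    // The desktop app only ever talks HTTPS to the service tier;
    // the SQL server and its credentials stay on the "inside".
    private static readonly HttpClient Http = new HttpClient
    {
        BaseAddress = new Uri("https://api.example.com/")  // placeholder host
    };

    public static async Task<string> GetProductNameAsync(int id)
    {
        // The service validates the request and runs the parametrized
        // query itself, so a malicious user can't issue arbitrary SQL.
        return await Http.GetStringAsync("products/" + id);
    }
}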

Best way to sync remote SQL Server database with local SQL Server Compact database?

I realise this is a much discussed topic but all the suggestions I see seem to involve direct access to the SQL Server which in our instance is not ideal.
Our scenario is a remote SQL Server database with (say) 100 tables. We are developing a lightweight desktop application which will use an SQL Server Compact database and sync a subset of (say) 20 tables with the remote server periodically.
I would like to have control over how the replication occurs, because speed is a major issue for us since the remote server is thousands of miles away.
Also I don't need to sync all the records in each table - only those relevant to each user.
I quite like the SQL Merge facility; however, it requires that the client be connected to the remote SQL Server. This is currently not possible, so we were thinking of interfacing with the remote server through a web service accessed from our application, or some other method.
Any suggestions welcome.
UPDATE
Just to clarify, internet connection will be intermittent, that's the main reason why we need to sync the two databases.
The fact that you are using a compact DB on the client puts some pretty heavy limitations on your available options in this scenario.
Given those limitations and the performance you require, you could consider implementing a service-based HTTP endpoint to keep the desired tables in sync. If your architecture allows for it, doing the sync asynchronously would boost performance significantly, but again, that may not be viable depending on your architecture.
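As a very rough sketch of that idea (the /sync endpoint, the .sdf path, and the payload format are all made up for illustration):

using System;
using System.Data.SqlServerCe;
using System.Net.Http;
using System.Threading.Tasks;

class TableSyncer
{
    private static readonly HttpClient Http = new HttpClient
    {
        BaseAddress = new Uri("https://example.com/")  // hypothetical sync service
    };

    // Pull the changes for one table asynchronously, then apply them
    // to the local SQL Server Compact file.
    public static async Task SyncTableAsync(string table, DateTime lastSync)
    {
        // Hypothetical endpoint returning the rows changed since lastSync.
        string payload = await Http.GetStringAsync(
            "sync/" + table + "?since=" + Uri.EscapeDataString(lastSync.ToString("o")));

        using (var conn = new SqlCeConnection("Data Source=local.sdf"))
        {
            conn.Open();
            ApplyChanges(conn, table, payload);  // upsert logic is app-specific
        }
    }

    private static void ApplyChanges(SqlCeConnection conn, string table, string payload)
    {
        // Parse the payload and insert/update the local rows here;
        // omitted because it depends entirely on your schema.
    }
}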
Something else to consider is using WebSockets rather than standard HTTP connections for a web service like the one mentioned above. That way you could keep the clients synced in real time, since WebSockets are true full-duplex connections. The major catch is that you would either have to ensure all clients are WebSocket-capable or provide a fall-back that emulates a WebSocket connection for clients that aren't up to par.
Not sure if this helps any.
You have the choice of either the Sync Framework (which requires more coding and has some other limitations) or Merge Replication; both work over HTTP/HTTPS. See this blog post for a comparison: http://blogs.msdn.com/b/sqlservercompact/archive/2009/11/09/merge-replication-vs-sync-services-for-compact.aspx
Can you not use the MS Sync Framework?
It's pretty much designed for your scenario, AFAIK.
A quick Google search turned up this tutorial:
http://social.technet.microsoft.com/wiki/contents/articles/2190.tutorial-synchronizing-sql-server-and-sql-server-compact-sync-framework.aspx

Why disconnect from a database?

Background info: I'm coding with C#, using Microsoft SQL Server for databases.
I didn't find much on Google on the subject, so I'm asking here: should I always close a connection to my database after performing a query?
I'm torn between two solutions (maybe better ones exist...):
either open the connection before querying, then close it right after the SQL query
or open the connection at the start of my application, and before each SQL query check if the connection is still up and reopen it if needed.
In the past I used the first solution, but I discovered that opening a new connection can take quite some time (especially over a VPN connection to my LAN opened through 3G) and that it slowed down my application. That's why I decided to go with the second solution (in that case, my connection should always be up, if we ignore time-outs) and noticed better performance.
Do I need to close the connection at the end of my application or can I forget about it?
Yes, you should close your connection after each SQL query. The database connection pool will handle the physical network connection, and keep it open for you. You say that you found that opening a connection can take some time - did you find that the application was really doing that multiple times?
(I hope your real application won't be talking directly to the database over 3G, btw... presumably this is just for development purposes...)
One important thing to remember is that there is a unique connection pool for each unique connection string you use... so always use the same connection string unless you need to connect to a different database (or have unique requirements).
Here is a good document on connection pooling with System.Data.SqlClient.SqlConnection.
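A small illustration of that point: two using blocks with an identical connection string will normally be served by the same pooled physical connection. SELECT @@SPID returns the server session ID, so seeing the same value twice shows the reuse:

using System;
using System.Data.SqlClient;

class PoolReuseDemo
{
    // Identical string both times, so both opens draw from the same pool.
    const string ConnStr = "Server=.;Database=MyDb;Integrated Security=true;";

    static void Main()
    {
        for (int i = 0; i < 2; i++)
        {
            using (var conn = new SqlConnection(ConnStr))
            {
                conn.Open();  // fast the second time: pooled connection reused
                using (var cmd = new SqlCommand("SELECT @@SPID", conn))
                {
                    // The same session ID on both iterations indicates the
                    // same physical connection came back from the pool.
                    Console.WriteLine(cmd.ExecuteScalar());
                }
            }
        }
    }
}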
This will heavily depend on how many clients you anticipate will need to connect to the database. Leaving the connection open could prevent another user from accessing the DB while they wait for an available connection.

Sql connection in a windows service

I have written a Windows Service which listens for data from a third party service, holds it in memory for a short time and periodically all the new data is flushed to the database.
I was initially opening a new connection each time I needed to flush the data and closing it again afterwards. (Every 5 seconds or so)
As the server seems to be getting hammered, I have changed that so a single connection is opened and reused for the life of the application.
Just wondering if this is a bad idea?
I usually do web stuff where the connection is open and closed over the life of a single request. What is the best practice for a windows service that needs to do the sort of operation I have described?
I was going to make a fault tolerant connection like this:
private SqlConnection _sqlConnection;

public SqlConnection SqlConnection
{
    get
    {
        if (_sqlConnection == null || _sqlConnection.State != ConnectionState.Open)
        {
            // Dispose any previous, faulted connection before replacing it,
            // then cache the new one so it is actually reused
            if (_sqlConnection != null)
            {
                _sqlConnection.Dispose();
            }
            _sqlConnection = new SqlConnection(_connectionString);
            _sqlConnection.Open();
        }
        return _sqlConnection;
    }
}
so that if for some reason the existing connection is closed or faulted, we would get a new open one.
Is that bad design for any reason?
If you are the single user of the database, hold onto the connection. If not, you can really rely on connection pooling to do that for you.
I personally would go for opening the connection every time. In .NET 2.0, a feature was added so that if you have an open connection to a SQL Server and the server gets restarted, your connection becomes invalid - and that is not something I can risk my service with. See my post from some years ago.
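If you do open per call, a hedged sketch of handling that restart scenario is to flush the pool and retry once when a pooled connection turns out to be dead (SqlConnection.ClearAllPools was added in .NET 2.0):

using System.Data.SqlClient;

static class Db
{
    // Open per call; if the pooled connection is dead (e.g. SQL Server
    // was restarted), flush the pools and retry the call once.
    public static void Execute(string connStr, string sql)
    {
        try
        {
            Run(connStr, sql);
        }
        catch (SqlException)
        {
            SqlConnection.ClearAllPools();  // added in .NET 2.0
            Run(connStr, sql);
        }
    }

    private static void Run(string connStr, string sql)
    {
        using (var conn = new SqlConnection(connStr))
        using (var cmd = new SqlCommand(sql, conn))
        {
            conn.Open();
            cmd.ExecuteNonQuery();
        }
    }
}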
Call me conservative, but I still think that leaving it up to the connection pool to manage the physical connections to the database is the better choice. So just open and close the connection normally, and leave it to the pool to decide what to do. I've done that in web services without any problems, and you will have more connections available to handle the load.
I would not try to maintain an open connection. There will be lots of edge cases where the connection becomes unusable, and your code for managing the connection and making sure the old duff connection is correctly disposed would have to be absolutely bullet-proof.
I recommend the more common connection use pattern of open, use, close/dispose. The code will be much easier to write and maintain. Be absolutely sure you are disposing of all command and connection objects once you're done with them. Monitor your app with a profiling tool, and keep a check on the number of open database connections at the server to make sure your code is working the way you intended.
How often you need to dump the data into the database (and therefore open/use/close database connections) depends on a number of factors such as how much data will be in-memory before being dumped, the capability of the database server to consume the data, and the risk of losing data if you've accepted it from the web service, but haven't written it to the database and your service or the server crashes.
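A minimal sketch of that open/use/close flush pattern, assuming a hypothetical Readings table and a string payload (adapt both to your schema):

using System;
using System.Collections.Concurrent;
using System.Data.SqlClient;
using System.Threading;

class PeriodicFlusher
{
    private readonly ConcurrentQueue<string> _buffer = new ConcurrentQueue<string>();
    private readonly string _connectionString;
    private readonly Timer _timer;

    public PeriodicFlusher(string connectionString)
    {
        _connectionString = connectionString;
        // Flush every five seconds, matching the cadence in the question.
        _timer = new Timer(_ => Flush(), null,
                           TimeSpan.FromSeconds(5), TimeSpan.FromSeconds(5));
    }

    public void Enqueue(string payload)
    {
        _buffer.Enqueue(payload);
    }

    private void Flush()
    {
        string payload;
        if (!_buffer.TryDequeue(out payload))
            return;  // nothing buffered this interval

        // One pooled connection per flush: open, use, close/dispose.
        using (var conn = new SqlConnection(_connectionString))
        {
            conn.Open();
            do
            {
                using (var cmd = new SqlCommand(
                    "INSERT INTO Readings (Payload) VALUES (@p)", conn))  // hypothetical table
                {
                    cmd.Parameters.AddWithValue("@p", payload);
                    cmd.ExecuteNonQuery();
                }
            } while (_buffer.TryDequeue(out payload));
        }
    }
}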
If your data is precious, you might want to consider having two processes. One process calls the web service and stores the received data securely in a message queue. Another process reads the messages from the queue and puts the data in the message in the database.
Handling the process this way means you can receive data whilst the database is temporarily down, and all the data will eventually be stored in the database.
Whilst this is a solid solution, it could just as easily be considered overkill, depending on your requirements.

What is the preferred way to handle this TCP connection in C#?

I have a server application (singleton, simple .NET console application) that talks to a GlobalCache GC-100-12 for the purpose of routing IR commands. Various .NET WinForm clients on the local network connect to my server application and send ASCII commands to it. The server application queues these ASCII commands and then sends them to the GC-100-12 via a TCP connection.
My question is, what is the best way to handle this connection from the server's point of view? I can think of two ways:
Create and Open a new TcpClient for each individual request. Close the TcpClient when the request is done.
Create and Open one TcpClient when the server starts and use a keep-alive (if necessary) to keep the connection open for the lifetime of the server object.
I ask this question because I wonder about the overhead of creating a new TcpClient for each request. Is it an expensive operation? Is this a bad practice?
Currently I am doing #1, and printing the results of each transmission to the console. Occasionally some connections time out and the command doesn't get routed, and I was wondering whether that was because of the overhead of creating a new TCP connection each time, or due to something else.
I can see #2 being more complicated because if the connection does drop it has to be recreated, and that will require a bit more code to handle that circumstance.
I'm looking for any general advice on this. I don't have a lot of experience working with the TcpClient class.
We had a similar case of opening a telnet session to an old PICK-based system. We found that the cost of opening the TCP connection each time a request came in was fairly expensive, so we decided to implement a no-op routine to keep the connection open. It is more complex, but as long as your endpoint is not trying to serve many, many clients, pinning a connection sounds like a viable solution.
You could also set it up with a timeout if you want to avoid keeping a connection open when there is no traffic: five minutes of no activity, then shut down the connection.
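A rough sketch of the pinned-connection approach (the no-op command string is a placeholder; substitute whatever harmless query your device accepts):

using System;
using System.Net.Sockets;
using System.Text;
using System.Threading;

class PersistentDeviceClient : IDisposable
{
    private readonly string _host;
    private readonly int _port;
    private readonly Timer _keepAlive;
    private TcpClient _client;

    public PersistentDeviceClient(string host, int port)
    {
        _host = host;
        _port = port;
        Connect();
        // Send a harmless no-op every five minutes so idle links stay up.
        _keepAlive = new Timer(_ => Send("noop\r"), null,  // placeholder command
                               TimeSpan.FromMinutes(5), TimeSpan.FromMinutes(5));
    }

    private void Connect()
    {
        _client = new TcpClient();
        _client.Connect(_host, _port);
    }

    public void Send(string command)
    {
        byte[] bytes = Encoding.ASCII.GetBytes(command);
        try
        {
            _client.GetStream().Write(bytes, 0, bytes.Length);
        }
        catch (Exception)  // SocketException/IOException: the link dropped
        {
            _client.Close();
            Connect();  // reconnect once and retry the write
            _client.GetStream().Write(bytes, 0, bytes.Length);
        }
    }

    public void Dispose()
    {
        _keepAlive.Dispose();
        _client.Close();
    }
}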
