SQL Connection Over Internet: good practice? - c#

I'm working on a WPF/C# desktop application, and I need to connect to the database server over the internet at a static IP address, but in some cases a SQL connection exception occurs because of bad ping and latency.
I'm also aware of some of the security risks this creates. I've already parameterized all queries and I connect with encryption.
So I started to wonder: is this good practice? And what can I do to improve security and performance?

No, exposing your SQL server over the internet is not a good idea. That doesn't stop you from doing it, but: I wouldn't recommend it, unless the data is freely available and public domain (so it doesn't matter if someone gets access to more than you expected), and is trivial to replace (so it doesn't matter if it gets damaged). And even then I'd probably suggest using a service tier and keeping your database server strictly on the "inside".
Re encryption: that prevents intermediaries from snooping, but it doesn't limit what the user can do. A malicious user with genuine access could simply connect up and do whatever they want, bypassing whatever rules and filters you have in place.
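To make the "service tier" suggestion concrete, here is a minimal sketch assuming classic ASP.NET Web API; the Orders table, columns, and connection string are placeholders. The desktop app calls this endpoint over HTTPS, and only this service, sitting inside the network, ever talks to SQL Server:

using System.Data.SqlClient;
using System.Web.Http;

public class OrdersController : ApiController
{
    // GET api/orders/42 - the client never talks to the database directly.
    public IHttpActionResult Get(int id)
    {
        using (var conn = new SqlConnection("<internal connection string>"))
        using (var cmd = new SqlCommand(
            "SELECT Status FROM Orders WHERE Id = @id", conn))
        {
            cmd.Parameters.AddWithValue("@id", id); // parameterized, as in the question
            conn.Open();
            object status = cmd.ExecuteScalar();
            return status == null ? (IHttpActionResult)NotFound() : Ok((string)status);
        }
    }
}

With this arrangement the rules live in the service, so even a malicious user with valid credentials can only do what the endpoints allow.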

Related

Connecting your application to a database in another PC through the internet

I am new to the idea of connecting my application to an online database, and by "online" I mean a database on another PC that I need to access over the internet.
I am not new to accessing a local database; in fact, I made a class that stores all the parameters that I need to connect to a database.
Can anyone help me? What do I need to configure in SQL Server and in my code to make it accessible over the internet? I hope someone can help me. Thanks!
In your comments (and question) you mention that you have a specific machine that needs to talk to a specific server. There are a few options:
1) Expose the SQL server directly to the internet and use the IP to connect. THIS IS A BAD IDEA... It opens you up to hacks, port scans, and generally bad things.
2) Use a VPN from one machine to the other and use an IP address within the VPN. As long as your VPN is correctly set up and secure, this negates the security problems in option 1.
3) Use a web service to expose the SQL server over the internet; require authentication in the web service. You can even tie it to a remote IP so that it only accepts calls from your first machine (see the sketch after this list). This is clean and tidy; it allows for expansion in the future (new machines, non-SQL data, other functions, etc.). However, it is the most complex option.
Personally, I would use option 3; it may take longer, but it is a good way to break apart the functionality and provides a way to expand in the future. However, I suspect that option 2 may be your best bet for what you are asking.
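As a rough illustration of option 3's "require authentication and tie it to a remote IP", here is a hedged Web API sketch; the header name, key, and address are invented placeholders, and reading the caller's IP this way assumes IIS hosting:

using System.Linq;
using System.Net;
using System.Net.Http;
using System.Web.Http.Controllers;
using System.Web.Http.Filters;

public class KnownClientAttribute : ActionFilterAttribute
{
    private const string ExpectedKey = "replace-with-a-real-secret";
    private const string AllowedIp = "203.0.113.10"; // your first machine's IP

    public override void OnActionExecuting(HttpActionContext actionContext)
    {
        var request = actionContext.Request;

        // Reject callers that don't present the shared API key...
        string key = request.Headers.Contains("X-Api-Key")
            ? request.Headers.GetValues("X-Api-Key").First()
            : null;

        // ...or that don't come from the expected address (IIS-hosted).
        var httpContext = System.Web.HttpContext.Current;
        string callerIp = httpContext == null ? null : httpContext.Request.UserHostAddress;

        if (key != ExpectedKey || callerIp != AllowedIp)
            actionContext.Response = request.CreateResponse(HttpStatusCode.Forbidden);
    }
}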

Best way to sync remote SQL Server database with local SQL Server Compact database?

I realise this is a much discussed topic but all the suggestions I see seem to involve direct access to the SQL Server which in our instance is not ideal.
Our scenario is a remote SQL Server database with (say) 100 tables. We are developing a lightweight desktop application which will use an SQL Server Compact database and sync a subset of (say) 20 tables with the remote server periodically.
I would like to have control over how the replication occurs, because speed is a major issue for us since the remote server is thousands of miles away.
Also, I don't need to sync all the records in each table - only those relevant to each user.
I quite like the SQL Merge facility; however, it requires that the client be connected to the remote SQL Server. This is currently not possible, and we were thinking of interfacing to the remote server through a web service accessed through our application, or some other method.
Any suggestions welcome.
UPDATE
Just to clarify, internet connection will be intermittent, that's the main reason why we need to sync the two databases.
The fact that you are using a compact db for the client puts some pretty heavy limitations on you for available options in this scenario.
Given the limitations and the performance requirements you desire, you could consider implementing a service-based http endpoint to keep the desired tables in sync. If your architecture allows for it, doing so asynchronously would boost performance significantly but again, it may not even be viable depending on your architecture.
Something else to consider is using web sockets rather than standard http connections for a web service like mentioned above. That way you could keep the clients synced real-time, since web sockets are true fully duplex real-time connections. The major catch with this is you would either have to ensure all clients are web-socket compliant or provide a fall-back to emulate a websocket connection with an emulation framework for clients that aren't up to par.
Not sure if this helps any.
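To make the service-based endpoint idea above more concrete, here is a rough sketch of a timestamp-based pull: the client asks a hypothetical HTTP endpoint for rows changed since the last sync and upserts them into the local SQL Server Compact file. The URL, table, and CSV wire format are all invented for illustration; real change tracking also needs a rowversion/timestamp column and tombstones for deletes.

using System;
using System.Data.SqlServerCe;
using System.Net.Http;
using System.Threading.Tasks;

class SyncClient
{
    // Pulls rows changed since lastSync and applies them locally.
    public async Task PullCustomersAsync(DateTime lastSync)
    {
        using (var http = new HttpClient())
        using (var conn = new SqlCeConnection("Data Source=local.sdf"))
        {
            // Hypothetical endpoint returning "Id,Name" lines for changed rows.
            string csv = await http.GetStringAsync(
                "https://example.com/sync/customers?since=" +
                Uri.EscapeDataString(lastSync.ToString("o")));

            conn.Open();
            foreach (string line in csv.Split(new[] { '\n' }, StringSplitOptions.RemoveEmptyEntries))
            {
                string[] parts = line.Split(',');

                // Upsert: try an update first; insert if no row matched.
                using (var update = new SqlCeCommand(
                    "UPDATE Customers SET Name = @name WHERE Id = @id", conn))
                {
                    update.Parameters.AddWithValue("@id", int.Parse(parts[0]));
                    update.Parameters.AddWithValue("@name", parts[1]);
                    if (update.ExecuteNonQuery() != 0) continue;
                }

                using (var insert = new SqlCeCommand(
                    "INSERT INTO Customers (Id, Name) VALUES (@id, @name)", conn))
                {
                    insert.Parameters.AddWithValue("@id", int.Parse(parts[0]));
                    insert.Parameters.AddWithValue("@name", parts[1]);
                    insert.ExecuteNonQuery();
                }
            }
        }
    }
}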
You have the choice of the Sync Framework (requires more coding and has some other limitations) or Merge Replication (both work over HTTP/HTTPS) - see this blog post for a comparison: http://blogs.msdn.com/b/sqlservercompact/archive/2009/11/09/merge-replication-vs-sync-services-for-compact.aspx
Can you not use the MS Sync framework?
Pretty much designed for your scenario AFAIK.
A quick Google search turned up this tutorial:
http://social.technet.microsoft.com/wiki/contents/articles/2190.tutorial-synchronizing-sql-server-and-sql-server-compact-sync-framework.aspx

Connection Pooling in a small application

I've got a few simple questions about connection pooling and best practices.
I'm planning and writing a small application which relies on a MySQL database. In this application I use modules and plug-ins which can create connections. The application has direct access to the MySQL database and it will probably be the only client connecting to the database.
Here are my first questions: Will connection pooling make sense? Is it irrelevant or should I disable it? What are your experiences?
On the other hand, at my company we develop another piece of software which has one MySQL database server and many clients. Every client can open multiple windows in which multiple connections can be active. There is a good chance that this software will use the basic concept of my new application. The clients connect directly to the database. So I guess it would make a lot of sense to write a server application which handles the pooling and organizes the connections, am I right? How much sense would it make to let every client use its own connection pool? We're talking about 1-50 clients with 1-10 connections each.
Do you think it's best to write a small server application to handle the connection pooling?
I'm asking because I don't really know when connection pooling makes sense and when it doesn't, and how to handle it in small and medium-sized client applications. I'm looking for some input from your experiences. :) I hope the question is not too awkward. ^^
Greetings,
Simon
P.S.: It's a Windows-based application, not a web service.
Connection pooling will give you extra performance; in fact, without it, performance may be an issue even for a small application (it depends on the number of calls, the amount of data, etc.).
Consider properly handling your connections in order to avoid 'max pool size reached' errors and timeouts. A good practice is to handle your connection like this:
using System.Data.SqlClient;

// The using block guarantees the connection is closed and returned
// to the pool even if doSomething throws.
using (SqlConnection conn = new SqlConnection(myConnectionString))
{
    conn.Open();
    doSomething(conn);
}
The using statement guarantees that the connection will be properly closed/disposed. Check this article, which provides some tips that apply to both MSSQL and MySQL.
Consider also the use of stored procedures. Hope this helps you get started.
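Since this question is actually about MySQL: with Connector/NET, pooling is enabled by default and tuned from the connection string, so for 1-50 clients with a handful of connections each there is usually no need to write a dedicated pooling server. A small sketch (option names as accepted by Connector/NET; the credentials are placeholders):

using MySql.Data.MySqlClient;

class PoolingExample
{
    static void Main()
    {
        // Pooling is on by default; the options below just make it explicit.
        string cs = "Server=localhost;Database=mydb;Uid=me;Pwd=secret;" +
                    "Pooling=true;Min Pool Size=0;Max Pool Size=10;";

        using (var conn = new MySqlConnection(cs))
        {
            conn.Open(); // reuses a pooled connection when one is available
        }                // Dispose returns the connection to the pool
    }
}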

Scaling an ASP.NET application

This is a very broad question, but hopefully I can get useful tips. Currently I have an ASP.NET application that runs on a single server. I now need to scale out to accommodate increasing customer loads. So my plan is to:
1) Scale out the ASP.NET and web component onto five servers.
2) Move the database onto a farm.
I don't believe I will have an issue with the database, as it's just a single IP address as far as the application is concerned. However, I am now concerned about the ASP.NET and web tier. Some issues I am already worried about:
Is the easiest model to implement just a load balancer that will farm out requests to each of the five servers in a round-robin fashion?
Is there any problem with HTTPS and SSL connections, now that they can terminate on different physical servers each time a request is made? (for example, performance?)
Is there any concern with regards to session maintenance (logon) via cookies? My guess is no, but I can't quite explain why... ;-)
Is there any concern with session data itself (stored server side)? Obviously I will need to replicate session state between servers, or somehow force a request to only go to a single server. Either way, I see a problem here...
As David notes, much of this question is really more of an Administrative thing, and may be useful on ServerFault. The link he posts has good info to pore over.
For your Session questions: you will want to look at the Session State Service (it comes with IIS as a separate service that maintains state in common between multiple servers) and/or storing ASP.NET session state in a SQL database. Both are options you can find at David Stratton's link, I'm sure.
Largely speaking, once you set up your out-of-process session state, it is otherwise transparent. It does require that you store Serializable objects in Session, though.
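For example, an object destined for out-of-process session state just needs to be marked [Serializable] (LoginInfo is a made-up illustration):

using System;

[Serializable]
public class LoginInfo
{
    public int UserId { get; set; }
    public DateTime LoggedOnAt { get; set; }
}

// In a page or controller:
// Session["login"] = new LoginInfo { UserId = 42, LoggedOnAt = DateTime.UtcNow };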
Round-Robin DNS is the simplest way to load-balance in this situation, yes. It does not take into account the actual load on each server, and also does not have any provision for when one server may be down for maintenance; anyone who got that particular IP would see the site as being 'down', even though four other servers may be running.
Load balancing and handling SSL connections might both benefit from a reverse proxy type of situation, where the proxy handles all the connections coming in, but all it's doing is encryption and balancing the actual request load to the web servers. (These issues are more on the administration end, of course, but...)
Cookies will not be a problem provided all the web servers are advertising themselves as being the same web site (via the host headers, etc). Each server will gladly accept the cookies set by any other server using the same domain name, without knowing or caring what server sent it; It's based on the host name of the server the web browser is connecting to when it gets a cookie value.
That's a pretty broad question and hard to answer fully in a forum such as this. I'm not even sure if the question belongs here, or if it should be at serverfault.com. However....
Microsoft offers plenty of guidance on the subject. The first result for "scaling asp.net applications" on Bing is this:
http://msdn.microsoft.com/en-us/magazine/cc500561.aspx
I just want to bring up areas you should be concerned about with the database.
First off, most data models built with only a single database server in mind require massive changes in order to support a database farm in multi-master mode.
If you used auto-incrementing integers for your primary keys (which most people do) then you're basically screwed out of the gate. There are a couple of ways to temporarily mitigate this, but even those are going to require a lot of guesswork and have a high potential for collision. One mitigation involves setting the seed value on each server to a sufficiently high number to reduce the likelihood of a collision... This will usually work, for a while.
Of course you have to figure out how to partition users across servers...
My point is that this area shouldn't be brushed off lightly and is almost always more difficult to accomplish than simply scaling "up" the database server by putting it on bigger hardware.
If you purposely built the data model with a multi-master role in mind then kindly ignore. ;)
Regarding sessions: don't trust "sticky" sessions; sticky is not a guarantee. Quite frankly, our stuff is usually deployed to server farms, so we completely disable session state from the get-go. Once you move to a farm there is almost no reason to use session state, as the data has to be retrieved from the state server, deserialized, serialized, and stored back to the state server on every single page load.
Considering the DB and network traffic generated by session state alone, and that its original purpose was to reduce DB and network traffic, you'll understand how it doesn't buy you anything anymore.
I have seen some issues related to round-robin HTTP/HTTPS sessions. We used to use in-process sessions and told the load balancers to make the sessions sticky (I think they use a cookie for this).
That let us avoid SQL sessions, but it meant that when we switched from HTTP to HTTPS, our F5 boxes couldn't keep the stickiness. We ended up changing to SQL sessions.
You could investigate pushing the encryption up to the load balancer. I remember that was a possible solution for our problem, but alas, not one we investigated.
The session database on a SQL server can easily be scaled out with few code and configuration changes. You can point ASP.NET sessions at a session database, and irrespective of which web server in your farm serves the request, your session-ID-based SQL state server mapping works flawlessly. This is probably one of the best ways to scale out ASP.NET session state using SQL Server. For more information, read the link True Scaleout model for session state.

How to check if internet connectivity is available or not in C#

I have developed a piece of software for a company. For licensing purposes I am using a remote database to allow/disallow usage of the software. This check is applied every time the user logs into the software. If the internet connection does not exist or the query to the remote database fails, the user gets an error, cannot log into the software, and the error shows the remote database's address (which I don't want him to see, if he reads the error carefully).
What I want to know is: is there any way of doing the same procedure, but if the remote database query fails or an internet connection is not available, it bypasses the check for the time being, and on the next login attempt the same procedure is followed? That way my client would not find out about this licensing scheme.
How do I check internet connectivity (LAN, WiFi, dial-up or whatever the user is using) before sending a query to the remote database?
Proposed methods:
Ping my remote database server's IP.
This question by Michel
The results I achieved from Michel's question were not a stable solution.
Why not just try to perform a very cheap query on the database? Indeed, you could create a stored procedure for exactly this purpose - it might even give some version information back, etc.
After all, the important thing isn't whether a ping works, or whether the client can go to other machines: the important thing is whether you can talk to the database or not. So test exactly that.
As for what error message is presented to the user - surely that's under your control, so make sure you give appropriate information.
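A minimal sketch of that advice, combined with the "bypass on failure" behaviour the question asks for; the connection string is a placeholder, and a real check would call a stored procedure rather than SELECT 1:

using System.Data.SqlClient;

static class LicenseCheck
{
    // Returns true if the license server answered; on any connection
    // failure we silently skip the check until the next login attempt.
    public static bool TryCheck(string connectionString, out bool licensed)
    {
        licensed = true; // default: allow when the check can't run
        try
        {
            using (var conn = new SqlConnection(connectionString))
            using (var cmd = new SqlCommand("SELECT 1", conn))
            {
                conn.Open();
                cmd.CommandTimeout = 5; // fail fast on a bad link
                cmd.ExecuteScalar();
                // ...perform the real license query here and set 'licensed'...
                return true;
            }
        }
        catch (SqlException)
        {
            return false; // unreachable: bypass for now, show no error
        }
    }
}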
Try this:
Check Internet Connection
I'm not sure whether trying to open a connection to your server and catching an exception without visible feedback will work in C#, but are you sure you want to use this method to deal with licensing? It strikes me that not only is it very easy to discover (for example, a personal firewall will flag the connection), it's also very easy to defeat.
Additionally, what happens if the user's computer isn't connected to the internet? Unlikely, I know, but it can happen, and your licensing scheme is defeated with no conscious effort on the user's part.
You can also try this:
C# - Check internet connection
