I'm developing a C# application that stores its data in Azure SQL Database.
As you probably know, Azure SQL Database lives somewhere on the Internet, not on a LAN (though this question is also relevant for a reliable network like a LAN).
I've noticed that from time to time I get errors like "Connection is closed" (or other network errors). It's really easy to simulate this with Clumsy. The cause of these errors is bad network conditions.
So, my first idea for solving this was "try again": when I get such an error, I simply retry, and then it works. Like magic.
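A minimal sketch of that naive retry wrapper (the helper name and retry count are my own, not part of the question):

using System;
using System.Data.SqlClient;
using System.Threading;

static class NaiveRetry
{
    // Retries an action a few times on SqlException.
    // Fine for idempotent operations; dangerous for writes, as explained below.
    public static T Execute<T>(Func<T> action, int maxAttempts = 3)
    {
        for (int attempt = 1; ; attempt++)
        {
            try
            {
                return action();
            }
            catch (SqlException) when (attempt < maxAttempts)
            {
                // Transient network error: back off a little and try again.
                Thread.Sleep(500 * attempt);
            }
        }
    }
}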
This may solve the problem, but it opens up another kind of problem: not every situation is safe to retry. I'll explain.
I'll separate the scenarios into two types:
Retrying can't do any damage - operations like SELECT or DELETE. Retrying gives the same expected result, so for this type my solution works fine!
INSERT or UPDATE - retrying can corrupt the data.
I'll focus on type 2. For example, let's say I have:
A users table, with columns ID, UserName, Credits.
A stored procedure that makes a user (by user ID) pay some of his credits.
The "Pay" stored procedure is:
UPDATE tblUsers SET [Credits] -= @requestedCredits WHERE ID = @ID
Calling the SP is the tricky part:
If it works without a problem, we're fine.
If it fails, we don't know whether the operation was applied on the DB or not. Retrying here can mean the user pays twice!
So the "retry" strategy is not an option here.
Solutions I've thought of:
I thought of solving this by adding a "VersionID" column to each row. My SP is now:
UPDATE tblUsers SET [Credits] -= @requestedCredits, VersionID = NEWID() WHERE ID = @ID AND VersionID = @OldVersionId
Before making the user Pay(), I read the VersionID (a random GUID). If a network failure occurs during the payment and this GUID hasn't changed afterwards, I retry (proof that the data wasn't changed on the DB). If the VersionID has changed, the user has already paid for the service.
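As a sketch in C#, the flow looks like this (ReadVersion() and Pay() are hypothetical helpers wrapping the SELECT and the UPDATE above):

// Optimistic retry based on the VersionID column.
Guid oldVersion = ReadVersion(userId);   // SELECT VersionID FROM tblUsers WHERE ID = @ID

try
{
    Pay(userId, requestedCredits, oldVersion);   // the UPDATE ... AND VersionID = @OldVersionId
}
catch (SqlException)
{
    Guid current = ReadVersion(userId);
    if (current == oldVersion)
    {
        // The row provably didn't change, so the UPDATE never applied: safe to retry.
        Pay(userId, requestedCredits, oldVersion);
    }
    // Otherwise the row changed, so (on a single machine) the payment went through.
}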
The problem is that when I run multiple machines at the same time, this solution breaks down: another instance may have run Pay() against the same VersionID, and I would wrongly conclude that the executed change was my own.
What to do?
It sounds like you are making SQL queries from a local/on-premises/remote machine (i.e. outside Azure) to a SQL Azure database.
Some possible mechanisms for dealing with this are:
Azure hosted data access layer with API
Consider creating a thin data access layer API hosted on Azure WebApp or VM to be called from the remote machine. This API service can interact with SQL Azure reliably.
SQL is more sensitive to timeout and network issues than, say, an HTTP endpoint, especially if your queries involve transferring large amounts of data.
Configure an increased timeout
The question doesn't specify the database access mechanism used by the C# application, but many data access libraries and functions allow you to specify an increased timeout for the connection.
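For example, with plain ADO.NET (assuming SqlClient; other libraries have equivalent settings, and the server name and credentials here are placeholders), both the connection and command timeouts can be raised:

using System.Data.SqlClient;

// "Connect Timeout" covers establishing the connection;
// CommandTimeout covers executing an individual command. Both are in seconds.
var connectionString =
    "Server=tcp:myserver.database.windows.net;Database=mydb;" +
    "User ID=myuser;Password=...;Connect Timeout=60;";

using (var connection = new SqlConnection(connectionString))
{
    connection.Open();
    using (var command = new SqlCommand("SELECT ...", connection))
    {
        command.CommandTimeout = 120;   // default is 30 seconds
        // execute the query...
    }
}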
Virtual Private Network
Azure allows you to create a site-to-site or point-to-site VPN with better network connectivity. However, this is the least preferred mechanism.
You never blindly retry. In case of an error, you read the current state, re-apply the logic, and then write the new state. What 'apply the logic' means will differ from case to case: present the user with the form again, refresh a web page, run a method in your business logic, anything really.
The gist of it is that you can never simply retry the operation without first reloading the persisted state. The only truth is what's in the DB, and the error is a big warning that your cached state is stale.
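A sketch of that pattern for the credits example from the question (the method names are illustrative):

try
{
    Pay(userId, requestedCredits);
}
catch (SqlException)
{
    // The cached state is now stale; the DB is the only truth. Reload it.
    var user = LoadUser(userId);                              // read current state
    if (BusinessLogic.ShouldCharge(user, requestedCredits))   // re-apply the logic
    {
        Pay(user.Id, requestedCredits);                       // write the new state
    }
    else
    {
        // Surface the current state back to the user (refresh the form/page).
    }
}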
Related
I have a WPF application that:
Takes user input, stores it into a database
Reads from a database and displays it on the screen
Currently just uses SqlConnection to execute commands and queries against a SQL Server database
When deployed, this application will have multiple users within the network who should be able to connect to it and read/write as well. Of course, the database is access controlled, and the end users don't have access to the SQL Server instance. The only ways I can think of connecting are:
Using a generic account that has access to the database and then including that in the connection string.
Creating a REST API to pass requests to the database, though I'm a bit unsure on the details.
What would be the best way to go about this?
A REST API would add a level of complexity and additional infrastructure requirements to your application. It would also add the opportunity to use the application outside your network, so that may be a plus; however, if that's not the anticipated use case, it's probably overkill.
Also, REST would still need an account to access the database, so it's not really better than your first idea. Depending on the WPF part, you may also have to change how the data is accessed (for example, using web service clients instead of EF).
Perhaps you can add your users to the database and give them limited privileges to access only selected tables, views, or stored procedures. This gives you fine-grained control of who does what at the database level; however, it requires a bit (or a lot) of work, depending on the number of users.
So your first idea is the easiest one, and it can probably be expanded later into separate database accounts for your users, while REST requires additional work setting up the web server, etc.
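For the generic-account route, the connection string simply carries SQL authentication for that one least-privilege login (values here are placeholders; keep the string in a protected or encrypted config section rather than hard-coded):

using System.Data.SqlClient;

// One shared, least-privilege SQL login for the whole application.
var connectionString =
    "Server=myserver;Database=AppDb;User ID=app_user;Password=...;";

using (var connection = new SqlConnection(connectionString))
{
    connection.Open();
    // All queries run with app_user's limited permissions.
}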
The scenario is that our client owns and manages a system (we wrote it) hosted at their client's premises. Their client is contractually restricted from changing any data in the database behind the system, but they could change it if they chose, because they have full admin rights (the server was procured by them and is hosted on their premises).
The requirement is to get notified if they change any data. For now, please ignore deleting data; this discussion is about amendments to data in tables.
We are using LINQ to SQL and have overridden the data context so that, for each read of the data, we compare a hash of the row's data against a stored hash, previously computed during insert/update and held on each row in the table.
We are concerned about scalability, so I would like to know if anyone has any other ideas. We are trying to get notified of data changes made in SSMS, queries run directly against the DB, etc. Also, if someone were to stop our (Windows) service, upon startup we would need to know that a row had been changed. Any thoughts?
EDIT: Let me clarify, as I could have been clearer. We are not necessarily trying to stop changes from being made (this is impossible, as they have full access) but rather to get notified if they change the data.
The answer is simple: to prevent the client from directly manipulating the data, store it out of their reach in a Windows Azure or Amazon EC2 instance. The most they will be able to do is get the connection string, which will then connect them as a limited-rights user.
Also, if someone was to stop our service (Windows service), upon startup we would need to know a row had been changed.
You can create triggers that write whatever info you want to an audit table; you can then inspect the audit table to determine which changes were made by your application and which were made directly by the client. Auditing database changes is a well-known problem that has been solved many times before; there is plenty of information out there about it.
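As a sketch of the trigger idea (the table, columns, and trigger name are invented for illustration), the audit trigger is plain T-SQL and can be deployed from C# like any other command:

using System.Data.SqlClient;

// Copies a record of every UPDATE into an audit table, including who made it.
const string createTrigger = @"
CREATE TRIGGER trg_Orders_Audit
ON Orders
AFTER UPDATE
AS
BEGIN
    INSERT INTO Orders_Audit (OrderId, ChangedAt, ChangedBy)
    SELECT i.OrderId, SYSUTCDATETIME(), SUSER_SNAME()
    FROM inserted AS i;
END";

using (var connection = new SqlConnection("...your connection string..."))
using (var command = new SqlCommand(createTrigger, connection))
{
    connection.Open();
    command.ExecuteNonQuery();
}

Bear in mind that a client with full admin rights can disable or alter the trigger, so this detects casual changes rather than determined tampering.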
for each read of the data, we compare a hash of the rows data against a stored hash
As you can probably guess, this is painfully slow and not scalable.
We have an existing large application containing a lot of data. We'd like to use it as a data source for various internally written C# web applications, so we don't accumulate more redundant data.
The data we are looking at doesn't change much, so caching would work fine most of the time. We are therefore writing a C# web service over the data, to be reused by those internal applications.
However, roughly once a month the Oracle database source is unavailable.
What is the best way to handle this in the web service, so that the other applications relying on that data aren't disrupted as well?
Set up replication or failover partners? Honestly, this doesn't seem like a job for more code; it sounds like a job for more infrastructure. I know Oracle licenses are expensive, but so is paying developers to work around unavailability.
If you simply had to solve it with code, then the web services should simply retain and return their cached data if any regularly-scheduled DB query fails with a timeout or connection failed-type message. The cached data should be kept as long as necessary in this circumstance, until a call to refresh that data succeeds. If there is no cached data, you can either swallow the error and return nothing, or return an error stating the data is unavailable from both places.
The solution was to use a secondary cache which doesn't expire.
The secondary cache is updated with the latest values whenever the first (shorter-lived) cache is successfully refreshed from the database. If the database query fails and the first cache has expired, the first cache is repopulated from the secondary cache, so there is always a fallback.
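A minimal sketch of that two-tier arrangement (the type and names are illustrative, and it's not thread-safe as written):

using System;

public class TwoTierCache<T>
{
    private readonly TimeSpan _ttl;          // lifetime of the first cache
    private readonly Func<T> _loadFromDb;    // the (possibly failing) DB query
    private T _primary;
    private DateTime _primaryLoadedAt = DateTime.MinValue;
    private T _secondary;                    // never expires
    private bool _hasSecondary;

    public TwoTierCache(TimeSpan ttl, Func<T> loadFromDb)
    {
        _ttl = ttl;
        _loadFromDb = loadFromDb;
    }

    public T Get()
    {
        if (DateTime.UtcNow - _primaryLoadedAt < _ttl)
            return _primary;                 // first cache still fresh

        try
        {
            _primary = _loadFromDb();        // refresh from the database
            _secondary = _primary;           // and refresh the permanent copy
            _hasSecondary = true;
        }
        catch (Exception) when (_hasSecondary)
        {
            _primary = _secondary;           // DB unavailable: fall back
        }

        _primaryLoadedAt = DateTime.UtcNow;
        return _primary;
    }
}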
Greetings!
I need to deploy a compact database with an application I am working on. The database acts as a cache for data the app has already seen, and that data will never change, so the cached values will never become outdated. I've chosen SQLite, and I'm writing in C#.
I'd like to protect the database files so they cannot be easily accessed or edited by the user, keeping access to my application only. One option is password protection, which is fine except that with tools like Reflector one could easily view a near-original version of the source, check how the passwords are generated per file, and replicate this.
Are there any suggestions on how to achieve this result or something close? Have people done something like this in the past?
Thanks!
Security by obscurity.
If your apps can decrypt it, then your user can do it too.
If you want to keep it secure, you'll have to keep it to yourself. Your best bet is to store the database on a server and make it available via a web service. Perform access control checks on your own server so that the application can only access the parts of the database it needs to see.
I don't have a clear-cut answer for you (obfuscate your code for release deployment, make the password obscenely long), as the golden rule stands: if they have physical access to the executable (substitute machine/car/door), they can get in if they want to (and have the skills).
All you can do is make things difficult for them.
This area is not my forte, but one thing I can suggest is to think about what data you are actually sending and whether you can avoid transmitting the more sensitive data to the client in the first place.
If your concern is sending things like ID numbers or account numbers to the client, then perhaps you could translate those values into client-only versions that are meaningless outside your application. Your server could keep a table containing the translation between the real values and the client-only values.
Let's say you have this table stored in your server's database (not the client database!)
RealAccountNumber    ClientOnlyAccountNumber
981723               ABC123
129847               BCD234
923857               CDE345
...
So the client only sees the account numbers in the ClientOnlyAccountNumber column, and when a client sends a request to the server for an action to be performed on account "ABC123", the server knows to translate that into account number 981723.
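Server-side, the translation is then a single lookup before the real operation runs (a sketch; the table and column names are invented):

using System;
using System.Data.SqlClient;

// Resolves a client-only alias to the real account number. The mapping
// table lives only in the server's database and never reaches the client.
static long? ResolveAccount(SqlConnection connection, string clientAlias)
{
    using (var command = new SqlCommand(
        "SELECT RealAccountNumber FROM AccountAliases " +
        "WHERE ClientOnlyAccountNumber = @alias", connection))
    {
        command.Parameters.AddWithValue("@alias", clientAlias);
        object result = command.ExecuteScalar();
        return result == null ? (long?)null : Convert.ToInt64(result);
    }
}

// ResolveAccount(conn, "ABC123") would return 981723 for the table above.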
I'm using Sync Services in a C# application. When my client syncs after a long period offline, they are told that the tracking info is gone and that they must re-initialize the database.
I can re-initialize, but what if the client has data that still needs to be sent to the server? In that case it's going to be lost. Is there any graceful solution to this problem?
If you get this error, you can change your synchronization type in code to upload-only, then resync.
Then, when that succeeds, drop your local table and download again as part of your reinitialization.
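In Sync Services for ADO.NET that looks roughly like this (a sketch from memory: I'm assuming a designer-generated SyncAgent subclass and a sync table named "Orders"; adjust to your own configuration):

using Microsoft.Synchronization.Data;

var agent = new MySyncAgent();   // your generated SyncAgent subclass

// 1. Push pending local changes up without pulling anything down.
agent.Configuration.SyncTables["Orders"].SyncDirection = SyncDirection.UploadOnly;
agent.Synchronize();

// 2. Reinitialize: drop and recreate the local table with a clean download.
agent.Configuration.SyncTables["Orders"].SyncDirection = SyncDirection.DownloadOnly;
agent.Configuration.SyncTables["Orders"].CreationOption =
    TableCreationOption.DropExistingOrCreateNewTable;
agent.Synchronize();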
You need to consider how long the server stores changes for. My rule of thumb is at least double the expected disconnect time.
Shout if you need more on this.