I'm using Sync Services in a C# application. When my client syncs after a long wait, they are told that tracking info is gone and to re-init the database.
I can re-init, but what if the client has data that needs to be sent to the server? In this case, it's going to be lost. Is there any graceful solution to this problem?
If you get this error, you can change your synchronization type in code to upload only, then resync.
Then, when successful, drop your local table and download again, following your reinitialization.
You need to consider the time that the server is storing changes for. My rule of thumb is at least double the expected disconnect time.
Shout if you need more on this.
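To make the order of operations concrete, here is a minimal sketch. The four delegates are hypothetical stand-ins for whatever your sync layer actually exposes (Sync Services would do the real work); the point is only that the upload must complete before the local table is dropped:

```csharp
using System;

// Hypothetical recovery helper: the four delegates stand in for your
// sync layer's real calls (switch the table to upload-only, run a sync,
// drop the local table, re-initialize and download).
static class LostTrackingRecovery
{
    public static void Recover(
        Action switchToUploadOnly,
        Action sync,
        Action dropLocalTable,
        Action reinitializeAndDownload)
    {
        // 1. Push local changes up before touching the local store,
        //    so nothing the client holds is lost.
        switchToUploadOnly();
        sync();

        // 2. Only after the upload has succeeded, rebuild the local copy.
        dropLocalTable();
        reinitializeAndDownload();
    }
}
```

If the upload itself fails, stop there and leave the local table alone; dropping it first is exactly how the pending client data gets lost.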
I have an application with one DB which is used by many users. Whenever one user makes changes, we save the changes to the database.
Now, I need to notify other logged-in users about this change. How can this be done?
I'm thinking that when the application successfully saves or updates data in the database, it will send a notification to the connected clients with the updated or added record.
I'm using C# and SQL Server database.
Your immediate options are push-based notifications with something like a message bus, or polling loops on known ids.
Message buses operate on publish-subscribe models, which work well for Windows applications. Have a look at MassTransit or MSMQ as a starting point; there are plenty of options out there. They can also be brought into web apps with something like SignalR, which essentially wraps a polling loop on the client.
Polling-based options typically work on a timer and do quick timestamp or version-number checks against the database, reloading a record when a difference is found.
Push-based options are more efficient, but they only notify of changes between participating applications (client to client): they don't detect changes made by applications that don't publish, nor changes made directly to the database.
Polling-based options cover all changes, but they are generally slower and require a schema that carries version information to work effectively.
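The version-check idea above can be sketched without a real database; the `Record` class below is an in-memory stand-in for a table row carrying a version column (in SQL Server this would typically be a `rowversion` column):

```csharp
using System;
using System.Collections.Generic;

// In-memory stand-in for a table row that carries a version column.
class Record
{
    public int Id;
    public string Data = "";
    public long Version;
}

class ChangePoller
{
    readonly Dictionary<int, long> _lastSeen = new Dictionary<int, long>();

    // Returns the ids whose version changed since the last poll.
    // In a real app this runs on a timer against the database.
    public List<int> Poll(IEnumerable<Record> currentRows)
    {
        var changed = new List<int>();
        foreach (var row in currentRows)
        {
            if (!_lastSeen.TryGetValue(row.Id, out var v) || v != row.Version)
            {
                changed.Add(row.Id);          // new or modified since last poll
                _lastSeen[row.Id] = row.Version;
            }
        }
        return changed;
    }
}
```

Each id the poll returns is then reloaded in full; everything else is skipped, which is what keeps the per-tick query cheap.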
I'm currently working on a requirement that is to "replace the previously developed Polling mechanism for change notifications of database".
Let me elaborate a little:
We have an Oracle database where we have put triggers on a table to get notified of any changes. Using them, we were grabbing the changed data and converting it into XML/JSON, which becomes the request body of a Web API POST operation against another database.
The new requirement is to skip the polling mechanism and come up with something like "rather than we call the database for notifications, it calls us every time it gets updated".
I did a little googling, and everyone suggests the same best approach:
Database Change Notifications. Here I need to grant permissions in Oracle and then create a .NET application where I can write a callback function for further processing. Up to here I'm good, but my question is:
Does the .NET application that communicates with the database have to be a web application that is always online? Can I create a console application to get notified, and if so, how will the database contact my application when there is a change? What exactly happens internally when the database notifies my application of a change?
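For reference, the trigger-to-Web-API pipeline described above can be sketched roughly as follows. The row shape, the endpoint URL, and the class names are placeholders invented for illustration, not names from the real system:

```csharp
using System;
using System.Net.Http;
using System.Text;
using System.Text.Json;
using System.Threading.Tasks;

// Sketch of the existing pipeline: a changed row is serialized to JSON
// and POSTed to the second database's Web API. All names are placeholders.
static class ChangeForwarder
{
    // Build the JSON request body for one changed column of one row.
    public static string BuildBody(int id, string column, object newValue) =>
        JsonSerializer.Serialize(new { id, column, newValue });

    // POST the body to the (hypothetical) Web API endpoint.
    public static async Task PostAsync(HttpClient http, string endpoint, string body)
    {
        using var content = new StringContent(body, Encoding.UTF8, "application/json");
        var response = await http.PostAsync(endpoint, content);
        response.EnsureSuccessStatusCode();
    }
}
```

The new requirement would replace whatever drives `BuildBody` (a poll of the trigger output) with a callback the database invokes, while the POST side stays the same.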
I'm developing a C# application that stores data in an Azure SQL Database.
As you probably know, Azure SQL Database lives somewhere on the Internet, not on a LAN (though this question is also relevant for a reliable network like a LAN).
I've noticed that from time to time I get errors like "Connection is closed" (or other network errors). It's really easy to simulate this with Clumsy. The reason for these errors is bad network conditions.
So my first idea to solve this was "try again": when I get this error, I simply retry, and then it works. Like magic.
This may solve the problem, but it opens up another kind of problem: not every situation is safe to retry. I'll explain:
I'll separate the scenarios into two types:
Retry can't do any damage: operations like SELECT or DELETE, where retrying has the same expected result. With this type of problem, my solution works fine!
INSERT or UPDATE: retrying can corrupt the data.
I'll focus on point 2. For example, let's say I have:
A users table. Columns in this table: ID, UserName, Credits.
A stored procedure that makes a user (identified by user ID) pay some of his credits.
The "Pay" Stored Procedure is:
UPDATE tblUsers SET [Credits] -= @requestedCredits WHERE ID = @ID
Calling the SP is where it gets tricky:
If it works without a problem, we're fine.
If it fails, we don't know whether the operation was applied on the DB or not. Retrying here can make the user pay twice!
So, "Retry" strategy here is not an option.
Solutions I've thought of:
I thought to solve this by adding a "VersionID" column to each row. My SP is now:
UPDATE tblUsers SET [Credits] -= @requestedCredits, VersionId = NEWID() WHERE ID = @ID AND VersionID = @OldVersionId
Before making the user Pay(), I'll read the VersionID (a random GUID). If, after a network failure during the payment, that GUID hasn't changed, I'll retry (proof that the data wasn't changed in the DB). If the VersionID has changed, the user has already paid for the service.
The problem is that with multiple machines running at the same time this becomes unreliable: another instance may have executed a Pay() against that same version ID, and I'll think my own change went through (which is wrong).
What to do?
It sounds like you are making SQL queries from a local/on-premise/remote (i.e. non-Azure property) to a SQL Azure database.
Some possible mechanisms for dealing with this are:
Azure hosted data access layer with API
Consider creating a thin data access layer API hosted on Azure WebApp or VM to be called from the remote machine. This API service can interact with SQL Azure reliably.
SQL is more sensitive to timeout and network issues than, say, an HTTP endpoint, especially if your queries involve transferring large amounts of data.
Configure an increased timeout
The database access mechanism being used by the C# application is not specified in the question. Many libraries or functions for data access allow you to specify an increased timeout for the connection.
Virtual Private Network
Azure allows you to create a site-to-site or point-to-site VPN with better network connectivity. However, this is the least preferred mechanism.
You never blindly retry. In case of error, you read the current state, re-apply the logic, and then write the new state. What "apply the logic" means will differ from case to case: present the user with the form again, refresh a web page, re-run a method in your business logic, anything really.
The gist of it is that you can never simply retry the operation without first reloading the persisted state. The only truth is what's in the DB, and the error is a big warning that your cached state is stale.
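The reload-then-decide idea, combined with the question's own VersionID scheme, can be sketched with an in-memory stand-in for the users table (all class and method names below are invented for illustration):

```csharp
using System;
using System.Collections.Generic;

// In-memory stand-in for the users table from the question:
// each row carries Credits plus a VersionId that changes on every update.
class UserRow
{
    public int Id;
    public int Credits;
    public Guid VersionId = Guid.NewGuid();
}

class UserStore
{
    readonly Dictionary<int, UserRow> _rows = new Dictionary<int, UserRow>();
    public void Add(UserRow row) => _rows[row.Id] = row;
    public UserRow Get(int id) => _rows[id];

    // Mirrors: UPDATE ... SET Credits -= @amount, VersionId = NEWID()
    //          WHERE ID = @id AND VersionId = @expectedVersion
    // Returns false (0 rows affected) when the row was updated by someone else.
    public bool TryPay(int id, int amount, Guid expectedVersion)
    {
        var row = _rows[id];
        if (row.VersionId != expectedVersion) return false;
        row.Credits -= amount;
        row.VersionId = Guid.NewGuid();
        return true;
    }
}

// After a network error: reload the row and only re-apply the payment
// if our update demonstrably never happened (the version is unchanged).
static class PaymentRetry
{
    public static bool PayWithRecovery(UserStore store, int id, int amount, Guid versionReadBeforePay)
    {
        var current = store.Get(id);
        if (current.VersionId != versionReadBeforePay)
            return true; // the row changed: some update went through; don't charge again
        return store.TryPay(id, amount, versionReadBeforePay);
    }
}
```

Note that this still has the multi-machine ambiguity the question raises: a changed version proves *an* update happened, not that it was yours. The usual fix is to give each payment its own unique operation id recorded in the database, so a retry can check whether that specific operation was already applied.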
The scenario is that our client owns and manages a system (we wrote it) hosted at their client's premises. Their client is contractually restricted from changing any data in the database behind the system, but they could change the data if they chose, because they have full admin rights (the server is procured by them and hosted on their premises).
The requirement is to get notification if they change any data. For now, please ignore deleting data, this discussion is about amendments to data in tables.
We are using LINQ to SQL and have overridden the data context so that, for each read, we compare a hash of the row's data against a stored hash, previously computed during insert/update and held on each row in the table.
We are concerned about scalability, so I would like to know if anyone has any other ideas. We are trying to get notified of data changes made in SSMS, via queries run directly against the DB, etc. Also, if someone were to stop our service (a Windows service), upon startup we would need to know whether a row had been changed. Any thoughts?
EDIT: Let me just clarify as I could have been clearer. We are not necessarily trying to stop changes being made (this is impossible as they have full access) more get notified if they change the data.
The answer is simple: to prevent the client from directly manipulating the data, store it out of their reach in a Windows Azure or Amazon EC2 instance. The most they would be able to do is get the connection string, which would then connect them as a limited-rights user.
Also, if someone was to stop our service (Windows service), upon startup we would need to know a row had been changed.
You can create triggers which will write whatever info you want to an audit table, you can then inspect the audit table to determine changes made by your application and directly by the client. Auditing database changes is a well known problem that has been solved many times before, there is plenty of information out there about it.
for each read of the data, we compare a hash of the rows data against a stored hash
As you can probably guess, this is painfully slow and not scalable.
I have a website that takes user input, processes it, and then adds a record to a sql table.
I ran into a problem this weekend where the SQL server was acting up, leaving the user with a really long processing time and a timeout response at the end. On top of that, the processed data was lost.
Ultimately, I want to know if it's possible to somehow keep this processed data stored somewhere until SQL is working again, and then add the records?
I imagine that this might be done with web services? Or can it be done in asp.net code behind?
We dealt with this scenario a while back, when we had a fax server responsible for processing incoming faxes and storing them in a database, but the database was less than reliable.
In this case, if we couldn't get to SQL Server, we would serialize the data to a queue on disk and set a flag in the application indicating that SQL Server was offline. Any subsequent submissions would be stored in the disk queue when this flag was set.
We would then check SQL Server regularly to see if it was back up and, when it was, we would process each of the files in the queue and then turn the offline flag off.
In ASP.Net, once SQL Server is offline, you could start a thread that monitors SQL Server and, when it comes back online, perform this processing.
However, in the case that you have described, it sounds like either someone started a transaction and didn't finish it or a maintenance operation (DBCC, backup) was taking place.
If this happens regularly, you will probably need to set a CommandTimeout slightly longer than the expected normal duration (say, double it) and, if the operation doesn't complete in that time frame, either tell the user there is a problem or go into caching mode.
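The disk-queue fallback described above can be sketched as follows. The "write to SQL" step is a delegate here so the sketch stays self-contained; the class and file layout are assumptions, not the fax server's real code:

```csharp
using System;
using System.IO;

// Minimal sketch of the disk-queue fallback: when the database is
// unreachable, submissions are serialized to files on disk; a monitor
// later drains the queue once the database responds again.
class OfflineQueue
{
    readonly string _dir;
    public bool DatabaseOffline { get; set; }

    public OfflineQueue(string queueDirectory)
    {
        _dir = queueDirectory;
        Directory.CreateDirectory(_dir);
    }

    public void Submit(string record, Action<string> writeToDatabase)
    {
        if (DatabaseOffline)
        {
            // One file per submission keeps draining and recovery simple.
            var name = Guid.NewGuid().ToString("N") + ".rec";
            File.WriteAllText(Path.Combine(_dir, name), record);
            return;
        }
        writeToDatabase(record);
    }

    // Called from a timer/monitor thread once the database is back up.
    // Returns the number of queued records that were processed.
    public int Drain(Action<string> writeToDatabase)
    {
        int processed = 0;
        foreach (var file in Directory.GetFiles(_dir, "*.rec"))
        {
            writeToDatabase(File.ReadAllText(file));
            File.Delete(file);
            processed++;
        }
        DatabaseOffline = false;
        return processed;
    }
}
```

In ASP.NET, `Submit` would run in the request path (so the user gets a fast "accepted" response even while SQL is down), and `Drain` in the background thread that monitors SQL Server.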