Database Data Protection - C#

The scenario is that our client owns and manages a system (we wrote it) hosted at their client's premises. Their client is contractually restricted from changing any data in the database behind the system, but they could change the data if they chose to, because they have full admin rights (the server is procured by them and hosted on their premises).
The requirement is to get notified if they change any data. For now, please ignore deleting data; this discussion is about amendments to data in tables.
We are using LINQ to SQL and have overridden the data context so that, for each read, we compare a hash of the row's data against a stored hash, computed previously during insert/update and held on each row in the table.
We are concerned about scalability, so I would like to know if anyone has any other ideas. We are trying to get notified of data changes made in SSMS, by queries run directly against the database, and so on. Also, if someone were to stop our service (a Windows service), upon startup we would need to know that a row had been changed. Any thoughts?
EDIT: Let me just clarify, as I could have been clearer. We are not necessarily trying to stop changes being made (this is impossible, as they have full access) but rather to get notified if they change the data.
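For illustration only, here is a minimal sketch of the kind of per-row hashing described above; the entity, the column set, and the choice of SHA-256 are assumptions, not the actual implementation:

    using System;
    using System.Security.Cryptography;
    using System.Text;

    static class RowHasher
    {
        // Sketch only: "name", "balance", "modified" stand in for whatever
        // columns the real table has.
        public static byte[] ComputeRowHash(string name, decimal balance, DateTime modified)
        {
            // Join with a delimiter so ("ab","c") and ("a","bc") hash differently.
            string canonical = string.Join("|", name, balance.ToString("F2"), modified.Ticks);
            using (var sha = SHA256.Create())
            {
                return sha.ComputeHash(Encoding.UTF8.GetBytes(canonical));
            }
        }
    }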

The answer is simple: to prevent the client directly manipulating the data, store it out of their reach in a Windows Azure or Amazon EC2 instance. The most they will be able to do is get the connection string, which will then connect them as a limited-rights user.
Also, if someone were to stop our service (a Windows service), upon startup we would need to know that a row had been changed.
You can create triggers that write whatever info you want to an audit table; you can then inspect the audit table to distinguish changes made by your application from changes made directly by the client. Auditing database changes is a well-known problem that has been solved many times before; there is plenty of information out there about it.
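As a rough sketch of that suggestion, an AFTER UPDATE trigger could copy old and new values into an audit table. All table, column, and trigger names below are invented for illustration, and the DDL is shown being installed from C#:

    using System.Data.SqlClient;

    class AuditTriggerInstaller
    {
        // Sketch only: "Orders", "OrdersAudit", and the column names are invented.
        const string CreateTriggerSql = @"
            CREATE TRIGGER trg_Orders_Audit ON dbo.Orders AFTER UPDATE AS
            BEGIN
                INSERT INTO dbo.OrdersAudit (OrderId, OldAmount, NewAmount, ChangedAt, ChangedBy)
                SELECT d.OrderId, d.Amount, i.Amount, SYSUTCDATETIME(), SUSER_SNAME()
                FROM deleted d
                JOIN inserted i ON i.OrderId = d.OrderId;
            END";

        public static void Install(string connectionString)
        {
            using (var conn = new SqlConnection(connectionString))
            {
                conn.Open();
                using (var cmd = new SqlCommand(CreateTriggerSql, conn))
                    cmd.ExecuteNonQuery();
            }
        }
    }

Bear in mind that a client with full admin rights can disable or drop such a trigger, so this catches ordinary edits rather than determined tampering.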
for each read, we compare a hash of the row's data against a stored hash
As you can probably guess, this is painfully slow and not scalable.

Related

Connecting database from published WPF?

I have a WPF application that:
Takes user input, stores it into a database
Reads from a database and displays it on the screen
Currently just uses SqlConnection to execute commands and queries against a SQL Server database
When deployed, this application will have multiple in-network users who should be able to connect to the application and read/write through it as well. Of course, this database is user access controlled, and the end users don't have access to the SQL Server instance. The only ways I can think of connecting are:
Using a generic account that has access to the database and then including that in the connection string.
Creating a REST API to pass requests to the database; a bit unsure on the details.
What would be the best way to go about this?
A REST API would add a level of complexity and additional infrastructure requirements to your application. It would also add the option of using the application outside your network, so that may be a plus. However, if that's not the anticipated use case, it's probably overkill.
Also, REST would still need an account to access the database, so it's not really better than your first idea. Depending on the WPF part, you may also have to change how the data is accessed (for example, using web service clients instead of EF).
Perhaps you can add your users to the database and give them limited privileges to access only selected tables, views, or stored procedures. This gives fine-grained control, at the database level, over who can do what. However, it requires a bit (or a lot) of work, depending on the number of your users.
So, your first idea is the easiest one and can probably be expanded later to separate database accounts for your users, while REST requires a bit of additional work, setting up the web server, and so on.
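A minimal sketch of the first idea, assuming the credentials live under a hypothetical "AppDb" entry in App.config:

    using System.Configuration;       // reference System.Configuration.dll
    using System.Data.SqlClient;

    static class Db
    {
        // Sketch of the "generic account" option: a dedicated, limited-rights
        // SQL login whose credentials live in App.config, e.g.
        //   <connectionStrings>
        //     <add name="AppDb" connectionString="Server=srv;Database=app;User Id=app_user;Password=..." />
        //   </connectionStrings>
        public static SqlConnection Open()
        {
            string cs = ConfigurationManager.ConnectionStrings["AppDb"].ConnectionString;
            var conn = new SqlConnection(cs);
            conn.Open();
            return conn;
        }
    }

Encrypting the connectionStrings section (for example with .NET protected configuration) raises the bar, but anyone who can run the app can ultimately recover the credentials, so keep that account's rights minimal.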

How to get notified on a database change in Oracle using C#?

I'm currently working on a requirement to "replace the previously developed Polling mechanism for change notifications of database".
Let me elaborate a little:
We have an Oracle database where we have put some triggers to get notified of any changes to a table. Using them, we were getting the changed data and converting it into XML/JSON, which forms the request body of a Web API call used to perform a POST into another database.
The new requirement is to skip the polling mechanism and come up with something like "rather than we call the database for notifications, it calls us every time it gets updated".
I did a little googling, and everyone suggests the best approach is:
Database Change Notifications. Here I need to grant permissions in Oracle and then create a .NET application where I can write a callback function for further processing. Up to here I'm good, but my question is:
Does the .NET application that communicates with the database need to be a web application that is always online? Can I create a console application to get notified? If yes, how will the database contact my application when a change occurs? What exactly is the internal process when the database notifies my application of a change?
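As far as I understand ODP.NET's change notification feature, the provider opens a listening port inside your process and the database sends notifications back to it, so a plain console application works as long as the process stays alive. A hedged sketch (table name and credentials are placeholders, and the database account needs the CHANGE NOTIFICATION privilege):

    using System;
    using Oracle.DataAccess.Client;   // unmanaged ODP.NET; requires an Oracle client install

    class Listener
    {
        static void Main()
        {
            // Placeholder credentials; grant first: GRANT CHANGE NOTIFICATION TO app_user;
            using (var conn = new OracleConnection("User Id=app_user;Password=...;Data Source=orcl"))
            {
                conn.Open();
                // "ORDERS" is a hypothetical table.
                using (var cmd = new OracleCommand("SELECT ORDER_ID, STATUS FROM ORDERS", conn))
                {
                    var dep = new OracleDependency(cmd);
                    dep.OnChange += (s, e) => Console.WriteLine("Change notification: " + e.Info);
                    cmd.ExecuteNonQuery();   // executing the command registers the notification
                }
                Console.WriteLine("Listening... press Enter to quit.");
                Console.ReadLine();          // the process just has to stay alive
            }
        }
    }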

Reliable connection with Azure SQL Database

I'm developing a C# application that stores data in Azure SQL Database.
As you probably know, Azure SQL Database lives somewhere on the Internet, not on a LAN (though this question is also relevant for a reliable network like a LAN).
I've noticed that from time to time I get errors like "Connection is closed" (or other network errors). It's really easy to simulate this with Clumsy. The reason for those errors is bad network conditions.
So, my first idea to solve this was "try again". When I get this error, I simply try again, and then it works. Like magic.
This may solve the problem, but it opens up another kind of problem: not every situation is fine with this solution. I'll explain:
I'll separate the scenarios into two types:
Retry can't do any damage - operations like SELECT or DELETE. Retrying will have the same expected result. So, with this type of problem, my solution works fine!
INSERT or UPDATE - a retry can damage the information.
I'll focus on point number 2. For example, let's say I have:
A users table. Columns in this table: ID, UserName, Credits.
A stored procedure that makes a user (identified by user ID) pay some of their credits.
The "Pay" Stored Procedure is:
    UPDATE tblUsers SET [Credits] -= @requestedCredits WHERE ID = @ID
Calling the SP is the tricky part:
If it works without a problem - we are fine.
If it fails, we don't know whether the operation was applied on the DB or not. Retrying here can lead to the user paying twice!
So, a "retry" strategy here is not an option.
Solutions I've thought of:
I thought to solve this problem by adding a "VersionId" column to each row. My SP is now:
    UPDATE tblUsers SET [Credits] -= @requestedCredits, VersionId = NEWID() WHERE ID = @ID AND VersionId = @OldVersionId
Before making the user Pay(), I check the VersionId (a random GUID). If this GUID hasn't changed after a network failure while paying, I can safely try again (proof that the data wasn't changed in the DB). If the VersionId has changed, the user has already paid for the service.
The problem is that when I'm running multiple machines at the same time, this solution breaks down: another instance may have executed a Pay() against that VersionId, and I'll think my own change was executed (which is wrong).
What to do?
It sounds like you are making SQL queries from a local/on-premise/remote machine (i.e. not an Azure property) to an Azure SQL database.
Some of the possible mechanisms for dealing with this are:
Azure hosted data access layer with API
Consider creating a thin data access layer API, hosted on an Azure Web App or VM, to be called from the remote machine. This API service can interact with Azure SQL reliably.
SQL is more sensitive to timeout and network issues than, say, an HTTP endpoint, especially if your queries involve transferring large amounts of data.
Configure an increased timeout
The question doesn't specify which database access mechanism the C# application uses, but many data access libraries and functions allow you to specify an increased timeout for the connection.
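For example, with plain ADO.NET the connect timeout and the per-command timeout are set separately; the server name and query below are placeholders:

    using System;
    using System.Data.SqlClient;

    class TimeoutExample
    {
        static void Main()
        {
            // Placeholder connection string; "Connect Timeout" governs opening
            // the connection (default 15 seconds).
            string cs = "Server=tcp:myserver.database.windows.net;Database=mydb;" +
                        "User Id=app;Password=...;Connect Timeout=60;";
            using (var conn = new SqlConnection(cs))
            using (var cmd = new SqlCommand("SELECT COUNT(*) FROM tblUsers", conn))
            {
                cmd.CommandTimeout = 120;   // per-command timeout in seconds (default 30)
                conn.Open();
                Console.WriteLine(cmd.ExecuteScalar());
            }
        }
    }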
Virtual Private Network
Azure allows you to create a site-to-site or point-to-site VPN with better network connectivity. However, this is the least preferred mechanism.
You never blindly retry. In case of an error, you read the current state, re-apply the logic, and then write the new state. What "apply the logic" means will differ from case to case: present the user with the form again, refresh a web page, run a method in your business logic, anything really.
The gist of it is that you can never simply retry the operation without first reloading the persisted state. The only truth is what's in the DB, and the error is a big warning that your cached state is stale.
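A sketch of that pattern, combined with the VersionId idea from the question; the helper methods and the error handling are simplified for illustration:

    using System;
    using System.Data.SqlClient;

    class PaymentService
    {
        readonly string _cs;
        public PaymentService(string connectionString) { _cs = connectionString; }

        public void Pay(int userId, int credits)
        {
            Guid version = ReadVersion(userId);
            try
            {
                TryPay(userId, credits, version);
            }
            catch (SqlException)
            {
                // Unknown outcome: reload the persisted state instead of blindly retrying.
                Guid current = ReadVersion(userId);
                if (current == version)
                    TryPay(userId, credits, version);   // our update never landed; safe to retry
                // else: an update (ours or someone else's) landed; re-run the business logic
            }
        }

        Guid ReadVersion(int userId)
        {
            using (var conn = new SqlConnection(_cs))
            using (var cmd = new SqlCommand("SELECT VersionId FROM tblUsers WHERE ID = @ID", conn))
            {
                cmd.Parameters.AddWithValue("@ID", userId);
                conn.Open();
                return (Guid)cmd.ExecuteScalar();
            }
        }

        void TryPay(int userId, int credits, Guid expectedVersion)
        {
            using (var conn = new SqlConnection(_cs))
            using (var cmd = new SqlCommand(
                "UPDATE tblUsers SET [Credits] -= @requestedCredits, VersionId = NEWID() " +
                "WHERE ID = @ID AND VersionId = @OldVersionId", conn))
            {
                cmd.Parameters.AddWithValue("@requestedCredits", credits);
                cmd.Parameters.AddWithValue("@ID", userId);
                cmd.Parameters.AddWithValue("@OldVersionId", expectedVersion);
                conn.Open();
                if (cmd.ExecuteNonQuery() == 0)
                    throw new InvalidOperationException("Concurrent update: reload and re-apply.");
            }
        }
    }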

What is the best way to sync multiple SqlServers to one SQL Server 2005?

I have several client databases that are used by my Windows application.
I want to send this data to an online web site.
The client database and server database structures are different, because we need to add a client ID column to some tables in the server database.
The way I sync the databases now is with another application that uses C# bulk copy inside a transaction.
But my server's SQL Server is too busy, and parallel tasks cannot be run.
The solution I'm working on:
I use AFTER UPDATE/DELETE/INSERT triggers to save changes to one table, and create a SQL query to send to a web service to sync the data.
But I must send all the data first! A huge data set (bigger than 16mg).
I think I can't use replication because the structures and primary keys are different.
Have you considered using SSIS to do scheduled data synchronization? You can do data transformation and bulk inserts fairly easily.
As I understand what you're trying to do, you want to allow multiple client applications to have their data synchronized to a server in such a way that the server has all the data from all the sites, but that each record also has a client identifier so you can maintain traceability back to the source.
Why must you send all the data to the server before you get the other information set up? You should be able to build all these things concurrently. Also, you don't have to upload all the data at one time. Stage them out, one per day (assuming you have a small number of client databases); that would give you a way to focus on each in turn and make sure the process completes accurately.
Will you be replicating the data back to the clients after consolidating it all into one table? Your size information was miscommunicated: were you saying each database is larger than 16 GB? In that case, five sites would have a cumulative 80 GB to be replicated back to the individual sites?
Otherwise, the method you outlined with using a separate application to specifically handle the uploading of data would be the most appropriate.
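A sketch of such a separate uploader using SqlBulkCopy, stamping rows with the client ID on the way up; the table and column names are invented:

    using System.Data;
    using System.Data.SqlClient;

    class Uploader
    {
        // Sketch only: "ClientId" and "dbo.ServerOrders" are illustrative names.
        public static void Upload(DataTable clientRows, int clientId, string serverConnectionString)
        {
            // Stamp every row with the client identifier the server schema expects.
            clientRows.Columns.Add("ClientId", typeof(int));
            foreach (DataRow row in clientRows.Rows)
                row["ClientId"] = clientId;

            using (var conn = new SqlConnection(serverConnectionString))
            {
                conn.Open();
                using (var tx = conn.BeginTransaction())
                using (var bulk = new SqlBulkCopy(conn, SqlBulkCopyOptions.Default, tx))
                {
                    bulk.DestinationTableName = "dbo.ServerOrders";
                    // Map by name so source and destination column order need not match.
                    foreach (DataColumn col in clientRows.Columns)
                        bulk.ColumnMappings.Add(col.ColumnName, col.ColumnName);
                    bulk.WriteToServer(clientRows);
                    tx.Commit();
                }
            }
        }
    }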
Are you going to upgrade the individual schemas after you update the master database? You may want to ALTER TABLE and add a bit column, marking each record as "sent" or "not sent", to keep track of any records that are updated/inserted "late". I have a feeling you're going to be doing a rolling deployment/upgrade and you're trying to figure out how to keep it all in sync without losing anything.
You could use SQL Server Transactional Replication: HOW TO: Replicate Between Computers Running SQL Server in Non-Trusted Domains or Across the Internet

Protecting app database access on user PC

Greetings!
I need to deploy a compact database with an application I am working on. The database acts as a cache for data the app has already seen, and that data will never change, so the cached values will never become outdated. I've chosen SQLite, and I'm writing in C#.
I'd like to protect the database files so they cannot be easily accessed or edited by the user, keeping access to my application only. One option is password protection, which is fine except that with tools like Reflector one could easily view a near-original version of the source, check how the passwords are generated per file, and replicate this.
Are there any suggestions on how to achieve this result or something close? Have people done something like this in the past?
Thanks!
Security by obscurity.
If your app can decrypt it, then your user can too.
If you want to keep it secure, you'll have to keep it to yourself. Your best bet is to store the database on a server and make it available via a web service. Perform access control checks on your own server so that the application can only access the parts of the database it is allowed to see.
I don't have a clear-cut answer for you (obfuscate your code for release deployment, make the password obscenely long), as the golden rule stands: if they have physical access to the executable (substitute machine/car/door), they can get in if they want to (and have the skills).
All you can do is make things difficult for them.
This area is not my forte, but one thing I could suggest is to think about what data you are actually sending and determine whether there is any way to avoid transmitting the more sensitive data to the client in the first place.
If your concern is over sending things like ID numbers or account numbers to the client, then perhaps you could translate those values into client-only versions that are meaningless outside your application. Your server could have a table containing the translation between the real values and the client-only values.
Let's say you have this table stored in your server's database (not the client database!):

    RealAccountNumber    ClientOnlyAccountNumber
    981723               ABC123
    129847               BCD234
    923857               CDE345
    ...
So the client only sees the account numbers in the ClientOnlyAccountNumber column, and when a client sends a request to the server for an action to be performed on account "ABC123", the server knows to translate that into account number 981723.
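A toy sketch of the server-side translation; in a real system the dictionary would be the database table above, not an in-memory structure:

    using System.Collections.Generic;

    class AccountTranslator
    {
        // Sketch only: these pairs mirror the example table above.
        static readonly Dictionary<string, int> ClientToReal = new Dictionary<string, int>
        {
            { "ABC123", 981723 },
            { "BCD234", 129847 },
            { "CDE345", 923857 },
        };

        // Only the server can resolve the opaque client token back to the
        // real account number.
        public static int ToRealAccount(string clientOnlyNumber)
        {
            return ClientToReal[clientOnlyNumber];
        }
    }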
