Approaches Similar to Multi-Tenancy - C#

Are there any other options for implementing this concept besides multi-tenancy?
Share the same physical instance of the application across customers (tenants) instead of having a separate instance for each user, for better scalability, since launching an individual physical instance for every user would be costly and difficult to maintain.
E.g.: the data lives in the same database, but when ORG1 views the data it should see its own instance, and changes made by ORG1 should not be visible to ORG2.
When user X of ORG2 changes data, it should be visible to user Y of the same org, but not to any users of ORG1.
I have tried the https://github.com/mspnp/multitenant-saas-guidance/blob/master/get-started.md article on GitHub, but it is a multi-tenant approach.
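For illustration, the per-organisation isolation described above is commonly implemented with a tenant discriminator column plus an automatic query filter. A minimal sketch using EF Core's HasQueryFilter (the Order entity, OrgId column, and AppDbContext are invented names, not from the linked article):

using Microsoft.EntityFrameworkCore;

// Illustrative entity: every row records which organisation owns it.
public class Order
{
    public int Id { get; set; }
    public string OrgId { get; set; }      // tenant discriminator column
    public decimal Amount { get; set; }
}

public class AppDbContext : DbContext
{
    private readonly string _currentOrgId; // resolved from the logged-in user

    public AppDbContext(DbContextOptions<AppDbContext> options, string currentOrgId)
        : base(options)
    {
        _currentOrgId = currentOrgId;
    }

    public DbSet<Order> Orders { get; set; }

    protected override void OnModelCreating(ModelBuilder modelBuilder)
    {
        // Every query on Orders is silently restricted to the current org,
        // so ORG1 never sees rows written by ORG2 in the shared database.
        modelBuilder.Entity<Order>().HasQueryFilter(o => o.OrgId == _currentOrgId);
    }
}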

How to implement a C# Winforms database application with synchronisation?

Background
I am developing a C# WinForms application - currently up to about 11,000 LOC; the UI and logic are about 75% done, but there is no persistence yet. There are hundreds of attributes on the forms. There are 23 entities/data classes.
Requirement
The data needs to be kept in a SQL database. Most of the users operate remotely and we cannot rely on them having a connection, so we need a solution that maintains a database locally and keeps it in sync with the central database.
Edit: Most of the remote users will only require a subset of the database in their local copy. This is because if they don't have access permissions (as defined and stored in my application) to view other users' records, they will not receive copies of them during synchronisation.
How can I implement this?
Suggested Solution
I could use Microsoft Entity Framework to create the database and the link between the database and the code. This would save a lot of manual work, as there are hundreds of attributes. I am new to this technology, but have done a "hello world" project in it.
For data sync, each entity would have an integer primary key ID. Additionally, it would have a secondary ID column which relates to the central database. This secondary column would contain nulls in the central database but would be populated in the local databases.
For synchronisation, I would write code which copies the records and assigns the IDs accordingly (sketched below). I would need to handle conflicts.
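A rough sketch of the ID scheme just described (the Customer entity and its property names are invented for the example):

// Each entity has a local auto-increment primary key plus a secondary
// column pointing at the matching row in the central database. As described
// above, CentralId stays null in the central database itself and is
// populated in the local copies (and is also null for local rows that have
// not been uploaded yet).
public class Customer
{
    public int Id { get; set; }          // local primary key
    public int? CentralId { get; set; }  // ID of the matching central record
    public string Name { get; set; }
}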
Can anyone foresee any stumbling blocks to doing this? Would I be better off using one of the recommended solutions for data synchronisation, and if so, would they work with Entity Framework?
Synching data between relational databases is a pain. Your best course of action probably depends on: how many users will there be? How probable are conflicts (i.e. users working offline on the same data)? And possibly what kind of manpower you have (do you have proper DBAs/SQL Server devs standing by to assist with the SQL part, or are you just .NET devs?).
I don't envy you this task, it smells of trouble. I'd especially be worried about data corruption and spreading that corruption to all clients rapidly. I'd put extreme countermeasures in place before any data in the remote DB gets updated.
If you predict a lot of conflicts - the same chunk of data gets modified many times by multiple users - I'd probably at least consider creating an additional 'merge' layer to figure out the correct order of operations to perform on the remote db.
One thought - it might be very wrong and crazy, but it's just what popped into my mind - would be to use JSON Patch on the entities, be they actual domain objects or some configuration containers. All the changes the user makes are recorded as JSON Patch statements, then applied to the local db, and when the user is online, submitted - with timestamps! - to a merge provider. The JSON Patch statements from different clients could be grouped by entity id and sorted by timestamp, and the user could get feedback on what other operations from different users are queued - and manually make amends. Those grouped statements could even be stored in files in a git repo. Then, at some pre-defined intervals or triggered manually, the update would be performed by a server-side app and saved to the remote db. After this the users' local copies would be refreshed from the server.
It's just a rough idea, but I think you need something with similar capability - it doesn't have to be JSON Patch + Git; you could do it in probably hundreds of ways. I don't think, though, that you will get away with just walking the local/remote db and making updates/merges. Imagine the scenario where one user updates some data (let's say 20 fields) offline, another makes completely different updates to 20 fields, and 10 of those are common between the users. Now, what should the sync process do? Apply the earlier and then the later changes? I'm fairly certain both users would be furious, because their input was 'atomic' - either everything is changed, or nothing is. The later 'commit' must either be rejected, or users should have the option to amend it in light of the new data. That depends heavily on what your data is and, as I said, on the number/behaviour of the users. Even time zones become important here - if your users are all in one time zone, you might get away with predefined times of day when the system synchs - but there's no way you'll convince people with many different business hours that the 'synch session' will happen at e.g. 11 AM, when they are usually giving a presentation to management or something ;)
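To make the record-and-replay idea concrete, here is a rough sketch using the Microsoft.AspNetCore.JsonPatch package (the Customer type and TimestampedPatch wrapper are invented for the example; any JSON Patch implementation would do):

using System;
using Microsoft.AspNetCore.JsonPatch;

public class Customer
{
    public int Id { get; set; }
    public string Name { get; set; }
    public string Phone { get; set; }
}

// Invented wrapper pairing a patch with the moment it was recorded, so a
// merge provider can group competing edits by entity and sort by timestamp.
public class TimestampedPatch
{
    public int EntityId { get; set; }
    public DateTime RecordedUtc { get; set; }
    public JsonPatchDocument<Customer> Patch { get; set; }
}

public static class PatchDemo
{
    public static TimestampedPatch RecordPhoneChange(Customer localCopy, string newPhone)
    {
        // Record the user's offline edit as a patch instead of overwriting fields.
        var patch = new JsonPatchDocument<Customer>();
        patch.Replace(c => c.Phone, newPhone);

        // Apply to the local copy now; ship the wrapper (with its timestamp)
        // to the merge provider once the user is back online.
        patch.ApplyTo(localCopy);
        return new TimestampedPatch
        {
            EntityId = localCopy.Id,
            RecordedUtc = DateTime.UtcNow,
            Patch = patch
        };
    }
}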

Role-based user permission handling in an application

In a Windows Forms payroll application employing the MVP pattern (for a small-scale client), I'm planning user permission handling as follows (role-based).
NOTE: The system could be used simultaneously by a few users (maximum 3), and the database is on the server side.
User Tables in the database.
USER (user_id[pk], name, access_level, status)
PERMISSION (permission_id[pk], permission_detail)
USER_PERMISSION (user_id[pk][fk], permission_id[pk][fk])
I would maintain the user list in the USER table and the permission list in the PERMISSION table (the permission details are the accessible module names). The intermediate table USER_PERMISSION maps users to permissions. The USER and PERMISSION tables each have a 1:M relationship with the USER_PERMISSION table.
When a user logs in to the system, the system first validates the user; if the user is valid, the home screen is shown and the logged-in user's ID is held in a global variable (accessible to all presenter classes). When the user tries to access a specific module, the system reads that global variable to find the current user's ID and then looks in the USER_PERMISSION table for an entry matching that user ID and the name of the module the user is trying to open. If there is an entry, the user is given access to that particular module.
When the user logs off, the variable holding the current user ID is cleared.
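For reference, a minimal sketch of that permission lookup against the tables above (plain ADO.NET; the PermissionService class name and module naming are invented, and connection handling is simplified):

using System.Data.SqlClient;

public class PermissionService
{
    private readonly string connectionString; // supplied by the application

    public PermissionService(string connectionString)
    {
        this.connectionString = connectionString;
    }

    // Returns true when USER_PERMISSION links the given user to the module.
    public bool HasPermission(int userId, string moduleName)
    {
        const string sql =
            @"SELECT COUNT(*)
                FROM USER_PERMISSION up
                JOIN PERMISSION p ON p.permission_id = up.permission_id
               WHERE up.user_id = @userId
                 AND p.permission_detail = @module";

        using (SqlConnection conn = new SqlConnection(connectionString))
        using (SqlCommand cmd = new SqlCommand(sql, conn))
        {
            cmd.Parameters.AddWithValue("@userId", userId);
            cmd.Parameters.AddWithValue("@module", moduleName);
            conn.Open();
            return (int)cmd.ExecuteScalar() > 0;
        }
    }
}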
In this approach, is it okay to hold the current user's ID in application memory, or should it be written to a local file?
Modifications to the data in the tables should be tracked; for this purpose, should I maintain a separate column on each table (the ones that need to be monitored) to hold the ID of the user who modified the record?
EDIT:
Can we use SQL Server user roles/logins for this purpose? And can this user-access logic be handed over to SQL Server?
When controlling read/write permissions in forms, the respective presenter handles the logic and sets properties on the view (properties like IsModifyAllowed { get; set; }, IsDeleteAllowed { get; set; }, etc.) according to the current user's permissions, so that the view can handle the rest by enabling/disabling its controls, as in the sketch below.
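A sketch of that presenter/view split (the interface and class names are invented; PermissionService is the lookup sketched earlier):

// The view exposes permission-driven switches and enables/disables its own
// controls when they change; the presenter decides their values once.
public interface IPayrollView
{
    bool IsModifyAllowed { get; set; }
    bool IsDeleteAllowed { get; set; }
}

public class PayrollPresenter
{
    private readonly IPayrollView _view;
    private readonly PermissionService _permissions; // lookup sketched earlier

    public PayrollPresenter(IPayrollView view, PermissionService permissions)
    {
        _view = view;
        _permissions = permissions;
    }

    // Called once after login; the view reacts by toggling its controls.
    public void ApplyPermissions(int currentUserId)
    {
        _view.IsModifyAllowed = _permissions.HasPermission(currentUserId, "PayrollModify");
        _view.IsDeleteAllowed = _permissions.HasPermission(currentUserId, "PayrollDelete");
    }
}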
In this approach, should every model have a matching property like the view's (in this case IsModifyAllowed { get; set; }, etc.)?
What is the most widely used approach in this case?
What you have described in the first part of your question is pretty common, although it's not actually role-based, it's permission-based.
It's not a perfect solution, although no security mechanism really is. But it's pretty simple and works.
To answer your questions.
There shouldn't be a problem with holding the ID in memory, as long as we're not talking about government-level security here and there is no real concern about people breaking into the machines to gain access - in which case there are probably much bigger fish to fry. Storing it in a file may actually make it less secure, and you would eventually have to read it into memory at some point anyway.
Tracking changes can be simple or complex, depending on how you want to do it. You can add a last-modified field, but this will only track the most recent change. To be safe, you need an audit table that tracks all changes and keeps historical versions of the data. It's probably a good idea to implement this audit table with a trigger, so that your application code doesn't have to remember to do it.
Yes, you can use SQL Server logins and roles, but this probably won't make things easier or less complex. With your model, you're controlling access to modules via a permissions table. Using SQL Server roles, you would have to control access via data and react to exceptions thrown when something can't be accessed, or query the database for roles and end up doing things in tables anyway. If you have a Windows domain, you might want to consider using Active Directory instead.
I don't completely follow what you're saying about Views and Model properties, because you haven't adequately explained your models.
There is no "most widely used approach"; everyone does it differently, although there are a number of things people tend to do. Microsoft offers a number of approaches; for instance, they have what's known as the Composite UI Application Block and Authorization Manager. You can read about an interesting implementation here: Granular Role Based Security. Jesse Liberty offers another take here.
In short, this is something you will have to work out yourself, because there are literally thousands of ways people have done this (if not millions). Do some research, and try to come up with what works best in your situation.

Available options for maintaining data consistency/sync across multiple systems

My question is about the best tried-and-tested (and new?) methods out there for meeting a fairly common requirement in most companies.
Every company has customers. Let's say company A has about 10 different systems for its business needs. Customer data is critical to all of them.
A customer can be maintained in any of the systems independently, but if they fall out of sync, that's not good. I know it's ideal to keep one big master place/system for the customer record and have all other systems take that information from that single location/system.
How do you build something like this? SOA? ETLs? Web services? Any other ideas out there that are new - and not forgetting the old methods?
We are an MS/.NET shop. This is mostly for my own knowledge and learning; please point me in the right direction - I want to be aware of all my options.
Ideally, all your different systems would share the same database, in which case that database would be the master. However, that's almost never the case.
So the most common method I've seen is to have yet another system (let's call it a data warehouse) that takes feeds from your 10 different systems, aggregates them together, and forms a "master" view of a customer.
I have not done anything like this, but playing with the idea, here are my thoughts. Perhaps something will be helpful.
This is a difficult question, and I'd say it mainly depends on what development ability and interfaces you have available in each of the 10 systems. You may need a data-warehouse-manager piece of software, working as the next paragraph describes, with various plugins for all the different types of interfaces in the 10 systems involved.
Thinking from the data warehouse idea: ideally each Customer in each system would have a LastModified field, although that is probably unlikely. So you'd almost need to serialize the Customer record from each source and store it in your data warehouse database along with the last time the program updated that record. This would let you know exactly which record is the newest any time anything changes in any of the 10 systems, and update fields based on that. This is about the best you can do if you're not developing some of the systems and are only able to read from some fashion of interface.
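A rough sketch of that 'newest record wins' bookkeeping (all type and property names are invented for the example):

using System;
using System.Collections.Generic;
using System.Linq;

// One serialized customer record as received from one of the 10 systems.
public class CustomerSnapshot
{
    public string SourceSystem { get; set; }
    public string CustomerKey { get; set; }
    public DateTime LastModifiedUtc { get; set; }
    public string SerializedRecord { get; set; }
}

public static class WarehouseMerge
{
    // The most recently modified snapshot wins; tie-breaking is a policy choice.
    public static CustomerSnapshot PickNewest(IEnumerable<CustomerSnapshot> snapshots)
    {
        return snapshots.OrderByDescending(s => s.LastModifiedUtc).First();
    }
}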
If you are developing all the systems, then I'd imagine WCF interfaces (I mention WCF because it has more connection options than web services in general) to propagate updates to all the other systems (probably via a master hub application) might be the simplest option: passing in the new values and the date of the update, either from an event on the save button or by checking a LastModified field every hour/day.
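For the hub idea, the WCF contract might look roughly like this (the contract and DTO names are invented; this is only a sketch):

using System;
using System.Runtime.Serialization;
using System.ServiceModel;

[DataContract]
public class CustomerDto
{
    [DataMember] public string CustomerKey { get; set; }
    [DataMember] public string Name { get; set; }
}

// Each of the 10 systems calls the hub when a customer is saved locally;
// the hub then fans the update out to the other systems.
[ServiceContract]
public interface ICustomerSyncHub
{
    [OperationContract]
    void PublishCustomerUpdate(string sourceSystem, CustomerDto customer, DateTime modifiedUtc);
}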
Another difficulty is what happens if one Customer object has an Address field and another does not - will the updates between those two overwrite each other in some cases? Or if one has a CustomerName and another has CustomerFirstname and CustomerLastname?
NoSQL ideas - variable data structures and the ability to mark cached values as dirty - also come to mind, though I'm not sure how much benefit those concepts would really add.

Database design - Help desk application

I can't decide whether to keep the help desk application in the same database as the rest of the corporate applications or completely separate it.
The help desk application can log support requests from a phone call, email, or the website.
We can get questions sent to us from registered customers and non-registered customers.
The only reason to keep the help desk application in the same database is so that we can share the user base. But then again, we could have users create a new account for support, or sync the user accounts with the help desk application.
If we separate the help desk application, our database backups will be smaller. Or we can just keep the help desk application in the same database, which makes development/integration a lot easier overall, with only one database to back up (maybe larger, but still one database with everything).
What to do?
I think this is a subjective answer, but I would keep the help desk system as a separate entity unless there is a good business reason to use the same user base.
This is mostly based on what I've seen in professional helpdesk call-logging/ticketing software, but I do have another compelling reason - security. The logic is as follows:
A helpdesk ticketing system generally needs less sensitive information than other business systems (accounting, shopping, CRM, etc.). Your technicians will likely need to know how to contact a customer, but probably won't need to store full addresses, birth dates, etc. All of the following is based on one assumption: that your existing customer data contains sensitive or personally identifiable data that would not be needed by your ticketing system.
Principle 1: Reducing the attack surface area by limiting the stored data. Generally, I subscribe to the principle that you should ONLY collect the data you absolutely need. Having less sensitive information available means less that an attacker can steal.
Principle 2: Reducing the surface area by minimizing avenues of attack into existing sensitive data. Assuming you already have a large user base, and assuming that you're already storing potentially useful data about your customers, adding another application with hooks into that data just adds further avenues of attack into the existing customer base. This leads me to...
Principle 3: Least privilege. The user you set up for the helpdesk software database should have access ONLY to the data absolutely needed by your helpdesk analysts. Accomplishing this is easier if you design your database with a specific set of needs in mind. It's a lot more difficult from a maintenance standpoint to have to set up views and stored procedures over sensitive data in order to only allow access to the non-sensitive data than it is to have a database designed to have only the data that you need.
Of course, I may be over-thinking it. And there are other compelling reasons for going either route. I'm just trying to give you something to think about.
This will definitely be a subjective answer based upon your environment. You have to weigh the benefits/drawbacks of one choice against those of the other. However, my opinion is that the greatest benefits come from separating the two databases. I really don't like having one database with two purposes; instead, look to create a database with one purpose only. Here are the benefits I see in doing this:
Portability - if you decide to move the helpdesk to a different server, you can do so without issue. The same is true if you want to move the corporate database somewhere else.
Separation of concerns - each database is designed for its own purpose. The security of one won't interfere with the security of the other.
Backup policies - currently you can only have one backup policy for both systems, since they are in the same database. If you split them, you could back up one more often than the other (and each backup would be smaller/faster).
The drawbacks I see (not being able to access the corporate data as easily) actually come out as a positive in my mind. Accessing the data from the corporate database sounds good, but it can be a security issue (and also a maintainability issue). Instead, this way you can limit how much access (and what type of access) is granted to the helpdesk system. Databases can access each other fairly easily, so it won't be that inconvenient, and it will let you add a nice security barrier between your corporate data and your helpdesk data.

Monitoring group membership in Active Directory more efficiently (C# .NET)

I've got an Active Directory synchronization tool (.NET 2.0 / C#) written as a Windows Service that I've been working on for a while, and I've recently been tasked with adding the ability to drive events based on changes in group membership. The basic scenario is that users are synchronized with a security database and, when group membership changes, the users need to have their access rights changed (i.e. if I am now a member of "IT Staff" then I should automatically receive access to the server room; if I am removed from that group then I should automatically lose access to the server room).
The problem is that when doing a DirectorySynchronization against groups, you receive back the group that has had a member added or removed, and when you then grab the members list you get the list of all members currently in that group, not just the members that were added or removed. This leads to quite an efficiency problem: in order to know whether a user has been added or removed, I have to keep a local list of each group and all its members and compare it against the current list to see who has been added (not in the local list) and who has been deleted (in the local list, but not in the current members list).
I'm debating just storing the group membership details in a DataSet in memory and writing them to disk each time I've processed new membership changes. That way, if the service stops/crashes or the machine is rebooted, I can still bring the security database up to the current state of Active Directory by comparing the last information on disk against the current group membership list. However, this seems terribly inefficient - running through every member in the group to compare against what is in the DataSet, and then writing changes out to disk each time the list changes.
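For what it's worth, the delta computation itself is just two set differences; a sketch (written with LINQ for brevity, which would need a newer framework than the .NET 2.0 mentioned above):

using System;
using System.Collections.Generic;
using System.Linq;

public static class MembershipDiff
{
    // Compare the member list persisted after the last sync pass against
    // the list just read from Active Directory for the same group.
    public static void DiffMembers(HashSet<string> storedMembers, HashSet<string> currentMembers)
    {
        IEnumerable<string> added   = currentMembers.Except(storedMembers);
        IEnumerable<string> removed = storedMembers.Except(currentMembers);

        foreach (string dn in added)
            Console.WriteLine("grant access: " + dn);   // e.g. raise a 'member added' event
        foreach (string dn in removed)
            Console.WriteLine("revoke access: " + dn);  // e.g. raise a 'member removed' event
    }
}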
Has anyone dealt with this scenario before? Is there some way that I haven't found to retrieve only a delta of group members? What would you do in this situation to ensure that you never miss any changes while taking the smallest performance hit possible?
Edit: The AD might contain 500 users, or it might contain 200,000 users - it depends on the customer - and on top of that, the number of groups the average user is a member of varies.
You can set up auditing of successful account modifications in the Group Policy editor.
You can then monitor the Security log for entries and handle the log entries about account modifications.
E.g.
using System.Diagnostics;

// Watch the Security event log and react as new audit entries are written.
EventLog myLog = new EventLog("Security");
// set event handler; OnEntryWritten inspects each new entry for account changes
myLog.EntryWritten += new EntryWrittenEventHandler(OnEntryWritten);
myLog.EnableRaisingEvents = true;
Make sure that you have privileges to access the Security event log:
http://support.microsoft.com/kb/323076
I'd say it depends on how many Active Directory objects you need to keep track of. If it's a small number (fewer than 1,000 users) you can probably serialize your state data to disk with little noticeable performance hit. If you're dealing with a very large number of objects, it might be more efficient to create a simple persistence schema in something like SQL Server Express and use that.
You know there are products which help you with directory synchronization and user provisioning (Google those terms)? Not-invented-here and all that, and you may have to justify the investment in the current climate, but developing and maintaining code for which there is already a commercial solution is not, let us say, always the most cost-effective way in the long run.
Not all of them support eventing/provisioning, but they do support tracking changes and distributing them: it's not a big deal to create eventing solutions on top of those capabilities.
Microsoft has the Microsoft Identity Integration Server (MIIS), which is being repackaged as part of Identity Lifecycle Manager. It was originally built on a more general meta/master data management product, but is workable. IBM has the Tivoli Directory Integrator (but you need to keep up with the biyearly name changes!). Oracle has Oracle Identity Manager, and Sun an Identity Manager. Most of these are leading products bought by the major players to fill gaps in their portfolios.
Of course, these are enterprise-class products, meaning large and expensive, but generally pretty future-safe and extensible. If you don't need their full strength (yet!), you'll need to look at storing a copy yourself. In that case, have you considered storing your replica of the last known AD tree using AD LDS (formerly ADAM)? It's not in an optimal format for comparing differences, but a directory database will scale reasonably well, even the lightweight kind.
