Spreading business logic between DB and client - C#

Ok guys, another question of mine that seems to be very widely asked and generic. For instance, I have an accounts table in my DB. On the client (a desktop WinForms app) I have the appropriate functionality to add a new account; in the UI it's a couple of textboxes and one button.
Another requirement is account uniqueness, so I can't add two identical accounts. My question is: should I check for the account's existence on the client (by making a query and looking at the result), or should I make a stored procedure for adding a new account and check for the account's existence there? To me, it seems better to just make a stored proc, where I can do any needed checks and, once they all pass, add the new account. But there are pros and cons to that approach. For example, it will be very difficult to manage the language of the messages that the stored proc should produce.
POST EDIT
I already have the database constraints, etc. The issue is how to handle the situation where the user tries to add an account that already exists.
POST EDIT 2
The account uniqueness is just a simple, tiny example of business logic. My question is more about handling complicated business logic on that accounts domain.
So, how can I manage this misunderstanding?
I believe that my question is basic and has a proven solution. My tools are C# and .NET Framework 2.0. Thanks in advance, guys!

If the application is to be multi-user (i.e. not just a single desktop app with a single user, but a centralised DB with the app acting as a client, maybe on many workstations), then it is not safe to rely on the client (app) to check for things such as uniqueness, existence, free numbers, etc., as there is a distinct possibility of change happening between calls (unless read locking is used, but this often becomes more of an issue than a help!).
There is the ability, of course, to pre-check and then re-check (pre at app level, re at DB level), but of course this would generate extra DB traffic, so it depends on whether that is a problem for you.
When I write SPROCs that will return to an app, I always use the same framework - I include parameters for a return code and a message and always populate them. Then I can use standard routines to call them and even add in the parameters automatically. I can then either display the message directly on failure, or use the return code to localize it as required (or automate a response). I know some DBs (like SQL Server) will return Return_Code parameters, but I implement my own so I can leave the inbuilt ones for serious system-based errors and unexpected failures. It also allows me to have my own numbering system for return codes (i.e. grouping them to match enums in the code and/or grouping by severity).
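A minimal sketch of what such a calling routine could look like on the C# side, assuming a stored procedure with @ReturnCode and @Message output parameters (the procedure and parameter names here are only examples):

    using System.Data;
    using System.Data.SqlClient;

    public static class SprocCaller
    {
        // Calls a stored procedure that reports its outcome via output parameters.
        public static int AddAccount(string connectionString, string accountCode, out string message)
        {
            using (SqlConnection conn = new SqlConnection(connectionString))
            using (SqlCommand cmd = new SqlCommand("dbo.AddAccount", conn))
            {
                cmd.CommandType = CommandType.StoredProcedure;
                cmd.Parameters.AddWithValue("@AccountCode", accountCode);

                SqlParameter codeParam = cmd.Parameters.Add("@ReturnCode", SqlDbType.Int);
                codeParam.Direction = ParameterDirection.Output;
                SqlParameter msgParam = cmd.Parameters.Add("@Message", SqlDbType.NVarChar, 256);
                msgParam.Direction = ParameterDirection.Output;

                conn.Open();
                cmd.ExecuteNonQuery();

                message = (string)msgParam.Value;
                return (int)codeParam.Value;
            }
        }
    }

The return code can then be mapped to an enum or a localized message on the client, as described above.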
On web apps I have also used a different concept at times. For example, sometimes a request is made for a new account but multiple pages are required (a profile, for example). Here I often use a header table that generates a hidden user ID against the requested unique username, a timestamp and some way of recognising them (IP address etc). If after x hours it is not used, the header table deletes the row, freeing up the number (depending on the DB the number may never become usable again - this doesn't really matter as it is just used to keep the user data unique until the application is submitted) and the username. If completed correctly, then the records are simply copied across to the proper active tables.
//Edit - To Add:
Good point. But account uniqueness is just a very tiny, simple sample. What about more complex requirements for accounts in the business logic? For example, if I implement it just in client code (in the WinForms app) it will work fine, but if I want another kind of app (say a console version of my app, or a website) to work with these accounts, I would have to implement all this logic again in the new app! So, I'm looking for some method to keep the data right from both sides (the server DB side and the client side). – kseen yesterday
If the requirement is ever for multi-use, then it is best to separate it. Putting it into a separate Class Library project allows the DLL to be used by your WinForm, console program, service, etc. Although I would still prefer rock-face validation (DB level) as it is the closest point in time to any action and the least likely to be gazumped.
The usual way is to separate into three projects. A display layer [DL] (your WinForms project/console/service/etc), a Business Application Layer [BAL] (which holds all the business rules and calls to the DAL - it knows nothing about the display medium nor about the database technology), and finally the Data Access Layer [DAL] (this has all the database calls - it can be very basic, with a method for insert/update/select/delete at SQL and SPROC level and maybe some classes for passing data back and forth). The DL references only the BAL, which references the DAL. The DAL can be swapped for each technology (say a change from SQL Server to MySQL) without affecting the rest of the application, and business rules can be changed and set in the BAL with no effect on the DAL (the DL may be affected if new methods are added or display requirements change due to data changes etc). This framework can then be used again and again across all your apps and makes it easy to make quite drastic changes (like DB topology). A rough sketch of the layering is shown below.
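A very rough sketch of how the three projects could relate, with all class names being hypothetical:

    using System;

    // DAL project: knows only about the database technology.
    public class AccountDal
    {
        // Returns true if an account with this code already exists.
        public bool AccountExists(string accountCode) { /* SQL or SPROC call */ return false; }

        public void InsertAccount(string accountCode, string name) { /* SQL or SPROC call */ }
    }

    // BAL project: holds the business rules and references only the DAL.
    public class AccountService
    {
        private readonly AccountDal _dal = new AccountDal();

        public void AddAccount(string accountCode, string name)
        {
            // The business rule lives here, not in the WinForms code.
            if (_dal.AccountExists(accountCode))
                throw new InvalidOperationException("Account already exists.");
            _dal.InsertAccount(accountCode, name);
        }
    }

    // DL project (WinForms/console/service): references only the BAL, e.g.
    // new AccountService().AddAccount(txtCode.Text, txtName.Text);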

This type of logic is usually kept in code for easier maintenance (which includes testing). However, if this is just a personal throwaway application, do what is most simple for you. If it's something that is going to grow, it's better to put good practices in place now, to ease maintenance/change later.
I'd have an AccountsRepository class (for example) with an AddAccount method that did the insert/called the stored procedure. Using database constraints (as HaLaBi mentioned), it would fail on trying to insert a duplicate. You would then determine how to handle this issue (passing a message back to the UI that it couldn't add) in the code. This would allow you to put tests around all of this. The only change you made in the DB is to add the constraint. A rough sketch of what that could look like is below.
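A minimal sketch, assuming SQL Server and a hypothetical Accounts table with a unique constraint on AccountCode:

    using System.Data.SqlClient;

    public class AccountsRepository
    {
        private readonly string _connectionString;

        public AccountsRepository(string connectionString)
        {
            _connectionString = connectionString;
        }

        // Returns false if the account already exists (the unique constraint fires).
        public bool AddAccount(string accountCode)
        {
            try
            {
                using (SqlConnection conn = new SqlConnection(_connectionString))
                using (SqlCommand cmd = new SqlCommand(
                    "INSERT INTO Accounts (AccountCode) VALUES (@code)", conn))
                {
                    cmd.Parameters.AddWithValue("@code", accountCode);
                    conn.Open();
                    cmd.ExecuteNonQuery();
                    return true;
                }
            }
            catch (SqlException ex)
            {
                // 2627/2601 = unique key/index violation in SQL Server.
                if (ex.Number == 2627 || ex.Number == 2601)
                    return false;
                throw;
            }
        }
    }

The UI (or the tests) only ever sees the bool; how the duplicate is reported to the user stays in C# code.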
Just my 2 cents on a Thursday morning (before my cup of green tea). :)

I think the answer - like many - is 'it depends'.
For sure it is a good thing to push logic as deeply as possible towards the database. This prevents bad data getting in there no matter how the user tries.
This, in simple terms, results in applications that TRY - FAIL - RECOVER when attempting an invalid transaction. You need to check each call (stored proc, or triggered insert, etc.) and IF something bad happens, recover from that condition. Usually something like telling the user an issue occurred, resetting the form or something, and letting them try again.
I think, at a minimum, this needs to happen.
But, in addition, to make a really nice experience for the user, the app should also preemptively check certain data conditions ahead of time, and simply prevent the user from making bad inserts in the first place.
This is of course harder, and sometimes means double-coding of business rules (one copy in the app, and one in the DB constraints), but it can make for a dramatically better user experience. A sketch of the try-fail-recover part is below.
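A minimal sketch of the try - fail - recover flow in a WinForms button handler, reusing the hypothetical AccountsRepository from the answer above:

    // Inside the Form class; _repository is a field of type AccountsRepository.
    private void btnAdd_Click(object sender, EventArgs e)
    {
        try
        {
            // TRY: attempt the insert; the DB constraint remains the final gatekeeper.
            bool added = _repository.AddAccount(txtAccountCode.Text);

            if (!added)
            {
                // FAIL: the account already existed.
                MessageBox.Show("This account already exists. Please enter a different code.");

                // RECOVER: reset the input and let the user try again.
                txtAccountCode.SelectAll();
                txtAccountCode.Focus();
            }
        }
        catch (Exception ex)
        {
            // Unexpected failure: report it and leave the form usable.
            MessageBox.Show("Could not add the account: " + ex.Message);
        }
    }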

The solution is more about being methodical than technical:
Implement "Defensive Programming" & "Design by Contract"
If the chances of a business rule being changed over time are very low, then apply the constraint at the database level
Create a "validation or rules & aggregation" layer (or class) that will manage such conditions/constraints for the entity and/or a specific property
A much smarter way to do this would be to make a user control for the entity and/or specific property (in your case the "Account Code"), which would internally use the "validation or rules & aggregation" layer (or class)
This will allow you to ensure a systematic way of development and a more scalable & maintainable application architecture
If your application is a website then, along with placing the validation on the client side, it is always better to have the validation in the business layer or C# code as well
Whenever a validation fails you could implement & use a "custom error message" library, to ensure the message content is standard across the application
If the errors are raised from the database itself (i.e., from stored procedures), you could use the same "custom error message" class for converting the SqlException to the fixed or standardized message format; a sketch follows below
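A minimal sketch of such a "custom error message" helper, assuming SQL Server; the error numbers and messages below are only examples:

    using System.Collections.Generic;
    using System.Data.SqlClient;

    public static class ErrorMessages
    {
        // Maps known SQL Server error numbers to standardized, user-facing messages.
        private static readonly Dictionary<int, string> Map = new Dictionary<int, string>();

        static ErrorMessages()
        {
            Map.Add(2627, "This record already exists.");             // unique constraint violation
            Map.Add(547, "This record is referenced by other data."); // foreign key violation
        }

        public static string ToUserMessage(SqlException ex)
        {
            string message;
            if (Map.TryGetValue(ex.Number, out message))
                return message;
            return "An unexpected database error occurred.";
        }
    }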
I know that this is all a bit too much, but it will always be good for the future.
Hope this helps.

As you should not depend on a specific storage provider (DB [MySQL, MSSQL, ...], flat file, XML, binary, cloud, ...) in a professional project, all constraints should be checked in the business logic (model).
The model shouldn't have to know anything about the storage provider.
Uncle Bob said something about architecture and databases: http://blog.8thlight.com/uncle-bob/2011/11/22/Clean-Architecture.html

Related

How to create an application supporting multiple databases

I have a situation where I need to create an application which supports multiple databases. Multiple databases means the client can use any of the databases like Oracle, SQL Server, MySQL or PostgreSQL at first.
I was trying to use an ORM like NHibernate or MyBatis. But they have their limitations and need expertise to use.
So I decided to use the data providers provided by Microsoft, like ADO.NET, OLEDB, ODP.NET etc.
Is there any way so that my database logic stays the same for all the databases? I have tried IDbConnection, IDbCommand etc., but they have a problem in the case of Oracle (ref cursors).
Is there any way to achieve this? Some link or guide would be appreciated.
Edit:
There is a problem with the DbTypes because they are enums defined differently by the different data providers.
Well, real-life applications are complicated like that. Before you know it, you want to replace the UI with an app, expose your logic as a WCF service, change the e-mail service to another service provider, test pieces of your code while mocking the DAL, and swap the database for another one.
The usual way to deal with this is to pass all calls through an interface that separates the implementation from the caller. After that, you can implement the different DALs.
Personally I usually go with this approach:
First create a single DLL that contains all the interfaces. Basically the idea is to expose every call that your UI, app or whatever needs through an interface. From now on, your UI doesn't talk to databases or e-mail providers anymore.
If you need to get access to an implementation of an interface, you use a factory pattern. Never use 'new'; that will get you in trouble in the long run.
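A minimal sketch of the idea, with all interface and class names being hypothetical:

    // Interfaces DLL: the only assembly the UI references directly.
    public interface IAccountRepository
    {
        void AddAccount(string accountCode);
        bool AccountExists(string accountCode);
    }

    // Factory: the UI asks for an IAccountRepository and never 'new's a concrete class.
    public static class DataAccessFactory
    {
        public static IAccountRepository CreateAccountRepository()
        {
            // For now there is one implementation; later the factory can pick a
            // provider-specific DLL based on configuration (see below).
            return new SqlServerAccountRepository();
        }
    }

    // One of the provider implementations, living in its own DLL.
    public class SqlServerAccountRepository : IAccountRepository
    {
        public void AddAccount(string accountCode) { /* SQL Server specific code */ }
        public bool AccountExists(string accountCode) { /* SQL Server specific code */ return false; }
    }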
It's not trivial to create this, and it needs proper crafting. Usually I begin with a bare-minimum version, hack everything else into the UI as a first version, then move everything that touches a DB or a service into the right project while creating interfaces, and finally re-engineer everything until I'm 100% satisfied.
Interfaces should be built to last. Sure, changes will happen over time, but you really want to minimize these. Think about what the future will hold, read up on what other people came up with and ensure your interfaces reflect that.
Basically you now have a working piece of software that works with a single database, mail provider, etc. So far so good.
Next, re-engineer the factory. Basically you want to use the configuration settings to pick the right provider (the right DLL that implements your interface) for your data. A simple switch can suffice in most cases, as sketched below.
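For example (again with hypothetical names), the re-engineered factory could read a provider key from app.config and switch on it:

    using System.Configuration;

    public static class DataAccessFactory
    {
        public static IAccountRepository CreateAccountRepository()
        {
            // e.g. <appSettings><add key="DataProvider" value="SqlServer"/></appSettings>
            string provider = ConfigurationManager.AppSettings["DataProvider"];

            switch (provider)
            {
                case "SqlServer": return new SqlServerAccountRepository();
                case "MySql":     return new MySqlAccountRepository();
                default:
                    throw new ConfigurationErrorsException("Unknown data provider: " + provider);
            }
        }
    }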
At this point I usually make it a habit to make a ton of unit tests for the interfaces.
The last step is to create DLLs for the different database providers. One of these will be loaded at run-time in your application.
I prefer simple LINQ to SQL (I also use the library from LinqConnect) because it's pretty fast. I simply start by copy-pasting the other database provider, and then re-engineer it until it works. Personally I don't believe in a magic 'support all SQL databases' solution anymore: in my experience, some databases will handle certain queries much, much faster than others - which means that you will probably end up with some custom code for each database anyway.
This is also the point where your unit tests are really going to pay off. Basically, you can just start with copy-paste and give it a test. If you're lucky, everything will run right away with decent performance... if not, you know where to start.
Build to last
Build things to last. Things will change:
Think about updates and test them. Prefer automatic tests.
You don't want to tinker with your Factory every day. Use Reflection, Expressions, Code generation or whatever your poison is to save yourself the trouble of changing code.
Spend time writing tests. Make sure you cover the bulk. I cannot stress this enough; under pressure people usually 'save' time by not writing tests. You'll notice that this time that you 'save' will double back on you as support when you've gone live. Every month.
What about Entity Framework
I've seen a lot of my customers get into trouble with performance because of this. In the many times that I've tested it, I had the same experience. I noticed customers hacking around EF for a lot of queries to get a bit of decent performance.
To be fair, I gave up a few years ago, and I know they have made considerable performance improvements. Still, I would test it (especially with complex queries) before considering it.
If I were to use EF, I'd implement all the EF stuff in a 'database common DLL', and then derive classes from that. As I said, not all databases are the same with queries - and you might want to implement some hacks that are necessary to get decent performance. Your tests will tell.
Bonuses
Programming through interfaces has a lot of other advantages in combination with proxies. To name a few, you can easily create log sinks, caching, statistics, WCF, etc. by simply implementing the same interface. And if you end up hating your current OR mapper some day, you can just throw it away without touching a single line of your app.
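For instance, a logging proxy could wrap any existing implementation of the (hypothetical) IAccountRepository without the rest of the app noticing:

    public class LoggingAccountRepository : IAccountRepository
    {
        private readonly IAccountRepository _inner;

        public LoggingAccountRepository(IAccountRepository inner)
        {
            _inner = inner;
        }

        public void AddAccount(string accountCode)
        {
            System.Diagnostics.Trace.WriteLine("AddAccount: " + accountCode);
            _inner.AddAccount(accountCode);
        }

        public bool AccountExists(string accountCode)
        {
            bool exists = _inner.AccountExists(accountCode);
            System.Diagnostics.Trace.WriteLine("AccountExists(" + accountCode + ") = " + exists);
            return exists;
        }
    }

    // The factory simply wraps the real repository:
    // return new LoggingAccountRepository(new SqlServerAccountRepository());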
I believe Microsoft's Data Access Components would be suitable for you.
https://en.wikipedia.org/wiki/Microsoft_Data_Access_Components
How about writing microservices and connecting them by using a REST API?
You (and maybe your team) could provide a core application which handles the logic and the UI. This is still based on your current technology. But instead of adding some kind of database connection directly, you could provide multiple types of microservices (based on ASP.NET or ASP.NET Core) exposing a REST API, and get your data for each database from such a microservice. So you would develop one microservice for e.g. MySQL and another one for MSSQL, and when a new customer comes up with Oracle you write a new small microservice which handles your expected API.
More info (based on .net core) is here: https://docs.asp.net/en/latest/tutorials/first-web-api.html
I think which kind of technology you decide to use is a team discussion, but today I would recommend writing a microservice. It also makes attaching a new app for e.g. a mobile device much easier :)
Yes, it's possible.
Right now I am working with the same scenario, where all my logic-related data (you could call it metadata) resides in one DB and the data resides in another DB.
What you need to do: keep the connection-related parameters in two different files (call them prop files), and have a concrete connection class which takes its parameters from these prop files. Then, wherever you need a connection, you just supply the prop file and it will create the DB connection as desired.

How to implement a C# Winforms database application with synchronisation?

Background
I am developing a C# WinForms application - currently up to about 11,000 LOC; the UI and logic are about 75% done, but there is no persistence yet. There are hundreds of attributes on the forms. There are 23 entities/data classes.
Requirement
The data needs to be kept in a SQL database. Most of the users operate remotely and we cannot rely on them having a connection, so we need a solution that maintains a database locally and keeps it in sync with the central database.
Edit: Most of the remote users will only require a subset of the database in their local copy. This is because if they don't have access permissions (as defined and stored in my application) to view other users' records, they will not receive copies of them during synchronisation.
How can I implement this?
Suggested Solution
I could use the Microsoft Entity Framework to create a database and the link between database and code. This would save a lot of manual work as there are hundreds of attributes. I am new to this technology but have done a "hello world" project in it.
For data sync, each entity would have an integer primary key ID. Additionally it would have a secondary ID column which relates to the central database. This secondary column would contain nulls in the central database but would be populated in the local databases; see the sketch below.
For synchronisation, I would write code which copies the records and assigns the IDs accordingly. I would need to handle conflicts.
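A minimal sketch of what such an entity might look like (the class and property names are only illustrative):

    public class SomeEntity
    {
        // Local primary key, assigned by the local database.
        public int Id { get; set; }

        // ID of the corresponding row in the central database.
        // Null in the central DB itself; populated in the local copies
        // once the record has been synchronised.
        public int? CentralId { get; set; }

        // ... the entity's actual attributes ...
        public string Name { get; set; }
    }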
Can anyone foresee any stumbling blocks to doing this? Would I be better off using one of the recommended solutions for data synchronisation, and if so, would these work with the Entity Framework?
Synching data between relational databases is a pain. Your best course of action probably depends on: how many users will there be? How probable are conflicts (i.e. users working offline on the same data)? Also, possibly, what kind of manpower do you have (do you have proper DBAs/SQL Server devs standing by to assist with the SQL part, or are you just .NET devs)?
I don't envy you this task, it smells of trouble. I'd especially be worried about data corruption and spreading that corruption to all clients rapidly. I'd put extreme countermeasures in place before any data in the remote DB gets updated.
If you predict a lot of conflicts - the same chunk of data gets modified many times by multiple users - I'd probably at least consider creating an additional 'merge' layer to figure out, what is the correct order of operations to perform on the remote db.
One thought - it might be very wrong and crazy, but it's just the thing that popped into my mind - would be to use JSON Patch on the entities, be they actual domain objects or some configuration containers. All the changes the user makes are recorded as JSON Patch statements, then applied to the local DB, and when the user is online they are submitted - with timestamps! - to a merge provider. The JSON Patch statements from different clients could be grouped by entity ID and sorted by timestamp, and the user could get feedback on what other operations from different users are queued - and manually make amendments. Those grouped statements could even be stored in files in a git repo. Then, at some pre-defined intervals, or triggered manually, the update would be performed by a server-side app and saved to the remote DB. After this the users' local copies would be refreshed from the server.
It's just a rough idea, but I think that you need something with similar capability - it doesn't have to be JSON Patch + Git, you can do it in probably hundreds of ways. I don't think, though, that you will get away with just going through the local/remote DB and making updates/merges. Imagine the scenario where one user updates some data (let's say 20 fields) offline, another makes completely different updates to 20 fields, and 10 of those are common between the users. Now, what should the sync process do? Apply the earlier and then the later changes? I'm fairly certain that both users would be furious, because their input was 'atomic' - either everything is changed, or nothing is. The later 'commit' must either be rejected, or users should have an option to amend it in respect of the new data. That highly depends on what your data is, and as I said - on the number/behaviour of the users. Duh, even time zones become important here - if you have users all in one time zone you might get away with having predefined times of day when the system synchs - but there's no way you'll convince people with many different business hours that the 'synch session' will happen at e.g. 11 AM, when they are usually giving a presentation to management or sth ;)
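A very loose sketch of how such change statements could be recorded on the client side, assuming nothing about the actual merge provider (all names are hypothetical):

    using System;
    using System.Collections.Generic;

    // One recorded change, roughly analogous to a JSON Patch "replace" operation.
    public class ChangeStatement
    {
        public Guid EntityId;        // which entity was touched
        public string Path;          // e.g. "/PhoneNumber"
        public object NewValue;
        public DateTime TimestampUtc;
        public string UserName;
    }

    public class ChangeLog
    {
        private readonly List<ChangeStatement> _pending = new List<ChangeStatement>();

        public void Record(Guid entityId, string path, object newValue, string userName)
        {
            ChangeStatement change = new ChangeStatement();
            change.EntityId = entityId;
            change.Path = path;
            change.NewValue = newValue;
            change.TimestampUtc = DateTime.UtcNow;
            change.UserName = userName;
            _pending.Add(change);

            // Apply the change to the local DB here; submit _pending to the
            // merge provider (grouped by EntityId, sorted by TimestampUtc)
            // when the user is back online.
        }
    }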

Must I check the user input in the UI, in the programming, and on the T-SQL (check constraint) side at the same time?

Or are the first two checks fair enough for securing against hackers? And what about performance issues when I use check constraints on columns?
No, you do not have to check in all three places. However, your application and your database might be better off if you do.
Checking in the UI - especially with scripts or HTML - is very good for interactively showing the user how to correct their input. It saves a round trip to the web server, and is a performance enhancement because the user's CPU is used to run the validation code instead of the server.
Checking in the "programming" (here I believe you are referring to your business domain logic) is important if you ever want to add a new interface to your logic. For instance, if you have a well designed business layer that is consumed by a web application, later on you could also consume the same business layer with a WCF interface and have confidence that you aren't receiving invalid data.
And finally, validation rules in the database are important if you ever want to batch load data directly into the database. Perhaps a partner business or client sends you a text file that you need to load. Having the rules in place in the database will keep you from corrupting your data if the load routine has a defect.

C# - update single DB field or whole object?

This might seem like an odd question, but it's been bugging me for a while now. Given that I'm not a hugely experienced programmer, and I'm the sole application/C# developer in the company, I felt the need to sanity-check this with you guys.
We have created an application that handles shipping information internally within our company; this application works with a central DB at our IT office.
We've recently switched DB from MySQL to MSSQL, and during the transition we decided to forgo the web services previously used and connect directly to the DB using an application role. For added security we only allow access to stored procedures, and all CRUD operations are handled via these.
However, we currently have stored procedures for updating every field of one of our objects, which is quite a few stored procedures, and as such quite a bit of work on the client for the DataRepository (needing separate code to call each procedure and pass the right params).
So I'm thinking, would it be better to simply update the entire object (in this case, an object represents a table, for example shipments), given that a lot of that data will be changed one field at a time after the initial insert, and that we are trying to keep network usage down, as some of the clients will run with limited internet?
What's the standard practice for this kind of thing? Or is there a method that I've overlooked?
I would say that updating all the columns for the entire row is a much more common practice.
If you have a proc for each field, and you change multiple fields in one update, you will have to wrap all the stored procedure calls into a single transaction to avoid the database getting into an inconsistent state, as sketched below. You also have to detect which field changed (which means you need to compare the old row to the new row).
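A minimal sketch of what wrapping several per-field procedure calls in one transaction looks like on the client (the procedure and parameter names are hypothetical), which illustrates the extra work involved:

    using System.Data;
    using System.Data.SqlClient;

    public static class ShipmentUpdater
    {
        public static void UpdateStatusAndCarrier(string connectionString, int shipmentId,
                                                  string newStatus, string newCarrier)
        {
            using (SqlConnection conn = new SqlConnection(connectionString))
            {
                conn.Open();
                using (SqlTransaction tx = conn.BeginTransaction())
                {
                    try
                    {
                        // One call per changed field - all must succeed or none.
                        ExecProc(conn, tx, "dbo.Shipment_UpdateStatus", shipmentId, "@Status", newStatus);
                        ExecProc(conn, tx, "dbo.Shipment_UpdateCarrier", shipmentId, "@Carrier", newCarrier);
                        tx.Commit();
                    }
                    catch
                    {
                        tx.Rollback();
                        throw;
                    }
                }
            }
        }

        private static void ExecProc(SqlConnection conn, SqlTransaction tx, string procName,
                                     int shipmentId, string paramName, string value)
        {
            using (SqlCommand cmd = new SqlCommand(procName, conn, tx))
            {
                cmd.CommandType = CommandType.StoredProcedure;
                cmd.Parameters.AddWithValue("@ShipmentId", shipmentId);
                cmd.Parameters.AddWithValue(paramName, value);
                cmd.ExecuteNonQuery();
            }
        }
    }

A single "update the whole row" procedure avoids both the transaction plumbing and the change detection.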
Look into using an Object-Relational Mapper (ORM) like Entity Framework for these kinds of operations. You will find that there is no general consensus on whether ORMs are a great solution for all data access needs, but it's hard to deny that they solve the problem of CRUD pretty comprehensively.
Connecting directly to the DB over the internet isn't something I'd switch to in a hurry.
"we decided to forgo the webservices previously used and connect directly to the DB"
What made you decide this?
If you are intent on this model, then a single SPROC to update an entire row would be advantageous over one per column. I have a similar application which uses SPROCs in this way; however, the data from the client comes in via XML, and a middleware application on our server end deals with updating the DB.
The standard practice is not to connect to DB over the internet.
Even for small app, this should be the overall model:
Client app -> over internet -> server-side app (WCF web service) -> LAN/localhost -> SQL DB
Benefits:
Your client app would not even know that you have switched DB implementations.
It would not know anything about DB security, etc.
You, as a programmer, would not be thinking in terms of "rows" and "columns" on the client side. Those would be objects and fields.
You would be able to use different protocols: send only single-field updates between the client app and the server app, but update entire rows between the server app and the DB.
Now, given your situation, updating entire row (the entire object) is definitely more of a standard practice than updating a single column.
It's better to only update what you changed if you know what changed (if using an ORM like Entity Framework, for example), but if you're going down the stored proc route then yes, definitely update everything in a row at once; that's granular enough.
You should take the switch as an opportunity to change over to LINQ to Entities, however, if you're already making a big change, and ditch stored procedures in the process wherever possible.

Software Design - Three-tier architecture

Layer 3 - Interface
Layer 2 - Business logic (get input from user, check if valid, send to database function)
Layer 1 - Database (creates, updates, gets records etc)
A user can add many contact phone numbers. If it is the first phone number added, the system will automatically set that phone number as primary, and thereafter the user can change his primary phone number on his own.
When the first phone number record is created in the database, which layer is responsible to check if the phone number needs to be set to primary or not?
Business layer. The database should be storing data, not making decisions. The interface just interacts with the user. The business layer makes the rules.
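A minimal sketch of how that rule could live in the business layer (all names are hypothetical):

    public class PhoneNumber
    {
        public int UserId;
        public string Number;
        public bool IsPrimary;
    }

    // Layer 1 (database) just stores and retrieves rows.
    public interface IPhoneNumberRepository
    {
        int CountForUser(int userId);
        void Insert(PhoneNumber phone);
    }

    // Layer 2 (business logic) owns the rule.
    public class ContactService
    {
        private readonly IPhoneNumberRepository _repository;

        public ContactService(IPhoneNumberRepository repository)
        {
            _repository = repository;
        }

        public void AddPhoneNumber(int userId, string number)
        {
            // Business rule: the first phone number a user adds becomes primary.
            bool isFirst = _repository.CountForUser(userId) == 0;

            PhoneNumber phone = new PhoneNumber();
            phone.UserId = userId;
            phone.Number = number;
            phone.IsPrimary = isFirst;

            _repository.Insert(phone);
        }
    }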
Your business logic should handle it when the phone number gets added to the user. You can verify it works by providing unit/integration tests for it.
I guess it depends what you're aiming for. As it is, your business layer should handle the phone number being validated/set as primary. The database would still need to store that information in some way, I think.
However, in certain cases like security verification you'll need to do some checks at the interface, logic and database levels. Yes, it is redundant, but I think you'll want to guarantee that hackers who break your interface or logic can't go around messing with your underlying data.
The data layer in an N-tier application isn't really supposed to do anything other than put values in and get values out. Think of it as a persistence service.
Everything else goes into what's known as the business and/or logic layer, except for UI code (you're supposed to keep those things separate by following something like MVP, MVC or MVVM).
Though this simple problem actually raises an issue with transactions: your data model should eventually prevent this, but if you cannot complete the operation as an atomic unit there is always the chance that two phone numbers are added at the same time and they both end up as primary (depending on the latency between the application and the database). To gracefully handle these situations you need to at least think about error recovery (error handling) that propagates these problems in a meaningful manner. Don't just crash your application.
Just to add to the above answers, you might want to consider persisting input regardless of validity. It can add a bit more development (especially if you need to clean the data), but it can be worth it depending on your application.
