Layer 3 - Interface
Layer 2 - Business logic (get input from user, check if valid, send to database function)
Layer 1 - Database (creates, updates, gets records etc)
A user can add many contact phone numbers. If it is the first phone number added, the system will automatically set that phone number to primary; thereafter the user can change his primary phone number on his own.
When the first phone number record is created in the database, which layer is responsible for checking whether the phone number needs to be set to primary or not?
Business layer. The database should be storing data, not making decisions. The interface just interacts with the user. The business layer makes the rules.
Your business logic should handle it when the phone number gets added to the user. You can verify it works by providing unit/integration tests for it.
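For illustration, a minimal sketch of where that rule might live (the repository interface and all names here are assumptions, not from the original posts):

using System;

public interface IPhoneNumberRepository
{
    int CountForUser(int userId);
    void Insert(int userId, string number, bool isPrimary);
}

public class PhoneNumberService
{
    private readonly IPhoneNumberRepository _repository;

    public PhoneNumberService(IPhoneNumberRepository repository)
    {
        _repository = repository;
    }

    public void AddPhoneNumber(int userId, string number)
    {
        if (string.IsNullOrEmpty(number))
            throw new ArgumentException("Phone number is required.", "number");

        // Business rule: the user's first phone number becomes the primary one.
        bool isFirst = _repository.CountForUser(userId) == 0;
        _repository.Insert(userId, number, isFirst);
    }
}

A unit test can then assert that the first number comes back flagged as primary, without any UI or database involved.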
I guess it depends what you're aiming for. As it stands, your business layer should handle the phone number being validated and set as primary. The database would still need to store that information in some way, I think.
However, in certain cases like security verification you'll need to do some checks at the interface, logic and database levels. Yes, it is redundant, but I think you'll want to guarantee that hackers who break your interface or logic can't go around messing with your underlying data.
The data layer in an N-tier application isn't really supposed to do anything other than put values in and get values out. Think of it as a persistence service.
Everything else, apart from UI code, goes into what's known as the business and/or logic layer (you're supposed to keep those things separate by following something like MVP, MVC or MVVM).
This simple problem actually raises an issue with transactions. Your data model should ultimately prevent it, but if you cannot complete the operation as an atomic unit, there is always the chance that two phone numbers are inserted at the same time and both end up as primary (depending on the latency between the application and the database). To handle these situations gracefully you need at least to think about error recovery (error handling) that propagates these problems in a meaningful manner. Don't just crash your application.
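One way to close that race, sketched below with plain ADO.NET, is to run the "is this the first number?" check and the insert inside a single serializable transaction. The table and column names (PhoneNumbers, UserId, Number, IsPrimary) are assumptions for illustration.

using System.Data;
using System.Data.SqlClient;

public class PhoneNumberWriter
{
    private readonly string _connectionString;

    public PhoneNumberWriter(string connectionString)
    {
        _connectionString = connectionString;
    }

    public void AddPhoneNumber(int userId, string number)
    {
        using (SqlConnection conn = new SqlConnection(_connectionString))
        {
            conn.Open();
            // Serializable isolation: two concurrent callers cannot both see
            // "no rows yet" and both insert a primary number.
            using (SqlTransaction tx = conn.BeginTransaction(IsolationLevel.Serializable))
            {
                SqlCommand count = new SqlCommand(
                    "SELECT COUNT(*) FROM PhoneNumbers WHERE UserId = @userId", conn, tx);
                count.Parameters.AddWithValue("@userId", userId);
                bool isFirst = (int)count.ExecuteScalar() == 0;

                SqlCommand insert = new SqlCommand(
                    "INSERT INTO PhoneNumbers (UserId, Number, IsPrimary) " +
                    "VALUES (@userId, @number, @isPrimary)", conn, tx);
                insert.Parameters.AddWithValue("@userId", userId);
                insert.Parameters.AddWithValue("@number", number);
                insert.Parameters.AddWithValue("@isPrimary", isFirst);
                insert.ExecuteNonQuery();

                tx.Commit();
            }
        }
    }
}

Where the database supports it, a unique index filtered on IsPrimary = 1 per user can act as a backstop, so the data model itself rejects a second primary number.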
Just to add to the above answers: you might want to consider persisting input regardless of validity. It can add a bit more development (especially if you need to clean the data), but it can be worth it depending on your application.
Related
What should we do when our UI is not task-based, i.e. when it has no tasks corresponding to our entity methods which, in turn, correspond to the ubiquitous language?
For example, let's say we have a domain model for WorkItem that has the properties: StartDate, DueDate, AssignedToEmployeeId, WorkItemType, Title, Description, CreatedbyEmployeeId.
Now, some things can change on the WorkItem, and broken down, it boils down to methods like:
WorkItem.ReassignToAnotherEmployee(string employeeId)
WorkItem.Postpone(DateTime newDateTime)
WorkItem.ExtendDueDate(DateTime newDueDate)
WorkItem.Describe(string description)
But on our UI side there is just one form with fields corresponding to our properties and a single Save button. So, a CRUD UI. Obviously, that leads to having a single CRUD REST API endpoint like PUT domain.com/workitems/{id}.
Question is: how to handle requests that come to this endpoint from the domain model perspective?
OPTION 1
Have a CRUD-like method WorkItem.Update(...)? (This, obviously, defeats the whole purpose of the ubiquitous language and DDD.)
OPTION 2
Have the application service that is called by the endpoint controller expose a method WorkItemsService.Update(...), but within that service call each of the domain model's methods that correspond to the ubiquitous language? Something like:
public class WorkItemService {
    ...
    public void Update(UpdateWorkItemRequest request) {
        WorkItem item = _workItemRepository.Get(request.WorkItemId);
        // I am leaving out the check for which properties actually changed,
        // as it's not crucial for this example.
        item.ReassignToAnotherEmployee(request.EmployeeId);
        item.Postpone(request.NewDateTime);
        item.ExtendDueDate(request.NewDueDate);
        item.Describe(request.Description);
        _workItemRepository.Save(item);
    }
}
Or maybe some third option?
Is there some rule of thumb here?
[UPDATE]
To be clear, the question can be rephrased this way: should a CRUD-like WorkItem.Update() ever become part of our model, even if our domain experts express it as "we want to be able to update a WorkItem", or should we always avoid it and go for what "update" actually means for the business?
Is your domain/sub-domain inherently CRUD?
"if our domain experts express it in a way we want to be able update a
WorkItem"
If your sub-domain aligns well with CRUD you shouldn't try to force a domain model. CRUD is not an anti-pattern and can actually be the perfect fit for certain sub-domains. CRUD becomes problematic when business experts are expressing rich business processes that are wrongly translated to CRUD UIs & backends by developers, leading to code/UL misalignment.
Note that business processes can also be expensive to discover & model explicitly. Sometimes (e.g. lack of resources) it may be acceptable to let those live in the heads of domain experts. They will drive a simple CRUD UI from paper-based processes as opposed to having the system guide them. CRUD may be perfectly fine here since although processes are complex, we aren't trying to model them in the system which remains simple.
I can't tell whether or not your domain is inherently CRUD, but I just wanted to point out that if it is, then embrace it and go for simpler business logic patterns (Active Record, Transaction Script, etc.). If you find yourself constantly wanting to map every bit of data with a single method call then you may be in a CRUD domain.
Isolate corruption
If you decide that a domain model will benefit your system, then you should stop corruption from spreading through it as early as you can. This is done with an anti-corruption layer, which in your case would be responsible for interpreting CRUD calls and transforming them into more meaningful business processes.
The anti-corruption layer should sit between the parts of the system you want to protect and the legacy/misbehaving/etc part. That would be option #2. In this case the anti-corruption code will most likely have to compare the current state with the new state to try and figure out what changes were done and how to correlate these to more explicit business processes.
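Concretely, the comparison step that the question's Option 2 left out might look something like this (an illustrative sketch reusing the question's WorkItem methods; the request type, its property names, and the assumption that Postpone moves StartDate are all mine):

public void Update(UpdateWorkItemRequest request)
{
    WorkItem item = _workItemRepository.Get(request.WorkItemId);

    // Anti-corruption step: diff the flat CRUD payload against current state
    // and translate each detected change into an explicit, UL-named operation.
    if (item.AssignedToEmployeeId != request.AssignedToEmployeeId)
        item.ReassignToAnotherEmployee(request.AssignedToEmployeeId);

    if (item.StartDate != request.StartDate)
        item.Postpone(request.StartDate);

    if (item.DueDate != request.DueDate)
        item.ExtendDueDate(request.DueDate);

    if (item.Description != request.Description)
        item.Describe(request.Description);

    _workItemRepository.Save(item);
}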
Like you said, option 1 is pretty much against the ruleset. Additionally, offering a generic update is no good for the clients of your domain entity.
I would go with an option-2-ish approach: have an application-level service, but reflect the UL in it. Your controller would need to call a meaningful application service method with a meaningful parameter/command that changes the state of the domain model.
I always try to think from the viewpoint of a client of my service/domain-model code. As that client, I want to know exactly what I am calling. A CRUD-like Update is counterintuitive, doesn't help you follow the UL, and is more confusing to the clients: they would need to know the code behind that update method to know what they are changing.
To your update: no, don't include a generic update (at least not with the name Update); always reflect business rules/processes. A client of your code would never know what it does.
If this is a specific business process that gets triggered from a specific controller API endpoint, you can name it that way. Say your Update is actually the business process DoAWorkItemReassignAndPostponeDueToEmployeeWentOnVacation(); then you could bundle the operation under that name, but don't go with a generic Update. Always reflect the UL.
Or are the first two checks fair enough for securing against hackers? And what about performance issues when I use check constraints on columns?
No, you do not have to check in all three places. However, your application and your database might be better off if you do.
Checking in the UI - especially with scripts or HTML - is very good for interactively showing the user how to correct their input. It saves a round trip to the web server, and is a performance enhancement because the user's CPU is used to run the validation code instead of the server.
Checking in the "programming" (here I believe you are referring to your business domain logic) is important if you ever want to add a new interface to your logic. For instance, if you have a well designed business layer that is consumed by a web application, later on you could also consume the same business layer with a WCF interface and have confidence that you aren't receiving invalid data.
And finally, validation rules in the database are important if you ever want to batch load data directly into the database. Perhaps a partner business or client sends you a text file that you need to load. Having the rules in place in the database will keep you from corrupting your data if the load routine has a defect.
OK guys, my next question seems to be very widely asked and generic. For instance, I have some accounts table in my DB - let's say it's an accounts table. On the client (a desktop WinForms app) I have the appropriate functionality to add a new account. Let's say in the UI it's a couple of textboxes and one button.
Another requirement is account uniqueness, so I can't add two identical accounts. My question is: should I check for the account's existence on the client (making some query and looking at the result), or make a stored procedure for adding a new account and check for the account's existence there? As for me, it's better to just make a stored proc; there I can make any needed checks and, after all the checks, add the new account. But there are pros and cons to that approach. For example, it will be very difficult to manage the language of the messages that the stored proc should produce.
POST EDIT
I already have the database constraints, etc. The issue is how to handle the situation where a user tries to add an account that already exists.
POST EDIT 2
Account uniqueness is given as just a tiny, simple example of business logic. My question is more about handling complicated business logic in that accounts domain.
So, how can I resolve this?
I believe that my question is basic and has a proven solution. My tools are C# and .NET Framework 2.0. Thanks in advance, guys!
If the application is to be multi-user (i.e. not just a single desktop app with a single user, but a centralised DB with the app acting as a client, maybe on many workstations), then it is not safe to rely on the client (app) to check for things such as uniqueness, existence, free numbers, etc., as there is a distinct possibility of change happening between calls (unless read locking is used, but this often becomes more of an issue than a help!).
There is of course the ability to pre-check and then re-check (pre at app level, re at DB level), but this would generate extra DB traffic, so it depends on whether that is a problem for you.
When I write SPROCs that will return to an app, I always use the same framework - I include parameters for a return code and message and always populate them. Then I can use standard routines to call them and even add in the parameters automatically. I can then either display the message directly on failure, or use the return code to localize it as required (or automate a response). I know some DBs (like SQL Server) will return Return_Code parameters, but I implement my own so I can leave the inbuilt ones for serious system-based errors and unexpected failures. This also allows me to have my own numbering system for return codes (i.e. grouping them to match enums in the code and/or grouping by severity).
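For illustration, the calling side of that framework might look like this in ADO.NET (the helper itself and the @ReturnCode/@ReturnMessage parameter names and sizes are invented for the sketch; a real call would also add the SPROC's own business parameters):

using System.Data;
using System.Data.SqlClient;

// Every SPROC carries the same pair of output parameters, so one helper
// can call any of them and hand back a code plus a message.
public int ExecuteSproc(string connectionString, string sprocName, out string message)
{
    using (SqlConnection conn = new SqlConnection(connectionString))
    using (SqlCommand cmd = new SqlCommand(sprocName, conn))
    {
        cmd.CommandType = CommandType.StoredProcedure;

        SqlParameter code = cmd.Parameters.Add("@ReturnCode", SqlDbType.Int);
        code.Direction = ParameterDirection.Output;
        SqlParameter msg = cmd.Parameters.Add("@ReturnMessage", SqlDbType.NVarChar, 255);
        msg.Direction = ParameterDirection.Output;

        conn.Open();
        cmd.ExecuteNonQuery();

        message = (string)msg.Value;
        return (int)code.Value; // map to an enum or localize as described above
    }
}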
On web apps I have also used a different concept at times. For example, sometimes a request is made for a new account but multiple pages are required (a profile, for example). Here I often use a header table that generates a hidden user ID against the requested unique username, a timestamp and some way of recognising the user (IP address, etc.). If after x hours it is not used, the header table row is deleted, freeing up the number (depending on the DB, the number may never become usable again - this doesn't really matter, as it is just used to keep the user data unique until the application is submitted) and the username. If completed correctly, the records are simply copied across to the proper active tables.
//Edit - To Add:
Good point. But account uniqueness is just a very tiny, simple sample. What about more complex requirements for accounts in the business logic? For example, if I implement it just in client code (in the WinForms app) it will work OK, but if I want another kind of my app (say a console version or a website) to work with these accounts, I would have to do all this logic again in the new app! So, I'm looking for some method to hold the data right from both sides (server DB side and client side). – kseen yesterday
If the requirement is ever for multi-use, then it is best to separate it out. Putting it into a separate Class Library project allows the DLL to be used by your WinForms app, console program, service, etc. Although I would still prefer rock-face validation (DB level), as it is the closest point in time to any action and the least likely to be gazumped.
The usual way is to separate into three projects: a display layer [DL] (your WinForms project/console/service/etc.), a Business Application Layer [BAL] (which holds all the business rules and calls to the DAL - it knows nothing about the display medium nor about the database technology), and finally the Data Access Layer [DAL] (this has all the database calls - it can be very basic, with a method for insert/update/select/delete at SQL and SPROC level and maybe some classes for passing data back and forth). The DL references only the BAL, which references the DAL. The DAL can be swapped for each technology (say, a change from SQL Server to MySQL) without affecting the rest of the application, and business rules can be changed and set in the BAL with no effect on the DAL (the DL may be affected if new methods are added or display requirements change due to data changes, etc.). This framework can then be used again and again across all your apps and makes it easy to make quite drastic changes (like DB topology).
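In skeleton form (all names invented for the sketch), the references look like this:

// DAL project: only storage calls; swappable per database technology.
public interface IAccountDal
{
    bool AccountExists(string accountCode);
    void InsertAccount(string accountCode, string name);
}

// BAL project: business rules only; references the DAL and knows nothing
// about the display medium or the database technology.
public class AccountBal
{
    private readonly IAccountDal _dal;

    public AccountBal(IAccountDal dal)
    {
        _dal = dal;
    }

    public void AddAccount(string accountCode, string name)
    {
        if (_dal.AccountExists(accountCode))
            throw new System.ApplicationException("Account already exists.");
        _dal.InsertAccount(accountCode, name);
    }
}

// DL project (WinForms/console/service): references only the BAL.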
This type of logic is usually kept in code for easier maintenance (which includes testing). However, if this is just a personal throwaway application, do what is simplest for you. If it's something that is going to grow, it's better to put good practices in place now, to ease maintenance/change later.
I'd have an AccountsRepository class (for example) with an AddAccount method that does the insert/calls the stored procedure. Using database constraints (as HaLaBi mentioned), it would fail when trying to insert a duplicate. You would then determine how to handle this issue in the code (passing a message back to the UI that it couldn't add). This would allow you to put tests around all of this. The only change you make in the DB is to add the constraint.
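A sketch of that idea, assuming SQL Server, where a unique-constraint violation surfaces as SqlException number 2627 (or 2601 for a unique index); the table and class names are illustrative:

using System.Data.SqlClient;

public class AccountsRepository
{
    private readonly string _connectionString;

    public AccountsRepository(string connectionString)
    {
        _connectionString = connectionString;
    }

    // Let the database constraint enforce uniqueness and translate the
    // violation into something the UI can show.
    public bool AddAccount(string accountCode, out string error)
    {
        try
        {
            using (SqlConnection conn = new SqlConnection(_connectionString))
            using (SqlCommand cmd = new SqlCommand(
                "INSERT INTO Accounts (AccountCode) VALUES (@code)", conn))
            {
                cmd.Parameters.AddWithValue("@code", accountCode);
                conn.Open();
                cmd.ExecuteNonQuery();
            }
            error = null;
            return true;
        }
        catch (SqlException ex)
        {
            if (ex.Number == 2627 || ex.Number == 2601) // duplicate key
            {
                error = "An account with this code already exists.";
                return false;
            }
            throw;
        }
    }
}

Both outcomes are easy to cover with tests, and the UI decides how to present the message.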
Just my 2 cents on a Thursday morning (before my cup of green tea). :)
I think the answer - like many - is 'it depends'.
For sure it is a good thing to push logic as deeply as possible towards the database. This prevents bad data no matter how the user tries to get it in there.
This, in simple terms, results in applications that TRY - FAIL - RECOVER when attempting an invalid transaction. You need to check each call (stored proc, triggered insert, etc.) and IF something bad happens, recover from that condition - usually something like telling the user an issue occurred, resetting the form or something, and letting them try again.
I think, at a minimum, this needs to happen.
But, in addition, to make a really nice experience for the user, the app should also preemptively check certain data conditions ahead of time and simply prevent the user from making bad inserts in the first place.
This is of course harder, and sometimes means double coding of business rules (once in the app, and once in the DB constraints), but it can make for a dramatically better user experience.
The solution is more of being methodical than technical:
Implement - "Defensive Programming" & "Design by Contract"
If the chances of a business rule changing over time are very low, then apply the constraint at the database level
Create a "validation or rules & aggregation layer (or class)" that will manage such conditions/constraints for entity and/or specific property
A much smarter way to do this would be to make a user-control for the entity and/or specific property (in your case the "Account-Code"), which would internally use the "validation or rules & aggregation layer (or class)"
This will allow you to ensure a "systematic-way-of-development" or a more "scalable & maintainable" application-architecture
If your application is a website, then along with placing validation on the client side, it is better to have validation in the business layer or C# code as well
Whenever a validation fails, you could implement & use a "custom-error-message" library to ensure message content is standard across the application
If the errors are raised from the database itself (i.e., from stored procedures), you could use the same "custom-error-message" class to convert the SQL exception to the fixed or standardized message format (a sketch follows below)
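A sketch of such a "custom-error-message" helper (the error numbers assume SQL Server; the class name and messages are placeholders):

using System.Collections.Generic;
using System.Data.SqlClient;

// One place that turns raw SqlExceptions into the application's
// standard message wording.
public static class ErrorMessages
{
    private static readonly Dictionary<int, string> SqlErrorMap;

    static ErrorMessages()
    {
        SqlErrorMap = new Dictionary<int, string>();
        SqlErrorMap.Add(2627, "This record already exists.");             // unique constraint
        SqlErrorMap.Add(547, "This record conflicts with related data."); // FK/check constraint
    }

    public static string ForSqlException(SqlException ex)
    {
        string message;
        if (SqlErrorMap.TryGetValue(ex.Number, out message))
            return message;
        return "An unexpected database error occurred.";
    }
}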
I know that this is all a bit much, but it will always be good for the future.
Hope this helps.
Since you should not depend on a specific storage provider (DB [MySQL, MSSQL, ...], flat file, XML, binary, cloud, ...) in a professional project, all constraints should be checked in the business logic (model).
The model shouldn't have to know anything about the storage provider.
Uncle Bob said something about architecture and databases: http://blog.8thlight.com/uncle-bob/2011/11/22/Clean-Architecture.html
I am creating a WPF application that uses a WCF service to interact with the data source. I use DI for both the client and the WCF server to ensure decoupled code, but I am unsure how to handle the data transfer from backend to user interface.
To keep the layers separate data is currently transferred from the database to the UI through several mapping steps. On the server side data entities are mapped to domain objects which again are mapped to service data contracts. On the client side WCF proxy classes are mapped to viewmodels.
Some developers at work claim that this "copying" of data between seemingly identical classes creates a maintenance problem because so many classes must be updated when a change is introduced. Instead they say you should use shared classes across the layers since we control both the client application and the WCF service. I too worry about the amount of work involved and see a potential performance penalty, but on the other hand using a shared class across the layers/abstractions might create tight coupling the way I see it. What is the best approach?
Using DTOs as business objects is not the best decision you can make.
From my experience I can say that usually when your objects are identical for all the layers then there is probably a problem with the architecture somewhere.
In a real business scenario it is quite unlikely that the business logic on a server and the business logic on a client have the same context and operate on the same objects. And if they have exactly the same structure as the database... hmmm... it sounds like a data-driven application.
But if it is a data-driven application, where clients access some data, modify it and save it back, then you probably don't really need this complicated layering. It sounds simple, so let's keep it simple. If it is a data-driven application, why not just create a WCF Data Services context on top of your database and let it do all the dirty work for you, so you just access your data over WCF without even thinking about DTOs, mappings, etc.?
If it is not a data-driven application, then you probably have some complicated business logic on your server side, and this business logic usually operates on objects that make sense only in its context. It just doesn't make sense to push these objects all the way through to the UI.
Instead, the UI will probably send commands to the server in order to ask the system to do something. For example, it will send a "DisableAccount(id=123)" command instead of loading AccountDTO, changing its IsEnabled flag to false and pushing it back.
If there is business logic, then it will probably be triggered by such a command from the client, which does not need to know how to disable accounts or how to do other things. It just knows it can command the system to do something.
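A bare-bones sketch of that command style (every type and member name here is invented for illustration):

public class Account
{
    public int Id;
    public bool IsEnabled = true;

    public void Disable()
    {
        // The business rule lives server-side; the client never flips
        // this flag on a DTO itself.
        IsEnabled = false;
    }
}

public interface IAccountRepository
{
    Account Get(int id);
    void Save(Account account);
}

public class DisableAccountCommand
{
    public int AccountId;
}

public class AccountCommandHandler
{
    private readonly IAccountRepository _repository;

    public AccountCommandHandler(IAccountRepository repository)
    {
        _repository = repository;
    }

    public void Handle(DisableAccountCommand command)
    {
        Account account = _repository.Get(command.AccountId);
        account.Disable();
        _repository.Save(account);
    }
}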
So in this scenario the client (the UI) just does not need the same objects the server has. It may need some data to display to the user, but that data will definitely be in a format that makes sense for the client's view, not for the business logic. It will probably contain some denormalized data, combined somehow.
Say, a User for the UI is not a DTO mapped to the Users table. It is another DTO, containing user data and statistics from different tables, processed somehow. The client does not need to know the internal structure of the server's data storage, so there is no need to expose it. Get the relevant data and send the appropriate commands; that's it.
Having said all this, I should underline that it is NOT a binary choice. For simple features you may use a simple approach; for the features where having business logic makes sense, you may do the other thing.
You do not have to choose one for everything. So you do not have to always create 3 similar objects just because it is "The Way" or always pass entities all the way through to the UI.
But what you will have to do is clearly separate the contexts and define which approach is going to be used where.
In 80% of cases you will probably end up with something simple (like WCF Data Services), where you don't need to do anything more, and that is fine, as in a lot of operations you just want to change the data.
But in the other 20% (the "core" of your application), where the real business logic lives, you may want this kind of separation not only for objects but also for responsibilities between your layers.
All that mapping indeed creates a maintenance burden. Whether or not it's warranted depends on what you are building, and how complex the business logic is.
However, it's very important to realize that once you start sharing data structures across layers and tiers, the architecture is no longer decoupled. If you do that, you'd essentially be building a monolithic application. Don't get me wrong: there's nothing wrong with building a monolithic application if all you're doing is a glorified CRUD application, but it's essential to make that decision explicitly.
There's at least these alternatives:
Maintain strict layering. The mapping cost remains, but the code is decoupled.
Build a monolithic application. Collapse everything you can collapse. Keep it as simple as possible. It's going to be tightly coupled, but it just may become so simple that it doesn't matter.
Do something radically different, like building a CQRS application or a SOA mashup.
Personally, I prefer the third option these days.
I see nothing sacred about layers. Having layer-specific versions of each and every entity in the model would increase the number of classes a great deal. It's unnecessary, in my view. It violates the DRY principle: why keep repeating yourself?
What does layer purity buy you?
So I'd say the best approach is to pass those model entities around without fear.
Say you have 3 layers: UI, Business, Data.
Is it a sign of poor design if the Business layer needs to access the Session? Something about it doesn't feel right. Are there any guidelines to follow specifically tailored to web apps?
I use C# and .NET 2.0.
No. If you have a "Controller" layer, you should access it there. Get what you need from the Session and hand it off to your business layer.
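For example (a sketch; the names are invented):

using System.Web;

// The controller is the only place that touches Session; the business
// layer receives plain values.
public class OrderController
{
    private readonly OrderService _orderService;

    public OrderController(OrderService orderService)
    {
        _orderService = orderService;
    }

    public void PlaceOrder(int productId)
    {
        int customerId = (int)HttpContext.Current.Session["CustomerId"];
        _orderService.PlaceOrder(customerId, productId);
    }
}

public class OrderService
{
    public void PlaceOrder(int customerId, int productId)
    {
        // Business rules only; no HttpContext anywhere in here.
    }
}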
Sigh.
The broad consensus is going to be no; the business layer and the controller/web layer ought to be maintained differently, because they are separate concerns.
The fact is, you seem to label this as a "purity vs. reality" question which is incredibly short-sighted and slightly obnoxious. It also defies the point of asking the question; if you're not going to consider the opinions being presented, then why solicit them?
It's true that separating things out a little more carefully up front requires more up-front effort, more time, and ultimately may cost a little more. It's also true that you may not be able to perceive any immediate benefit from doing so. However, a wealth of sob stories shared among a huge number of programmers over several decades suggests that, where possible, your so-called "purity" reduces the pain when, five years down the line, you really have to knuckle down and do a bit of refactoring, and it's not remotely pleasant because of all the cracks through which your responsibilities are seeping.
A slightly better way of envisioning the layers for a web application might be to consider presentation, interaction, business rules and data, from top to bottom. Your data is the database, data access, etc., and the business rules enforce any additional constraints on that data, handle validation, calculation, etc. Interaction then branches between the presentation layer (which is basically your user interface) and the business logic, performing the use cases that drive your application.
Up to this point, the user interface is all immaterial; it doesn't matter if the user is entering, say, customer data in a command-line application, or navigating some multi-page web form with data stored in the session. Let's say you pick the latter: stick a web front end on it. Now what you're concerned with is writing relatively simple code to handle retrieving the requested data and presenting it to the user. The point is, your web application - the front end - is your entire user interface, sessions and all. Only at the point where you're ready to say "hey, let's stick that customer data into the database" do you go and invoke those oh-so-lovingly-crafted service layers, passing each bit of information that your web application has stashed: the user input, the name of the user making the change, all that. And your service layer deals with it. Or, alternatively, complains because you forgot a required field.
Because you've cleanly separated things out, the guts of your application can, as others have suggested, be remodelled (or "borrowed") for use in any other application, and the service layer remains stateless, clean, and ready to handle things. And it does your validation, so your validation is consistent everywhere. But it doesn't know who's logged into the web front end, or the console application, or the fancy rich client application running on a terminal, and it doesn't care, because that detail is important only to those applications.
Need to add a new validation rule? No problem; make the service layer do the validation, and handle the user interface concerns as necessary higher up in the chain. Need to change the way something's calculated? Change that at the business layer. Nothing else needs to be affected.
Smells funny to me. Maybe you need a presentation layer to manage the session and any other stateful information?
I think the Business logic should not be tied to a presentation choice, and if Session lives in that tier it will be tied.
I consider unnecessary use of Session to be a code smell in general; often query strings, cookies and viewstate are lighter weight and have better 'scope'.
As to Session's role in a business tier, it depends on which architectural guru you're reading at the moment. If the business logic tier is a place for code independent of the UI, then Session isn't a thing to introduce into the business tier.
For example, among a console app, an ASP.NET web app, a Windows service, and a Windows Forms app, only ASP.NET has Session.
That said, being able to support multiple UIs is a highly overrated feature, and it doesn't take perfect foresight to estimate whether you will ever port your app to a different user interface. If you're highly confident that your logic will only ever run in an ASP.NET app, then it is okay.
An exception would be unit testing. NUnit test runners constitute a different UI, and it is hard to simulate Request, Response, Session, Application, etc.
My feeling is that the business layer is the only place to put the session data access, because it really is part of your logic. If you bake it into the presentation layer, then if you change where that data is stored (let's say you no longer want to use session state), making the change is more difficult.
What I would do is this: for all the data that is in session state, I'd create a class to encapsulate its retrieval. This will allow for future changes.
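Something along these lines (a sketch; the names are invented):

using System.Web;

// One class owns where this state actually lives, so callers never touch
// Session directly and the backing store can be swapped later.
public class CurrentUserData
{
    private const string ClientIdKey = "ClientId";

    public int ClientId
    {
        get { return (int)HttpContext.Current.Session[ClientIdKey]; }
        set { HttpContext.Current.Session[ClientIdKey] = value; }
    }
}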
Edit: The UI should only be used to do the following:
1. Call the business layer.
2. Interact with the user (messages and events).
3. Manipulate visual objects on the screen.
I have heard people consider session data access part of the data layer, but this is semantics and depends on what you consider the data layer to be.
If you keep the three layers you listed, the 'Business' layer would likely be the most suited to dealing with Session objects.
Since a session has more to do with the actual framework tying the application together than with any actual business logic, you may want to create a control layer that translates data from the Session object to business data inputs to the business layer.
I think it depends on the usage, but yes, we access session from our business layer all the time. There's a "purity vs. reality" argument here too.
For example, in our application we have a database per client, but one code base. So we have the client info in the session for each user. Sure, we could then pull that data out of session in the web layer and then pass it down to the business layer. Of course, the business layer doesn't even care about it, but it's needed by the data layer to connect to the correct database. So then the business layer would need to pass it down to the data layer. Seems like lots of parameter passing for no good business reason. In our case our data layer checks the session object for the connection string and runs from there. We coded around the issue of not having session if it's not a web app (and we have windows services and .exe helper apps) as follows:
// Excerpt: _ConnectionStringName, debugLogging and log (a log4net-style logger)
// are fields on the containing class; requires System.Configuration and System.Web.
protected virtual string GetConnectionString()
{
    string connectionString;
    string connectionStringSource;

    // In app.config?
    if (ConfigurationManager.AppSettings[_ConnectionStringName] != null &&
        ConfigurationManager.AppSettings[_ConnectionStringName] != "")
    {
        connectionString = ConfigurationManager.AppSettings[_ConnectionStringName];
        connectionStringSource = "Config settings";
    }
    // Nope? Check Session
    else if (null != HttpContext.Current && null != HttpContext.Current.Session &&
             null != HttpContext.Current.Session[_ConnectionStringName])
    {
        connectionString = (string)HttpContext.Current.Session[_ConnectionStringName];
        connectionStringSource = "Session";
    }
    // Nope? Check Thread
    else if (null != System.Threading.Thread.GetData(
                 System.Threading.Thread.GetNamedDataSlot(_ConnectionStringName)))
    {
        connectionString = (string)System.Threading.Thread.GetData(
            System.Threading.Thread.GetNamedDataSlot(_ConnectionStringName));
        connectionStringSource = "ThreadLocal";
    }
    else
    {
        throw new ApplicationException("Can't find a connection string");
    }

    if (debugLogging)
        log.DebugFormat("Connection String '{0}' found in {1}", connectionString,
            connectionStringSource);

    return connectionString;
}
Accessing the session from the business layer is definitely a code smell.
If you need to 'switch' the business layer as stated, your presentation layer or controller is all that needs to worry about what the session variable is.
For example, say you want to use one set of objects for one session variable and another set for another variable. Your controller could call a service layer and pass in the variable. The service layer would then return the appropriate business object.
Your business layer has no business knowing anything about the session in the same sense that your presentation layer has no business knowing about how you are connecting to the DB.
The session object is tied to a specific UI implementation, so having your business layer tied to your session is a smell.
Plus, you probably should be able to unit-test your business layer - how are you going to do that if there are dependencies on a session?
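One common way to get both testability and UI independence is to hide the session behind an interface that the business layer owns (a sketch; all names invented):

// The business layer depends on this abstraction, never on HttpContext.
public interface ICurrentUserContext
{
    string UserName { get; }
}

// The web project implements it over Session.
public class SessionUserContext : ICurrentUserContext
{
    public string UserName
    {
        get { return (string)System.Web.HttpContext.Current.Session["UserName"]; }
    }
}

// Unit tests (or a console app) supply a plain fake instead.
public class FakeUserContext : ICurrentUserContext
{
    private readonly string _userName;

    public FakeUserContext(string userName)
    {
        _userName = userName;
    }

    public string UserName
    {
        get { return _userName; }
    }
}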