LINQ to SQL ORM 3-layer question - C#

I am designing an HR system (desktop-based) for a mid-size organization. I have all the tables designed and was planning on using the O/RM designer in VS2008 to generate the entity classes (this is the first time I've worked with an O/RM; in fact, this is my first "big" project). I wanted to build the app with 3 layers (one of the company's programmers suggested not 3 but 4 or 5 layers), but after reading quite a lot of blog entries and a lot of questions here I've realized that this is not easy to do with LINQ to SQL, because of how the DataContext works and how difficult it is to pass objects between layers using LINQ to SQL.
Probably I'll just use the entity classes generated by the VS2008 O/RM designer and add any validation and business logic in partial classes. But wouldn't that be only 2 layers? The app will be used by about 10 users, so I don't think the 2-layer approach is a big issue for now.
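To illustrate what I mean, here's a rough sketch of the partial-class idea, assuming the designer generates an Employee entity with HireDate and Salary columns (hypothetical names); the DataContext calls OnValidate during SubmitChanges(), so the rules run on every save:

    using System;
    using System.Data.Linq; // ChangeAction

    public partial class Employee
    {
        // Called automatically by the DataContext during SubmitChanges().
        partial void OnValidate(ChangeAction action)
        {
            if (action == ChangeAction.Delete)
                return;

            if (HireDate > DateTime.Today)
                throw new InvalidOperationException("Hire date cannot be in the future.");
            if (Salary < 0m)
                throw new InvalidOperationException("Salary cannot be negative.");
        }
    }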
In the future, a web-based front end will be developed so candidates can apply for jobs online. I want to make it as scalable as possible. But the truth is I don't have a lot of time to waste making a decision; time's running out, hehe.
Having said all that, should I just use the entities generated by the VS2008 ORM?
So any suggestion or idea would be greatly appreciated. Thanks.

You're chewing over quite a lot with your line of questioning here. (Is there a concrete question hidden in there somewhere?)
With layers, I assume you mean physical boundaries, i.e. application, app/SOA/WCF server, data layer that lives on the SOA server, and a database somewhere.
Designing for the future might seem like a good idea, but DO make sure that there WILL be a need for all those layers somewhere down the line. Essentially, you do not need a WCF/SOA based approach if you're not exposing your application over the internet at some point. A web frontend can solve the same problem in many cases.
I'm not saying you will not need those layers at all, but you might not. If you really do, seams are your friend. You need to make "cut points" where you can define your boundaries. I commonly use the repository pattern to abstract the data access strategy, and use plain objects (POCOs) and interfaces that are persisted via technologies such as NHibernate. Using POCOs also makes it MUCH easier to transfer those objects over the wire at a later point, either standalone or as part of messages.
Defining service interfaces and calling through them solidifies your boundaries. When you are ready to move across machine/physical boundaries, you simply enforce those boundaries in the service implementations.
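For example, here's a rough sketch of those seams, with illustrative names (not from the original post): a POCO, a repository interface, and a service interface that can be implemented in-process today and sit behind WCF later:

    public class Candidate
    {
        public int Id { get; set; }
        public string Name { get; set; }
    }

    public interface ICandidateRepository
    {
        Candidate GetById(int id);
        void Save(Candidate candidate);
    }

    // The UI talks only to this interface; swapping the implementation
    // for a remote one later does not affect the callers.
    public interface ICandidateService
    {
        Candidate GetCandidate(int id);
        void Register(Candidate candidate);
    }

    public class CandidateService : ICandidateService
    {
        private readonly ICandidateRepository _repository;

        public CandidateService(ICandidateRepository repository)
        {
            _repository = repository;
        }

        public Candidate GetCandidate(int id) { return _repository.GetById(id); }
        public void Register(Candidate candidate) { _repository.Save(candidate); }
    }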

It sure sounds like a dangerous way to go - creating the tables first, then the domain, and finally the GUI.
I must admit I am no ORM expert, but the generated classes I've seen look more like data objects than domain classes. I would say you need another layer to stop all the logic from ending up in the GUI.

Designing the database access layer

I consider myself quite amateur when it comes to designing a system's architecture, and I currently find myself in the process of doing just that.
Particularly, I am trying to come up with an efficient and maintainable way to re-implement all classes that have methods/functions that query the database to read data, then send it upstream for another layer to process it, and finally receive the processed data to write it back to the database.
Surely this generic problem has already been solved. I intend to follow a DDD approach, so the methods accessing the database are part of an "Infrastructure" layer. Is there an optimal way of designing a system (or structure of classes) to accomplish this? Should I have just one gateway to read/write from the database that all classes refer to, or should each component have its own way of communicating with the database? Is there a standard approach to this?
I am mindful that the question might be a bit broad, but surely the experts out there have gone through this and are able to help.
The following items should be considered:
DAO pattern - Create a DAO layer of DAO objects (one per domain object) that upper layers (such as a service layer) can make use of (see the sketch after this list).
Architecture - If you are thinking about a microservices architecture, then the UI, service, DB access (DAO) & DB will together form a single deployable unit. The design pattern should therefore be aligned with the chosen architectural approach.
API Gateway - An API gateway (aligned with the architectural approach). Think about functional use cases while designing APIs, rather than just providing CRUD operations or technology-specific APIs.
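For example, a minimal DAO sketch for one domain object using plain ADO.NET; the table and column names are hypothetical:

    using System.Data.SqlClient;

    public class Employee
    {
        public int Id { get; set; }
        public string Name { get; set; }
    }

    public interface IEmployeeDao
    {
        Employee FindById(int id);
    }

    public class EmployeeDao : IEmployeeDao
    {
        private readonly string _connectionString;

        public EmployeeDao(string connectionString)
        {
            _connectionString = connectionString;
        }

        public Employee FindById(int id)
        {
            using (var connection = new SqlConnection(_connectionString))
            using (var command = new SqlCommand(
                "SELECT Id, Name FROM Employees WHERE Id = @id", connection))
            {
                command.Parameters.AddWithValue("@id", id);
                connection.Open();
                using (var reader = command.ExecuteReader())
                {
                    if (!reader.Read()) return null;
                    return new Employee
                    {
                        Id = reader.GetInt32(0),
                        Name = reader.GetString(1)
                    };
                }
            }
        }
    }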
As an accepted practice, your presentation layer, business logic and data access (some call it the back-end layer) should be fairly well separated. Microsoft's MVC is a sensible example of a dedicated effort to achieve this, and Google's AngularJS is another: MVVM used to write clean code on the client side, as opposed to ASP.NET MVC on the server side. So the best practices are clearly established here.
As for your question: designing something highly optimized does not require one particular way; it requires the experience of knowing many ways, and the wisdom to choose a single way, or multiple ways, or even to mix them to suit your needs.
As for your question about a gateway to data access, let me put it this way: maintaining multiple connections is a resource-consuming job, so ideally a single static-instance data class that manages the connection is an appropriate approach, with all the data-serving and manipulating methods put in there (see the sketch below).
In web development, we are familiar with web services exposing objects with operation/message contracts. Separation and encapsulation are the key here. But be sure nothing is perfect: though ASP.NET MVC boasts the best methodology for separating all these layers, it has Razor to contradict it, and yet Razor is necessary. Don't be paranoid about a tightly coupled back and front end or spaghetti code; it all makes sense when it suits your needs. The key is that only experience can teach us the optimal way or ways to do something. That's the returned value of my answer!
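As a rough sketch of that single-gateway idea (names are illustrative; note that ADO.NET connection pooling makes opening a pooled connection per call cheap, so in practice the "single connection" can be a single connection string):

    using System.Data;
    using System.Data.SqlClient;

    public sealed class DataGateway
    {
        private static readonly DataGateway instance =
            new DataGateway("Server=.;Database=Hr;Integrated Security=true");

        private readonly string _connectionString;

        private DataGateway(string connectionString)
        {
            _connectionString = connectionString;
        }

        // The single shared instance all classes refer to.
        public static DataGateway Instance { get { return instance; } }

        public DataTable Query(string sql)
        {
            using (var connection = new SqlConnection(_connectionString))
            using (var adapter = new SqlDataAdapter(sql, connection))
            {
                var table = new DataTable();
                adapter.Fill(table); // opens and closes the pooled connection
                return table;
            }
        }
    }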

How to implement a maintainable and loosely coupled application using DDD and SRP?

The reason for asking this question is that I've been wondering how to stitch all these different concepts together. There are many examples and discussions on e.g. DDD, Dependency Injection, CQRS, SOA and MVC, but not so many examples of how to put them all together in a flexible way.
My goal:
Develop modules that with little or no modification can stand on their own
Changing or reworking the UI should be as easy as possible (i.e. the UI should do as little as possible, and be "stupid")
Use documented patterns and principles
To make it easier to ask a concrete question, the main architecture now looks like this:
The example shows how to add a note to an employee. Employee Management is one bounded context. Employee has several properties, among those an ICollection<Note>.
The bounded context is, in my understanding, the logical place to separate code. Each BC is a module. Most of the time I find each of them can warrant its own UI if needed (i.e. some modules might be made available for Windows Phone).
The Domain holds all business logic.
The infrastructure holds the repository implementations, and services to send mail, save files and other utilities that do not belong in the domain. I'm thinking of making some of the common service features that I have to use in several domains (like sending e-mail) into a sort of API that I could reference, to save implementing the same things across several BCs.
The query layer holds all queries except the GetById that I need in the repository to fetch an object. The query layer can query other persistence instances, and will probably need to change somewhat for each UI.
The WCF or Web API service is kind of my application layer; it might belong in infrastructure and not on the outside. This service also sets up the dependencies, so all the UI needs to do is ask for information and send commands.
The process starts with the blue arrows. Read the model since that has most of the information.
In step 1, the EmployeeDto in this example carries just some of the employee's properties, to show the user information about the employee they need to make a note on (like a note about new experience or something like that).
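To make the split concrete, here is a rough sketch with illustrative names: the repository only fetches the aggregate by id, while UI reads go through the query layer and return DTOs:

    using System;
    using System.Collections.Generic;

    public class Employee             // the domain aggregate (details elided)
    {
        public Guid Id { get; set; }
    }

    public class EmployeeDto          // read model shaped for the UI
    {
        public Guid Id { get; set; }
        public string Name { get; set; }
    }

    public interface IEmployeeRepository
    {
        Employee GetById(Guid id);    // the only query the repository exposes
        void Save(Employee employee);
    }

    public interface IEmployeeQueries
    {
        EmployeeDto GetEmployeeSummary(Guid employeeId);     // e.g. the "make a note" screen
        IEnumerable<EmployeeDto> Search(string nameFilter);  // UI-specific reads live here
    }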
So, the questions are:
Does implementing a layered architecture like this really involve so much mapping, or have I missed something?
Is it recommended (or even smart) to use a WCF service to run the main logic like this (it practically is my application service)?
Are there alternatives to WCF that avoid having my domain objects in my UI layer?
Is there anything wrong with this implementation? Any pitfalls to look out for?
Do you have any good examples to recommend that can help me understand how all these concepts are supposed to work together?
Update:
I've read through most of the articles now (quite a bit of reading), except for the paid book (that requires a bit more time). All of them are very good pointers, and thinking of WCF more as an adapter seems to be a good answer to question 2. JGauffin's work on his framework is also very interesting if I'm planning to go that route.
However, as mentioned in some of the comments beneath, I feel some of the examples tend towards recommending or implementing event and/or command sourcing, message buses and so on. To me it is overkill to plan for that level of scaling right now. Like many business applications, this one has a "large" number of users in internal-application terms (think a few thousand at most) working on a large set of data; it is not a highly collaborative domain in the sense of needing the event and command queues often associated with CQRS.
Based on the answers below, the approach I'll start with will be based on the model above and on answers like this:
I'll just have to cope with the mapping. The pros outweigh the cons.
I'll pull application services back into the infrastructure and consider WCF an "adapter".
I'll use command objects and send them to the application service, keeping domain objects out of the UI (see the sketch after this list).
To keep complexity down, I'll try to manage without event/command sourcing, message buses etc. for now.
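As a sketch of the command-object point above (illustrative names): the UI builds a plain command and hands it to the application service, so no domain object crosses the boundary:

    using System;

    public class AddNoteToEmployeeCommand
    {
        public Guid EmployeeId { get; set; }
        public string NoteText { get; set; }
    }

    public class Employee
    {
        public Guid Id { get; set; }
        public void AddNote(string text) { /* domain rules for notes live here */ }
    }

    public interface IEmployeeRepository
    {
        Employee GetById(Guid id);
        void Save(Employee employee);
    }

    public class EmployeeApplicationService
    {
        private readonly IEmployeeRepository _repository;

        public EmployeeApplicationService(IEmployeeRepository repository)
        {
            _repository = repository;
        }

        public void Handle(AddNoteToEmployeeCommand command)
        {
            var employee = _repository.GetById(command.EmployeeId);
            employee.AddNote(command.NoteText);  // behavior stays in the domain
            _repository.Save(employee);
        }
    }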
In addition I just wanted to link to this blog post by Udi Dahan about CQRS, I think things like this keeps complexity down unless they are really needed.
There is a trade-off between mapping and layers. One reason certain mappings exist is because appropriate abstractions aren't available or feasible. As a result, it is often easier to just explicitly map between layers than trying to implement a framework that infers the mappings, but I digress; this hinges on a philosophical discussion of the issue.
The WCF or WebAPI service should be very thin. Think of it as an adapter in a hexagonal architecture. It should delegate everything to an application service. There is conflation of the term service which causes confusion. Overall, the goal of WCF or WebAPI is to "adapt" your domain to a specific technology such as HTTP. WCF can be thought of as implementing an open host service in DDD lingo.
You mentioned WebAPI which is an alternative if you want HTTP. Most importantly, be aware of the role of this adapting layer. As you state, it is best to have the UI depend on DTOs and generally the contract of a service implemented with WCF or WebAPI or anything else. This keeps things simple and allows you to vary implementation of your domain without affecting consumers of open host services.
You should always be on the lookout for needless complexity. Layering is a trade-off and sometimes it can be overkill. For example, in an app that is primarily CRUD, there is no need to layer this much. Also, as stated above, don't think of WCF services as being application services. Instead, think of them as adapters between a transport technology and application services. In turn, think of application services as being a facade over your domain, regardless of whether your domain is implemented with DDD or a transaction script approach.
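As a rough sketch of that adapter role, reusing the hypothetical command and application service from the earlier sketch: the WCF service only translates the wire call and delegates, with no logic of its own:

    using System;
    using System.ServiceModel;

    [ServiceContract]
    public interface IEmployeeNotesService
    {
        [OperationContract]
        void AddNote(Guid employeeId, string noteText);
    }

    public class EmployeeNotesService : IEmployeeNotesService
    {
        private readonly EmployeeApplicationService _applicationService;

        public EmployeeNotesService(EmployeeApplicationService applicationService)
        {
            _applicationService = applicationService;
        }

        public void AddNote(Guid employeeId, string noteText)
        {
            // No business logic here: adapt the wire call and delegate.
            _applicationService.Handle(new AddNoteToEmployeeCommand
            {
                EmployeeId = employeeId,
                NoteText = noteText
            });
        }
    }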
What really helped me understand is the referenced article on the hexagonal architecture. This way, you can view your domain as being at the core and you layer things around it, adapting your domain to infrastructure and services. What you have seems to already follow these principles. A great, in-depth resource for all of this is Implementing Domain-Driven Design by Vaughn Vernon, specifically the chapter on architecture.
Does implementing a layered architecture like this really involve so much mapping, or have I missed something?
Yes. The thing is that it's not the same object. It's different representations of the same object, specialized for each use case. A view model contains logic to update the GUI, a DTO is specialized for transfer (it might get normalized to ease transfer), and so on. They might look the same, but they really aren't.
You could of course try to put all adaptations into a single class, but that would not be very fun to work with when your application grows.
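For example, a minimal sketch of two representations of the same employee, with illustrative property names: a dumb DTO for transfer, and a view model with change notification for the GUI:

    using System;
    using System.ComponentModel;

    public class EmployeeDto                // crosses the wire
    {
        public Guid Id { get; set; }
        public string Name { get; set; }
    }

    public class EmployeeViewModel : INotifyPropertyChanged  // drives the GUI
    {
        private string _name;

        public string Name
        {
            get { return _name; }
            set
            {
                _name = value;
                var handler = PropertyChanged;
                if (handler != null)
                    handler(this, new PropertyChangedEventArgs("Name"));
            }
        }

        public event PropertyChangedEventHandler PropertyChanged;

        // One of the mappings the answer is talking about.
        public static EmployeeViewModel From(EmployeeDto dto)
        {
            return new EmployeeViewModel { Name = dto.Name };
        }
    }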
Is it recommended (or even smart) to use a WCF service to run the main logic like this (it practically is my application service)?
You need some kind of networking layer. I wouldn't let all client applications touch my database. It would create a maintenance nightmare if you mess with the database schema (if some of the clients still run the old version).
By using a server it's much easier to maintain version differences.
Do note that a WCF service definition should be treated as constant once it is in use. Any changes should be defined in a new interface (for instance MyService2).
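A small sketch of that versioning rule, following the answer's MyService2 example (the operation names are illustrative): the published contract stays frozen and new operations go into a new interface:

    using System.ServiceModel;

    [ServiceContract]
    public interface IMyService
    {
        [OperationContract]
        string GetEmployeeName(int employeeId);
    }

    // Existing clients keep using IMyService unchanged; new clients
    // bind to the extended contract on a new endpoint.
    [ServiceContract]
    public interface IMyService2 : IMyService
    {
        [OperationContract]
        string GetEmployeeEmail(int employeeId);
    }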
Are there alternatives to WCF that avoid having my domain objects in my UI layer?
You could take a look at my framework. Start post: http://blog.gauffin.org/2012/10/writing-decoupled-and-scalable-applications-2/
Is there anything wrong with this implementation?
Not that I can see. Looks like you have a pretty good grasp of the concepts and how they should be used.
Any pitfalls to look out for?
Don't try to be lazy with the queries and commands. Don't make them a bit more generic to fit several use cases. It will come back and bite you when the application grows. Smaller classes are easier to maintain.
Do you have any good examples to recommend that can help me understand how all these concepts are supposed to work together?
My linked blog post and all the other articles in that series.

Convert an ASP.NET application to a Silverlight application

I'm developing a three-layer ASP.NET application with C# and Visual Studio 2008 SP1. I'm using WebForms.
I'm considering converting that application to a Silverlight application. Maybe I can reuse a lot of the code of the ASP.NET layers.
What do you think?
Assuming you have the typical presentation, business logic, and data layers, and also assuming that you have separated your code diligently into these layers, you should be able to replace your Web Forms with a Silverlight interface and leave your BL and DAL intact.
Real projects tend to be somewhat messy, however, making such a transition more difficult. If you're using SqlDataSource you might have problems.
Those are some good points @Andy, and to expand on what he said:
I'm doing that very same thing right now. Because I have a rather comprehensive business layer, I have been able to do a lot of work (a couple of weeks' worth), and in that time I have only had to add one function to that business layer. This is important because it reduces the amount of testing required. It also makes any remaining testing easier, as it is easier to compare the output of the old version of the application with the new version.
One pattern that really helped to achieve this was the facade pattern. I built a WCF layer that sits on top of the business layer, and by using the facade pattern I can return results that are more suitable for the new Silverlight interface without interfering with the business layer.
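As a rough sketch of that facade (all names are hypothetical): the WCF layer combines and reshapes business-layer results for the new UI without touching the business layer itself:

    using System.Runtime.Serialization;
    using System.ServiceModel;

    public class OrderManager                       // the existing business layer
    {
        public decimal GetOrderTotal(int orderId) { return 99.95m; }
        public string GetOrderStatus(int orderId) { return "Shipped"; }
    }

    [DataContract]
    public class OrderSummary                       // shaped for the Silverlight UI
    {
        [DataMember] public decimal Total { get; set; }
        [DataMember] public string Status { get; set; }
    }

    [ServiceContract]
    public interface IOrderFacade
    {
        [OperationContract]
        OrderSummary GetOrderSummary(int orderId);
    }

    public class OrderFacade : IOrderFacade
    {
        private readonly OrderManager _manager = new OrderManager();

        public OrderSummary GetOrderSummary(int orderId)
        {
            // Combine two fine-grained business calls into one coarse result.
            return new OrderSummary
            {
                Total = _manager.GetOrderTotal(orderId),
                Status = _manager.GetOrderStatus(orderId)
            };
        }
    }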
It is most likely, though, that your new UI will have a drastically different architecture than the ASP.NET version. You will be able to achieve a far cleaner separation between UI, code and data. Some of the ASP.NET code that I was quite proud of looks positively mangy next to the equivalent Silverlight code. Be prepared to chop your old code up, and eliminate those business rules from the immediate code-behind :)
If your goal is simply to replicate the UI behaviour as delivered by ASP.NET, then yes, assuming good partitioning, you could re-use quite a bit of code. You'd have to ask why you would want to do that, though.
On the other hand, if the goal is to provide a much richer interactive experience to the user, then it's likely that you'll find even a well-designed business layer just doesn't behave the way such a radically different UI needs it to.

When NOT to use the Entity Framework

I have been playing around with the EF to see what it can handle. Many articles and posts explain the various scenarios in which the EF can be used, but they all seem to miss the "con" side somehow. Now my question is: in what kind of scenarios should I stay away from the Entity Framework?
If you have some experience in this field, tell me what scenarios don't play well with the EF. Tell me about some downsides you experienced where you wished you had chosen a different technology.
The Vote of No Confidence lists several missteps and/or missing bits of functionality in the eyes of those who believe they know what features, and their implementations, are proper for ORM/Datamapper frameworks.
If none of those issues are a big deal to you, then I don't see why you shouldn't use it. I have yet to hear that it is a buggy mess that blows up left and right. All cautions against it are philosophical. I happen to agree with the vote of no confidence, but that doesn't mean you should. If you happen to like the way EF works, then go for it. At the same time I'd advise you to at least read the vote of no confidence and try to get a rudimentary understanding of each of the issues in order to make an informed decision.
Outside of that issue, and to the heart of your question: you need to keep an eye on the SQL that is being generated so you can make tweaks before a performance problem gets into production. Even if you are using procs on the back end, I'd still look for scenarios where you may be hitting the database too many times, and then rework your mappings or fetching scenarios accordingly.
One potentially big issue: Entity Framework 1.0 does not support persistence ignorance. This means that your business layer has a dependency on your data access layer.
If your entire application will be hosted in the same process (like a website on IIS) then this is no problem.
If, however, you have a need to remote your entities (to a Silverlight or Windows Mobile client for example), then your entities will not easily serialize across the wire. You will have to create separate data transfer classes to send your entities across the wire, and additional logic to marshal data between your entity classes and the DTOs.
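For example, a minimal sketch of that workaround, assuming a hypothetical EF-generated Customer entity with Id and Name properties: a hand-written DTO plus mapping code, so the entity itself never crosses the wire:

    using System.Runtime.Serialization;

    [DataContract]
    public class CustomerDto
    {
        [DataMember] public int Id { get; set; }
        [DataMember] public string Name { get; set; }
    }

    // Customer here stands in for the (hypothetical) EF-generated entity class.
    public static class CustomerMapper
    {
        public static CustomerDto ToDto(Customer entity)
        {
            return new CustomerDto { Id = entity.Id, Name = entity.Name };
        }

        public static void ApplyChanges(Customer entity, CustomerDto dto)
        {
            entity.Name = dto.Name;  // marshal edits back onto the tracked entity
        }
    }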
Edit: spelling.
I'm also just at the 'playing around' stage, and although I was worried about the lack of built-in persistence agnosticism, I was sure there would be a "work-around".
In fact, it's not even a work-around in an n-tier architecture:
WCF + EF
If I've read the article correctly, then I don't see any problem serializing entities across the wire (using WCF) and also the persistence ignorance isn't a problem.
This is because I'd use PI mainly for unit-testing.
Unit testing is possible! (I think)
In this system, we could simply use a mock service (by wrapping the call to the service in ANOTHER interface-based class which could be produced from a factory, for example). This would test OUR presenter code (there's no need to unit-test the EF/DAL; that's Microsoft's job!). Of course, integration tests would still be required to achieve full confidence.
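As a sketch of that wrapping idea (all names are illustrative): the presenter depends on an interface, so a test can hand it a hand-rolled mock instead of the real EF/WCF-backed service:

    public interface IEmployeeService
    {
        string GetEmployeeName(int id);
    }

    public class FakeEmployeeService : IEmployeeService   // the "mock"
    {
        public string GetEmployeeName(int id) { return "Test User"; }
    }

    public class EmployeePresenter
    {
        private readonly IEmployeeService _service;

        public EmployeePresenter(IEmployeeService service)
        {
            _service = service;
        }

        public string GetHeaderText(int employeeId)
        {
            // Presenter logic under test; the service behind it is swappable.
            return "Employee: " + _service.GetEmployeeName(employeeId);
        }
    }

    // In a unit test:
    //   var presenter = new EmployeePresenter(new FakeEmployeeService());
    //   Assert.AreEqual("Employee: Test User", presenter.GetHeaderText(1));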
If you wanted to write to a separate database, this would be done in the DAL layer, easily achieved via the config file.
My Tuppence Worth
My opinion - make up your own mind about the EF and don't be put off by all the doom and gloom regarding it that's doing the rounds. I'd guess that it's going to be around for a while and MS will iron out the faults in the next year or so. PI is definitely coming in according to Dan Simmons.
EDIT: I've just realised I jumped the gun and, like a good politician, didn't actually answer the question that was asked. Oops. But I'll leave this in, in case anyone else finds it useful.
Not all data models map nicely to application Entities. If the mapping isn't relatively straightforward, I'd skip the Entity Framework. You'll find yourself doing handstands to make it work without any clear benefit.
Anders Hejlsberg had some interesting comments about object/relational mapping here.
Since EF does not support POCO, it can be difficult to write good unit tests against. That was one of the knocks against it in the Vote Of No Confidence.
If you're wanting to write good tests, EF will raise obstacles. You can work around them, but it is non-trivial.
Though SQL CE 3.5 SP1 and Entity Framework 4.0 Beta 1 both support identity columns, when using the two products together (at least up to the versions listed), identity columns are not supported. You will be required to set primary keys on your own.
Other than that, I've been enjoying EF with SQL CE.

What are the pros and cons of using Castle ActiveRecord vs straight NHibernate?

Assuming that writing NHibernate mapping files is not a big issue... or that polluting your domain objects with attributes is not a big issue either...
What are the pros and cons?
Are there any fundamental technical issues? What tends to influence people's choice?
I'm not quite sure what all the tradeoffs are.
The biggest pro of AR is that it gives you a ready-made repository and takes care of session management for you. Both ActiveRecordBase<T> and ActiveRecordMediator<T> are gifts that you would have ended up assembling yourself under NHibernate. Avoiding the XML mapping is another plus. The AR mapping attributes are simple to use, yet flexible enough to map even fairly 'legacy' databases.
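For instance, a short sketch of the attribute mapping, written from memory of the Castle ActiveRecord API (verify the attribute names against the docs; the table and columns are hypothetical):

    using Castle.ActiveRecord;

    [ActiveRecord("Employees")]
    public class Employee : ActiveRecordBase<Employee>
    {
        [PrimaryKey(PrimaryKeyType.Native)]
        public int Id { get; set; }

        [Property]
        public string Name { get; set; }
    }

    // Usage sketch: the base class supplies the repository-style methods.
    //   Employee e = Employee.Find(42);
    //   e.Name = "Jane";
    //   e.Save();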
The biggest con of AR is that it actively encourages you to think incorrectly about NHibernate. That is, because the default session management is session-per-call, you get used to the idea that persisted objects are disconnected and have to be Save()d when changes happen. This is not how NHibernate is supposed to work - normally you'd have session-per-unit-of-work or request or thread, and objects would remain connected for the lifecycle of the session, so changes get persisted automatically. If you start off using AR and then figure out you need to switch to session-per-request to make lazy loading work - which is not well explained in the docs - you'll get a nasty surprise when an object you weren't expecting to get saved does when the session flushes.
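To make the contrast concrete, here is a sketch of session-per-unit-of-work in plain NHibernate, assuming an already-configured ISessionFactory and a mapped Employee class: the loaded object stays connected, and the change is flushed on commit without any Save() call:

    using NHibernate;

    public class RenameEmployee
    {
        private readonly ISessionFactory _sessionFactory;

        public RenameEmployee(ISessionFactory sessionFactory)
        {
            _sessionFactory = sessionFactory;
        }

        public void Execute(int employeeId, string newName)
        {
            using (ISession session = _sessionFactory.OpenSession())
            using (ITransaction tx = session.BeginTransaction())
            {
                var employee = session.Get<Employee>(employeeId);
                employee.Name = newName;  // no Save() needed; the session tracks it
                tx.Commit();              // the flush persists the change
            }
        }
    }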
Bear in mind that the Castle team wrote AR as a complementary product for Castle MonoRail, which is a Rails-like framework for .NET. It was designed with this sort of use in mind. It doesn't adapt well to a more layered, decoupled design.
Use it for what it is, but don't think about it as a shortcut to NHibernate. If you want to use NH but avoid mapping files, use NHibernate Attributes or better, Fluent NHibernate.
I found ActiveRecord to be a good piece of kit, and very suitable for the small/medium projects I've used it for. Like Rails, it makes many important decisions for you, which has the effect of keeping you focused on the meat of the problem.
In my opinion the pros and cons are:
Pros
Lets you focus on problem in hand, because many decisions are made for you.
Includes mature, very usable infrastructure classes (Repository, Validations etc)
Writing AR attributes is faster than writing XML or NHibernate.Mapping.Attributes, IMHO.
Good documentation and community support
It's fairly easy to use other NHibernate features with it.
A safe start. You have a get-out clause. You can slowly back into a bespoke NHibernate solution if you hit walls with AR.
Great for domain-first development (generating the db).
You might also want to look up the benefits and drawbacks of the ActiveRecord pattern
Cons
You can't pretend NHibernate isn't there - you still need to learn it.
Might not be so productive if you already have a legacy database to work with.
Not transparent persistence.
In-built mappings are comprehensive, but for some projects you might need to revert to NHibernate mappings in places. I haven't had this problem, but just a thought.
In general, I really like ActiveRecord and it's always been a time saver, mainly because I seem to happily accept the decisions and tools baked into the library, and subsequently spend more time focusing on the problem in hand.
I'd give it a try on a few projects and see what you think.
When I started using NHibernate, I didn't learn about Castle ActiveRecord until I had written my Mapping files and made my classes. At that point, I couldn't visibly discern what Castle Activerecord would give me, so I didn't use it.
The second time I used NHibernate, I simply used myGeneration to make the mapping files and the classes just by having it look at my database. That saved a lot of time by itself, and allowed me to (once again) not worry about Castle Active Record.
In reality, most of your time is going to be spent making the custom queries, and Castle Active Record won't necessarily help with that -- if you were to use myGeneration with NHibernate, you'd bypass most of the work you'd need to do anyway.
Edit: I don't want to seem like a cheerleader for either myGeneration or NHibernate. I just use the tool that allows me to get my work done quickly and easily. The less time I have to spend writing data access code, the better. It doesn't mean I can't do it -- but there's little sense in re-inventing the wheel each time you write a new application. Write SQL queries and stored procedures where needed, and nowhere else. If you're doing CRUD operations, an ORM is the way to go.
Edit #2: Castle Active Record may bring more to the table than I realize -- I don't know much other than what's on their website, but if it does bring more to the table, then it would help potential adopters to be able to readily see that on their site.
