At which level should I apply dependency injection? Controller or Domain? - c#

I would like to hear from you what the main advantages and drawbacks are of applying dependency injection at the controller level and/or the domain level.
Let me explain: if my User receives an IUserRepository as a parameter, I may proceed in two ways:
1. I inject IUserRepository directly into my domain object, then I consume User at the controller level without newing objects up; that is, I get them ready-made from the DI container.
2. I inject IUserRepository into my controller (say, Register.aspx.cs), and there I new up all my domain objects, passing in dependencies that came from the DI container.
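Roughly, the two options look like this in code (a minimal sketch; the type names and the Register page stand-in are assumptions for illustration, not taken from any particular framework):

public interface IUserRepository
{
    void Save(User user);
}

// Option 1: the repository is injected straight into the domain object,
// so the controller asks the container for a ready-made User.
public class User
{
    private readonly IUserRepository _repository;

    public User(IUserRepository repository)
    {
        _repository = repository;
    }

    public void Register()
    {
        // domain behaviour that needs the repository
        _repository.Save(this);
    }
}

// Option 2: the controller (code-behind) receives the repository and
// news up the domain objects itself.
public class RegisterPage // stands in for Register.aspx.cs
{
    private readonly IUserRepository _repository;

    public RegisterPage(IUserRepository repository)
    {
        _repository = repository;
    }

    public void OnRegisterClick()
    {
        var user = new User(_repository); // the controller wires the domain object
        user.Register();
    }
}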
Yesterday, when I was talking to a friend, he told me that if you get your domain objects from the container you lose control of their lifecycle, since the container manages it for you; he meant that this could be error prone when dealing with large XML configuration files. I disagree, as you can have a test that loops through every domain object in an assembly and asks the container whether each one is a singleton, request scoped, session scoped or application scoped, and fails if any of them are. That is a way of ensuring this kind of issue won't happen.
I feel more inclined to use the domain approach (1), as I see a large saving in repetitive lines of code at the controller level (of course there will be more lines in the XML file).
Another point my friend raised was this: imagine that for some reason you are forced to switch from DI container A to B, and say that B has no support for constructor injection (which is the case for Seam, a Java container, which manipulates BC or only does its job via setter injection). His point is that if I have all my wiring at the controller level I can refactor my code smoothly, because I get access to tools like automated refactoring and auto-complete, which are unavailable when you're dealing with XML files.
I'm stuck at this point, as I have to make a decision right away.
Which approach should I base my architecture on?
Are there other ways of thinking about this?
Do you really think this is a relevant concern? Should I worry about it?

If you want to avoid an anemic domain model you have to abandon the classic n-tier, n-layer CRUDY application architecture. Greg Young explains why in this paper on DDDD. DI is not going to change that.
CQRS would be a better option, and DI fits very well into the small, autonomous components this type of architecture tends to produce.

I'm not into the Java sphere, but from the details in your question it seems like you use some kind of MVC framework (since you deal with controllers and a domain). But I do have an opinion about how to use DI in a Domain-Driven architecture.
First, there are several ways of doing DDD: some use MVC in the presentation layer and no application service layer between MVC and the domain; others use MVP (or MVVM) and no service layer. BUT I think most people will agree with me that you very rarely inject repositories (or other services) into domain objects. I would recommend injecting repositories into Commands (if you use MVC and no service layer), Presenters (if you use MVP) or application services (if you use a service layer). I mostly use an application layer where each service gets the repositories it needs injected through its constructor.
Second, I wouldn't worry about switching between IoC containers. Most container frameworks today support constructor injection and can auto-resolve parameters. Now, I know that you're a Java developer and I'm an MS developer, but the MS Practices team has a Common Service Locator that can help you produce code that is largely independent of which container framework you use. There is probably something similar in the Java community.
So go for option 2. I hope I've pushed you in the right direction.
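As a rough sketch of what option 2 with an application service might look like (all type names here are invented for the example):

public class User
{
    public string Name { get; set; }
}

public interface IUserRepository
{
    void Save(User user);
}

// The application service gets its repositories through the constructor;
// the domain object itself stays free of infrastructure concerns.
public class UserRegistrationService
{
    private readonly IUserRepository _users;

    public UserRegistrationService(IUserRepository users)
    {
        _users = users;
    }

    public void Register(string name)
    {
        var user = new User { Name = name }; // plain domain object, no container involved
        _users.Save(user);
    }
}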


How to use dependency injection in enterprise projects

Imagine you have an application with several hundred classes implementing dozens of "high level" interfaces (meaning component-level interfaces). Is there a recommended way of doing dependency injection (e.g. with Unity)? Should there be a "general container" that can be used for bootstrapping, accessible as a Singleton? Should a container be passed around, where all instances can RegisterInstance? Should everything be done by RegisterType somewhere in the startup? How can the container be made accessible when needed? Constructor injection seems wrong, despite being the standard way: you have to pass interfaces down from the component level all the way to the bottom where they are used, right at startup, or hold a reference, ending up in a "know where you live" anti-pattern.
And: having a container "available" may give developers the idea of resolving server components in a client context. How can that be avoided?
Any discussion welcome!
Edit for clarification:
I have worked out a somewhat real-world example to give a better picture of the problems I see.
Let's imagine the application is a hi-fi system. The system has a CD player (integrated in a CD rack) and a USB port (integrated in a USB rack) to play music from. Now, the CD player and the USB port shall both be able to play MP3 music, and I have an MP3 decoder somewhere around which is injectable.
Now I start the hi-fi system. There is no CD inserted yet and no USB stick plugged in, so I do not need an MP3 decoder yet. But with constructor injection, I already have to inject the dependency into the CD rack and the USB rack. What if I never insert an MP3 CD or an MP3 USB stick? Shall I hold a reference in the CD rack, so that when an MP3 CD is inserted I have a decoder on hand? (Seems wrong to me.) The decoder is needed in a subsystem of the CD rack, which is only started if an MP3 gets inserted. Now I have no container in the CD rack, so what about constructor injection here?
First of all, Dependency Injection is a design pattern that does not require a container. The DI pattern states:
Dependency injection is a software design pattern that allows a choice of component to be made at run-time rather than compile time
Take Guice, for example (a Java dependency injection framework): Guice is a DI framework, but it is not a container itself.
Most of the DI tools in .NET are actually containers, so you need to populate the container in order to be able to inject the dependencies.
I do not like the idea of having to register every component in a container every time; I simply hate that. There are several tools that help you auto-register components based on conventions. I do not use Unity, but I can point you, for example, to Ninject or Autofac.
I am actually writing a small utility to auto-register components based on conventions using practically any DI tool; it is still in the development phase.
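For instance, a convention-based registration with Autofac could look roughly like this (assuming your components live in the executing assembly; adapt the convention to your own naming rules):

using System.Reflection;
using Autofac;

public static class Bootstrapper
{
    public static IContainer BuildContainer()
    {
        var builder = new ContainerBuilder();

        // Convention: every concrete type in this assembly is registered
        // against all the interfaces it implements.
        builder.RegisterAssemblyTypes(Assembly.GetExecutingAssembly())
               .AsImplementedInterfaces();

        return builder.Build();
    }
}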
About your questions:
Should there be a "general container" that can be used for bootstrapping, accessible as a Singleton?
The answer is yes (there's a tool called ServiceLocator that abstracts away which DI tool is used); that's how DI tools work: there is a static container available to the application. However, it is not recommended to use it inside domain objects to create instances; that's considered an anti-pattern.
BTW I have found this tool really useful for registering components at runtime:
http://bootstrapper.codeplex.com/
Should a container be passed around, where all instances can RegisterInstance?
No, that would violate the Law of Demeter. If you decide to use a container, it is better to register the components when the application starts.
How can the container be made accessible when needed
Well, using the Common Service Locator you could use it anywhere in your application, but, like I said, it is not recommended to use it inside domain objects to create the required instances. Instead, inject the objects through the constructor and let the DI tool automatically inject the correct instance.
Now, this:
Constructor injection seems wrong, despite being the standard way: you have to pass interfaces down from the component level all the way to the bottom where they are used, right at startup, or hold a reference, ending up in a "know where you live" anti-pattern
makes me think that you are not writing unit tests heavily for your application, which is bad. So my suggestion is: before choosing which DI tool you are going to use, and before taking all the answers you get to this question into consideration, refer to the following links, which focus on one thing: writing clean, testable code. This is by far the best source you could have for answering your own question.
Clean Code talks:
http://www.youtube.com/watch?v=wEhu57pih5w&feature=player_embedded
http://www.youtube.com/watch?v=RlfLCWKxHJ0&feature=player_embedded
Articles
http://misko.hevery.com/2010/05/26/do-it-yourself-dependency-injection/
http://misko.hevery.com/code-reviewers-guide/
Previous link in PDF http://misko.hevery.com/attachments/Guide-Writing%20Testable%20Code.pdf
The following links are super highly recommended
http://misko.hevery.com/2008/09/30/to-new-or-not-to-new/
http://www.loosecouplings.com/2011/01/how-to-write-testable-code-overview.html
First, there is the Composition Root pattern: you set up your dependencies as early as possible, in Main() for a desktop app or in global.asax / Application_Start for a web app. Then rely on constructor injection, as this makes your dependencies clear.
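A bare-bones composition root could look like this (shown with Autofac only because it is handy; IOrderService/OrderService are made-up example types):

using Autofac;

public interface IOrderService { void PlaceOrder(string sku); }

public class OrderService : IOrderService
{
    public void PlaceOrder(string sku) { /* ... */ }
}

public static class Program
{
    public static void Main()
    {
        // Composition root: the only place that knows about the container.
        var builder = new ContainerBuilder();
        builder.RegisterType<OrderService>().As<IOrderService>();
        var container = builder.Build();

        // Resolve the root object; everything below it is wired via constructor injection.
        var orders = container.Resolve<IOrderService>();
        orders.PlaceOrder("ABC-123");
    }
}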
However, you still need something to actually RESOLVE dependencies. There are at least two approaches known to me.
The first one consists in using the Service Locator (which is almost the same as making the container a singleton). This is considered an anti-pattern by a lot of people, but it just works great. Whenever you need a service, you ask your container for it:
// ... business code ...
var service = ServiceLocator.Current.GetInstance<IMyService>();
service.Foo();
The Service Locator just uses the container you set up in the Composition Root.
Another approach consists in relying on object factories, available as singletons, with the container injected into them:
var service = MyServiceFactory.Instance.CreateService();
The trick is that the implementation of the factory uses the container internally to resolve the service. However, this approach, with the additional factory layer, makes your business code independent of and unaware of the IoC container! You are free to completely redesign the factories not to use an IoC container internally and still keep every single line of business code intact.
In other words, it's just a trick to hide the container/locator behind a factory layer.
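A sketch of what such a factory singleton might look like, with the Common Service Locator hidden inside it (MyServiceFactory and IMyService are illustrative names):

using Microsoft.Practices.ServiceLocation;

public interface IMyService { void Foo(); }

// Business code talks to this factory; only the factory knows a container exists.
public sealed class MyServiceFactory
{
    public static readonly MyServiceFactory Instance = new MyServiceFactory();

    private MyServiceFactory() { }

    public IMyService CreateService()
    {
        // Could just as well 'new' a concrete type here; callers would not notice.
        return ServiceLocator.Current.GetInstance<IMyService>();
    }
}

// Usage in business code:
// var service = MyServiceFactory.Instance.CreateService();
// service.Foo();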
While I am tempted to use the former approach (relying directly on the locator), I prefer the latter. It just feels cleaner.
I wonder if there are other viable alternatives here.
Should a container be passed around
No, since this leads to the Service Locator anti-pattern.
where all instances can RegisterInstance
Services should be registered in the start-up path of the application, not by the types themselves. When you have multiple applications (such as a web app, a web service and a WPF client), there will often be a common bootstrapper project that wires up the services for the shared layers (but each application will still have its own unique wiring, since no two applications behave the same).
Should everything be done by RegisterType somewhere in the startup
Yes, you should wire everything up at start-up.
How can the container be made accessible when needed.
You shouldn't. The application should be oblivious to the use of a container (if any container is used at all, since it is optional). If you don't do this, you will make a lot of things much harder, such as testing. You can, however, inject the container into types that are defined in the startup path of the application (a.k.a. the Composition Root). This way the application stays free from any knowledge of the container.
Constructor injection seems wrong, despite being the standard way
Constructor injection is the preferred way of injecting dependencies. However, it can be challenging to refactor an existing application towards constructor injection. In rare circumstances where constructor injection doesn't work, you can fall back to property injection, or, when it is impossible to build up the complete object graph, you can inject a factory. When the factory implementation is part of the Composition Root, you can let it depend on the container.
A pattern I have found very useful, which builds on top of the Dependency Injection pattern and the SOLID design principles, is the command/handler pattern. I have found it a useful pattern in smaller apps, but it really shines when applications get big, such as enterprise applications.
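A minimal sketch of that command/handler shape (the types are illustrative, not taken from a particular library):

public class User { public string Name { get; set; } }

public interface IUserRepository { void Save(User user); }

// The command: plain data describing what should happen.
public class RegisterUserCommand
{
    public string Name { get; set; }
    public string Email { get; set; }
}

// One generic abstraction for all handlers.
public interface ICommandHandler<TCommand>
{
    void Handle(TCommand command);
}

// A concrete handler; its dependencies arrive through constructor injection.
public class RegisterUserCommandHandler : ICommandHandler<RegisterUserCommand>
{
    private readonly IUserRepository _users;

    public RegisterUserCommandHandler(IUserRepository users)
    {
        _users = users;
    }

    public void Handle(RegisterUserCommand command)
    {
        _users.Save(new User { Name = command.Name });
    }
}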
Now I start the hi-fi system. There is no CD inserted yet and no USB stick plugged in. I do not need an MP3 decoder now.
This seems like a perfect fit for setter injection (property injection in C#). Use a NeutralDecoder or NullDecoder as a default and inject an Mp3Decoder when you need it. You can do it by hand or with a DI container and conditional/late binding.
http://blog.springsource.com/2007/07/11/setter-injection-versus-constructor-injection-and-the-use-of-required/
We usually advise people to use constructor injection for all
mandatory collaborators and setter injection for all other properties.
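A rough C# sketch of that setter/property-injection idea, using a null-object default (IMp3Decoder, NullDecoder and CdRack are invented names based on the hi-fi example):

public interface IMp3Decoder
{
    byte[] Decode(byte[] mp3Data);
}

// Null-object default: the rack works even if no real decoder is ever needed.
public class NullDecoder : IMp3Decoder
{
    public byte[] Decode(byte[] mp3Data) { return new byte[0]; }
}

public class CdRack
{
    private IMp3Decoder _decoder = new NullDecoder();

    // Property (setter) injection with a safe default.
    public IMp3Decoder Decoder
    {
        get { return _decoder; }
        set { _decoder = value; }
    }

    public void Play(byte[] trackData, bool isMp3)
    {
        var pcm = isMp3 ? _decoder.Decode(trackData) : trackData;
        // ...send pcm to the amplifier...
    }
}

// When an MP3 disc shows up, the container (or your own code) can do:
// rack.Decoder = new Mp3Decoder();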

Instantiate service layer and repositories once during life of web application

During the life of my web application I know all of my services and repositories will be called. I would like to instantiate them once during the startup of the web application and refer to the instantiated references in my code.
Is there a common pattern for instantiating your services/repositories only once during the lifetime of a web application, without making them static or singletons?
I would like to avoid making my services/repositories static classes or singletons for testability; however, instantiating them on every web request doesn't seem right when they are designed to be stateless and I know they will all be needed during the lifetime of the application.
I am using C#/ASP.NET.
The concept you need is called IoC/DI, and there are many frameworks for it. When you have a class like CustomerService and you need a CustomerRepository in it, that is by definition a dependency, and you should pass it through the constructor of CustomerService. But then there is the question of where you instantiate CustomerService. Well, whoever uses that service should get it through its constructor too; maybe that is a CustomerPresenter, or some other class, it's irrelevant. My point is that by doing dependency injection you structure your code so that there is a single point where an IoC/DI framework resolves those dependencies according to your rules.
At the very top of the program, you would have something like:
ICustomerPresenter presenter = IoC.Resolve<ICustomerPresenter>();
and everything will automatically come together behind the scenes.
To achieve this, here is an example with StructureMap:
For<ICustomerPresenter>().Use<CustomerPresenter>();
For<ICustomerService>().Singleton().Use<CustomerService>();
For<ICustomerRepository>().Singleton().Use<CustomerRepository>();
With this, you keep the testability. I've simplified things a lot here, so this isn't very usable as-is, but there are plenty of IoC/DI resources online, so check them out.
Note: for web applications you will want to look into handling lifecycles per request; you will rarely have singletons that live for the entire web application.
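For example, with Autofac the difference between an application-wide instance and a per-request/per-scope instance is a one-line difference in the registration (a sketch, reusing the type names from the StructureMap example above):

using Autofac;

public interface ICustomerService { }
public interface ICustomerRepository { }
public class CustomerService : ICustomerService { }
public class CustomerRepository : ICustomerRepository { }

public static class ContainerConfig
{
    public static IContainer Build()
    {
        var builder = new ContainerBuilder();

        // One instance for the whole application (fine for stateless services).
        builder.RegisterType<CustomerService>().As<ICustomerService>().SingleInstance();

        // One instance per lifetime scope; the web integrations open a scope
        // per HTTP request, which gives "per request" behaviour.
        builder.RegisterType<CustomerRepository>().As<ICustomerRepository>().InstancePerLifetimeScope();

        return builder.Build();
    }
}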
Dependency Injection frameworks will take care of the lifetime of your objects.
E.g.:
container.RegisterType<MyService>().Singleton();
There are many DI frameworks; choose the one that suits you best.

Question on DI and how to solve some problems

I'm a newbie to dependency injection. I have never used it and never even understood what exactly it is all about, but after my latest attack on the topic I found out that it is a way of decoupling an object from its dependencies: objects are no longer responsible for instantiating concrete versions of their dependencies, as the container will do it for us and deliver the ready object into our hands.
Now the point is: when should I use it? Always? Actually, as I'm a newbie and have never even seen a project that uses this pattern, I can't understand how I should apply it to my domain objects! It seems to me that I will never again instantiate my objects and the container will always do it for me, but then some doubts come up...
1) What about objects where part of their dependencies comes from the UI? For example:
public User(string name, IValidator validator)
Say that I get the user name from the UI; how will the container know it and still deliver this object to me?
2) There's another situation I'm facing: what if a dependency is an object that is already instantiated, say... a SINGLETON object, for example. I saw there are settings regarding the lifetime scope of the dependency being injected (I'm talking about Spring.NET, e.g. HTTP request scope)... BUT requests and other web-related things live in my presentation layer, so how could I link my presentation layer and my domain layer without breaking any design rules (as my domain should be totally unaware of where it is being consumed, to avoid layer dependencies, etc.)?
I'm eager to hear from you all. Thanks very much.
In general, once you go IoC, you tend to want to register EVERYTHING with IoC and have the container spit out fully-hydrated objects. However, you bring up some valid points.
Perhaps a definition of "dependency" is in order; at its broadest, a dependency is simply a set of functionality (interface) that a given class requires a concrete implementation of in order for the class to work correctly. Thus, most non-trivial programs are full of dependencies. To promote ease of maintenance, loose coupling of all dependencies is generally preferred. However, even when loosely coupled, you don't need to automate instantiation of dependencies if those objects require specialized information that you don't want to pollute your IoC registry with. The goal is to loosely couple usage, not necessarily creation.
Concerning point 1, some IoC frameworks don't deal well with being given external parameters. However, you can usually register a delegate as a factory method. That delegate may belong to an object, like a controller, that is given external information by the UI. Logins are a perfect example: create an object, say a LoginController, and register it with the container as your ILoginController. You reference that controller from your Login page, it gets injected when the Login page is instantiated, and the Login page passes it the credentials that were entered. The controller then performs authentication and has a method GetAuthenticatedUser() that produces a User object. You can register this method with the container as a factory for Users, and whenever a User is needed, the factory delegate will either be evaluated or passed wholesale to the dependent method, which calls it when it really needs the User.
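A hedged sketch of that factory-delegate idea, shown here with Autofac's lambda registrations (LoginController and User are invented types following the description above):

using Autofac;

public class User { public string Name { get; set; } }

public interface ILoginController
{
    void SetCredentials(string userName, string password); // called by the Login page
    User GetAuthenticatedUser();
}

public class LoginController : ILoginController
{
    private string _userName;

    public void SetCredentials(string userName, string password) { _userName = userName; }

    public User GetAuthenticatedUser() { return new User { Name = _userName }; }
}

public static class LoginWiring
{
    public static IContainer Build()
    {
        var builder = new ContainerBuilder();
        builder.RegisterType<LoginController>().As<ILoginController>().SingleInstance();

        // The delegate acts as the factory for User: whenever a User is needed,
        // the container calls back into the controller that holds the
        // externally supplied credentials.
        builder.Register(c => c.Resolve<ILoginController>().GetAuthenticatedUser());

        return builder.Build();
    }
}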
On point 2, setting up a single instance of an object is a strength of the IoC pattern. Instead of creating a true singleton, with a private instance constructor, a static instance and a static constructor to produce the instance, you simply register the class with the container and tell it to instantiate it only once and use that one instance for all requests. The strength is the flexibility: if you later want there to be more than one instance, you just change the registration. You won't break any design-pattern rules either way; the view will always have a controller injected, whether that controller is the same for all pages or a new instance per request.
1) That constructor is probably not the right one to use; maybe you are injecting the validator in the wrong place or in the wrong way.
2) Neither View nor Model nor Controller should be aware that there is an IoC container; it should live in the background architecture (where the MVC components are actually instantiated).
You should use IoC when you feel the architecture can become complex and has to be maintained by many people. If you are writing an enterprise application, or a UI you plan to extend with plugins, you probably need it; if you are writing a command-line utility, probably not.
You should use dependency injection whenever you want any of the following benefits:
The ability to replace modules easily
The ability to reuse modules between parts of the application, or different applications
When you want to do parallel development, so that components of a system can be developed in isolation and in parallel because they depend on abstractions
When you want easier maintenance of a system because of loose coupling
When you want testability (a specialisation of replacing modules). This is one of the biggest reasons for using DI
To answer your other questions:
1) You can configure many IoC containers so that certain constructor parameters are specified explicitly, whilst others are resolved by the container. However, you may want to think about refactoring that piece of code: a UserFactory may be more appropriate, which takes the validator dependency and has a NewUser method that takes a user name and returns a new user (either instantiating it directly or resolving it from the container).
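A sketch of that UserFactory idea (IValidator comes from the question; the rest is invented for illustration):

public interface IValidator
{
    bool IsValid(string name);
}

public class User
{
    public User(string name, IValidator validator)
    {
        if (!validator.IsValid(name))
            throw new System.ArgumentException("Invalid user name", "name");
        Name = name;
    }

    public string Name { get; private set; }
}

// The factory owns the injected dependency; the UI only supplies the name.
public class UserFactory
{
    private readonly IValidator _validator;

    public UserFactory(IValidator validator)
    {
        _validator = validator;
    }

    public User NewUser(string name)
    {
        return new User(name, _validator);
    }
}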
2) Each application you build will have a composition root, where your container is configured, and the root object is resolved. Each app will therefore have its own IoC configuration, so there is an expected link between the application type and the configuration settings. Any common abstraction registrations can be placed in configuration code which can be shared amongst all applications.

Can anyone explain to me, at length, how to use IOC containers?

I use dependency injection through parameters and constructors extensively. I understand the principle to this degree and am happy with it. On my large projects, I end up with too many dependencies being injected (anything hitting double figures feels too big; I like the term 'macaroni code').
As such, I have been considering IOC containers. I have read a few articles on them and so far I have failed to see the benefit. I can see how it assists in sending groups of related objects or in getting the same type over and over again. I'm not sure how they would help me in my projects where I may have over a hundred classes implementing the same interface, and where I use all of them in varying orders.
So, can anybody point me at some good articles that not only describe the concepts of IOC containers (preferably without hyping one in particular), but also show in detail how they benefit me in this type of project and how they fit into the scope of a large architecture?
I would hope to see some non-language specific stuff but my preferred language if necessary is C#.
Inversion of Control is primarily about dependency management and providing testable code. From a classic approach, if a class has a dependency, the natural tendency is to give the class that has the dependency direct control over managing its dependencies. This usually means the class that has the dependency will 'new' up its dependencies within a constructor or on demand in its methods.
Inversion of Control is just that: it inverts what creates dependencies, externalizing that process and injecting them into the class that has the dependency. Usually, the entity that creates the dependencies is what we call an IoC container, which is responsible not only for creating and injecting dependencies, but also for managing their lifetimes, determining their lifestyle (more on this in a sec), and offering a variety of other capabilities. (This is based on Castle MicroKernel/Windsor, which is my IoC container of choice; it's solidly written, very functional, and extensible. Other IoC containers exist that are simpler if you have simpler needs, like Ninject, Microsoft Unity, and Spring.NET.)
Consider that you have an internal application that can be used either in a local context or a remote context. Depending on some detectable factors, your application may need to load up "local" implementations of your services, and in other cases it may need to load up "remote" implementations of your services. If you follow the classic approach, and create your dependencies directly within the class that has those dependencies, then that class will be forced to break two very important rules about software development: Separation of Concerns and Single Responsibility. You cross boundaries of concern because your class is now concerned about both its intrinsic purpose, as well as the concern of determining which dependencies it should create and how. The class is also now responsible for many things, rather than a single thing, and has many reasons to change: its intrinsic purpose changes, the creation process for its dependencies changes, the way it finds remote dependencies changes, what dependencies its dependencies may need, etc.
By inverting your dependency management, you can improve your system architecture and maintain SoC and SR (or, possibly, achieve them when you were previously unable to due to dependencies). Since an external entity, the IoC container, now controls how your dependencies are created and injected, you can also gain additional capabilities. The container can manage the life cycles of your dependencies, creating and destroying them in more flexible ways that can improve efficiency. You also gain the ability to manage the lifestyles of your objects. If you have a type of dependency that is created, used, and returned on a very frequent basis, but which has little or no state (say, factories), you can give it a pooled lifestyle, which will tell the container to automatically create an object pool for that particular dependency type. Many lifestyles exist, and a container like Castle Windsor will usually give you the ability to create your own.
The better IoC containers, like Castle Windsor, also provide a lot of extensibility. By default, Windsor allows you to create instances of local types. It's possible to create Facilities that extend Windsor's type-creation capabilities to dynamically create web service proxies and WCF service hosts on the fly, at runtime, eliminating the need to create them manually or statically with tools like svcutil (this is something I did myself just recently). Many facilities exist to bring IoC support to existing frameworks, like NHibernate, ActiveRecord, etc.
Finally, IoC enforces a style of coding that ensures unit testable code. One of the key factors in making code unit testable is externalizing dependency management. Without the ability to provide alternative (mocked, stubbed, etc.) dependencies, testing a single "unit" of code in isolation is a very difficult task, leaving integration testing the only alternative style of automated testing. Since IoC requires that your classes accept dependencies via injection (by constructor, property, or method), each class is usually, if not always, reduced to a single responsibility of properly separated concern, and fully mockable dependencies.
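As a tiny illustration of that testability point, constructor injection lets a hand-rolled fake stand in for the real dependency, with no container involved at all (every name below is invented for the example):

public interface IPriceFeed
{
    decimal GetPrice(string sku);
}

public class InvoiceCalculator
{
    private readonly IPriceFeed _prices;

    public InvoiceCalculator(IPriceFeed prices)
    {
        _prices = prices;
    }

    public decimal Total(string sku, int quantity)
    {
        return _prices.GetPrice(sku) * quantity;
    }
}

// A fake used only by the test: no network, no database, no container.
public class FixedPriceFeed : IPriceFeed
{
    public decimal GetPrice(string sku) { return 10m; }
}

// In a unit test:
// var calc = new InvoiceCalculator(new FixedPriceFeed());
// Assert.AreEqual(30m, calc.Total("ABC", 3));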
IoC = better architecture, greater cohesion, improved separation of concerns, classes that are easier to reduce to a single responsibility, easily configurable and interchangeable dependencies (often without requiring a recompilation of your code), flexible dependency life styles and life time management, and unit testable code. IoC is kind of a lifestyle...a philosophy, an approach to solving common problems and meeting critical best practices like SoC and SR.
Even (or rather, particularly) with hundreds of different implementations of a single interface, IoC has a lot to offer. It might take a while to get your head fully wrapped around it, but once you fully understand what IoC is and what it can do for you, you'll never want to do things any other way (except perhaps embedded systems development...)
If you have over a hundred classes implementing a common interface, an IoC container won't help very much; you need a factory.
That way, you can do the following:
public interface IMyInterface
{
    // ...
}

// Hypothetical implementations, added only so the example compiles.
public class ImplementationA : IMyInterface { }
public class ImplementationB : IMyInterface { }

public class Factory
{
    public static IMyInterface GetObject(string param)
    {
        // param helps the Factory decide which object to return
        // (only an example; there may not be any parameter at all)
        return param == "A" ? (IMyInterface)new ImplementationA() : new ImplementationB();
    }
}
//...
// You do not depend on a particular implementation here
IMyInterface obj = Factory.GetObject("some param");
Inside the factory, you may use an IoC container to retrieve the objects if you like, but you'll have to register each one of the classes that implement the given interface and associate them with some keys (and use those keys as parameters in the GetObject() method).
An IoC is particularly useful when you have to retrieve objects that implement different interfaces:
IMyInterface myObject = Container.GetObject<IMyInterface>();
IMyOtherInterface myOtherObject = Container.GetObject<IMyOtherInterface>();
ISomeOtherInterface someOtherObject = Container.GetObject<ISomeOtherInterface>();
See? Only one object is needed to get several objects of different types, and no keys (the interfaces themselves are the keys). If you need to get several different objects that all implement the same interface, an IoC container won't help you very much.
In the past few weeks, I've taken the plunge from dependency-injection only to full-on inversion of control with Castle, so I understand where your question is coming from.
Some reasons why I wouldn't want to use an IOC container:
It's a small project that isn't going to grow that much. If there's a 1:1 relationship between constructors and calls to those constructors, using an IOC container isn't going to reduce the amount of code I have to write. You're not violating "don't repeat yourself" until you're finding yourself copying and pasting the exact same "var myObject = new MyClass(someInjectedDependency)" for a second time.
I may have to adapt existing code to facilitate being loaded into IOC containers. This probably isn't necessary until you get into some of the cooler Aspect-oriented programming features, but if you've forgotten to make a method virtual, sealed off that method's class, and it doesn't implement an interface, and you're uncomfortable making those changes because of existing dependencies, then making the switch isn't quite as appealing.
It adds an additional external dependency to my project -- and to my team. I can convince the rest of my team that structuring their code to allow DI is swell, but I'm currently the only one that knows how to work with Castle. On smaller, less complicated projects, this isn't going to be an issue. For the larger projects (that, ironically, would reap the most benefit from IOC containers), if I can't evangelize using an IOC container well enough, going maverick on my team isn't going to help anybody.
Some of the reasons why I wouldn't want to go back to plain DI:
I can add or remove logging for any number of my classes without adding any sort of trace or logging statement myself. Having the ability for my classes to become interwoven with additional functionality, without changing those classes, is extremely powerful. For example:
Logging: http://ayende.com/Blog/archive/2008/07/31/Logging--the-AOP-way.aspx
Transactions: http://www.codeproject.com/KB/architecture/introducingcastle.aspx (skip down to the Transaction section)
Castle, at least, is so helpful when wiring up classes to their dependencies that it would be painful to go back.
For example, missing a dependency with Castle:
"Can't create component 'MyClass' as
it has dependencies to be satisfied.
Service is waiting for the following
dependencies:
Services:
- IMyService which was not registered."
Missing a dependency without Castle:
Object reference not set to an instance of an object.
Dead last: the ability to swap injected services at runtime by editing an XML file. My perception is that this is the most touted feature, but I see it as merely icing on the cake. I'd rather wire up all my services in code, but I'm sure I'll run into a headache in the future that changes my mind on this.
I will admit that -- being a newbie to IOC and Castle -- I'm probably only scratching the surface, but so far, I genuinely like what I see. I feel like the last few projects I've built with it are genuinely capable of reacting to the unpredictable changes that arise from day to day at my company, a feeling I've never quite had before.
Try these:
http://www.martinfowler.com/articles/injection.html
http://msdn.microsoft.com/en-us/library/aa973811.aspx
I have no links but can provide you with an example:
You have a web controller that needs to call a service, which in turn has a data access layer.
Now, I take it that in your code you are constructing these objects yourself at compile time. You are using a decent design pattern, but if you ever need to change the implementation of, say, the DAO, you have to go into your code, remove the code that sets up this dependency, and recompile/test/deploy. But if you were using an IoC container, you would just change the class in the configuration and restart the application.
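Roughly, the chain described above looks like this once construction is pushed out to the container (the type names are illustrative):

public interface ICustomerDao
{
    string GetCustomerName(int id);
}

// Swapping this for, say, a CachingCustomerDao becomes a configuration change;
// neither the service nor the controller needs to be touched or recompiled.
public class SqlCustomerDao : ICustomerDao
{
    public string GetCustomerName(int id) { return "customer " + id; } // stand-in for real data access
}

public class CustomerService
{
    private readonly ICustomerDao _dao;

    public CustomerService(ICustomerDao dao) { _dao = dao; }

    public string Describe(int id) { return _dao.GetCustomerName(id); }
}

public class CustomerController
{
    private readonly CustomerService _service;

    public CustomerController(CustomerService service) { _service = service; }

    public string Show(int id) { return _service.Describe(id); }
}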
Jeremy Frey misses one of the biggest reasons for using an IOC container: it makes your code easier to mock and test.
Encouraging the use of interfaces has lots of other nice benefits: better layering, easier to dynamically generate proxies for things like declarative transactions, aspect-oriented programming and remoting.
If you think IOC is only good for replacing calls to "new", you don't get it.
IoC containers usually just do the dependency injection, which in some projects is not a big deal, but some of the frameworks that provide IoC containers offer other services that make them worth using.
Castle, for example, has a whole list of services besides an IoC container; dynamic proxies, transaction management and NHibernate facilities are some of them.
So I think you should consider IoC containers as part of an application framework.
Here's why I use an IoC container:
1. Writing unit tests becomes easier; you simply write different configurations to do different things.
2. Adding different plugins for different scenarios (for different customers, for example).
3. Intercepting classes to add different aspects to our code.
4. Since we are using NHibernate, the transaction management and NHibernate facilities of Castle are very helpful in developing and maintaining our code.
It's as if every technical aspect of our application is handled by an application framework, and we have time to think about what customers really want.

MVC: lots of services make the controller constructor very large

I wind up having about 20 different parameters in the constructor of the model class, one for each service. Is this normal, or a sign that something is off?
I think, categorically, that your controller is interacting with too many services. I've not seen your code, so I'm going off assumptions, but it seems to me that your controller is composing business logic by calling numerous "small" services, rather than drawing on fewer, "larger" services that compose business logic from the smaller ones.
Have a look around for information about "orchestration services" vs "entity" or "capability" services and you'll see what I mean. If you create orchestration services that provide your controllers with the logic they require, your architecture is improved because your controllers really should not contain any business logic at all.
I really think that the number of services you consume is the issue here. IoC containers may go some way to resolve how you bind types to your injection parameters etc., but I think the problem is your architecture at this point.
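To make the idea concrete: instead of a controller taking twenty small services, it takes one orchestration service that composes them (all names below are made up for the sketch):

public interface IInventoryService { bool Reserve(string sku, int qty); }
public interface IPaymentService { bool Charge(string customerId, decimal amount); }
public interface IShippingService { void Schedule(string customerId, string sku); }

// Orchestration service: composes the smaller "capability" services
// so the controller only needs a single dependency.
public class OrderOrchestrationService
{
    private readonly IInventoryService _inventory;
    private readonly IPaymentService _payments;
    private readonly IShippingService _shipping;

    public OrderOrchestrationService(
        IInventoryService inventory, IPaymentService payments, IShippingService shipping)
    {
        _inventory = inventory;
        _payments = payments;
        _shipping = shipping;
    }

    public bool PlaceOrder(string customerId, string sku, int qty, decimal amount)
    {
        if (!_inventory.Reserve(sku, qty)) return false;
        if (!_payments.Charge(customerId, amount)) return false;
        _shipping.Schedule(customerId, sku);
        return true;
    }
}

public class OrdersController
{
    private readonly OrderOrchestrationService _orders;

    public OrdersController(OrderOrchestrationService orders) { _orders = orders; }

    // Actions call _orders.PlaceOrder(...) instead of juggling many services.
}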
You might try consolidating some services, or think about refactoring the controller-view parts into smaller-scoped components. Also, a dependency injection framework like Spring can help with things like this.
Although I don't know your setup, 20 seems a bit much; I think you are going against the SRP (Single Responsibility Principle). But since I can't see your code, it is impossible to tell. If you really need all these services in that one model class, then perhaps you need to put them in a factory class and use that as a parameter.
It is hard to give a good answer on this, since we don't know your domain.
As @Matt said, dependency injection could help you here; Spring.NET is a good one and there are several others.
Seeing as you mention MVP in particular, you should at least look at Ent Lib 4.1, which now has Unity, Microsoft's take on DI. Their CodePlex site is probably a good place to start if this is new to you.
There are also software factories that integrate with Visual Studio and give you tools for creating MVP for web sites or web services. These come from patterns & practices too.
