Closed 6 years ago as opinion-based; not accepting answers.
I use the following pattern/style a lot in my applications and want to know whether it is a common pattern that I just don't know the name of.
It applies when I have to write an app that is essentially one big function: it gets input data from different sources, does the processing, and creates the output, like the IPO model (input-process-output).
I have one class/type that represents only my state/data and has no logic. Most of the time I name it Context, ExecutionContext, or RuntimeContext. I also have multiple classes/types that contain only logic, as stateless functions (in C#, static methods in static classes). At the entry point of the app, I first create the context and then pass it as an argument to my functions. The context holds the complete state/data of the app, and all my static functions/methods manipulate the context. At the end of the function chain the execution is done, and the context holds the final state, including any output data.
I tried to create a picture that visualizes this approach.
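In code form, the approach looks roughly like this (a minimal sketch; all the names are just illustrative):

using System;
using System.Collections.Generic;
using System.Linq;

// State/data only, no logic.
public class Context
{
    public string[] InputLines { get; set; }
    public List<int> Numbers { get; } = new List<int>();
    public int Sum { get; set; }
}

// Logic only: stateless static functions that manipulate the context.
public static class Steps
{
    public static void Parse(Context ctx) =>
        ctx.Numbers.AddRange(ctx.InputLines.Select(int.Parse));

    public static void Sum(Context ctx) =>
        ctx.Sum = ctx.Numbers.Sum();
}

public static class Program
{
    public static void Main()
    {
        // Create the context first, then run the chain of functions over it.
        var ctx = new Context { InputLines = new[] { "1", "2", "3" } };
        Steps.Parse(ctx);
        Steps.Sum(ctx);
        Console.WriteLine(ctx.Sum); // the context holds the final state: 6
    }
}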
The advantages of this pattern are:
- I can easily test my logic (small pieces of static functions) with unit tests.
- Writing concurrent code is not that hard (only the context needs thread-safe code).
- Dependencies on other systems are mostly decoupled as abstractions (interfaces) in the context (for example an IDbContext), which makes testing at a larger scope simple.
And now here is my question: is this a common pattern? If so, what is it called?
Thanks for every hint! :)
regards
This looks like a Dataflow.
The functions are black boxes which act on the data provided to them. Dataflows are Turing complete, and can even model traditional imperative flow-control structures.
When you issue a request to ASP.NET MVC, the request enters a pipeline and at the end an output is returned. ASP.NET MVC is open source, and there are tons of diagrams which explain the whole pipeline and how it works. It is also very customizable, so developers can plug their own classes in, intercept certain events, and hook into certain parts (filters, authentication, authorization, etc.).
If I were you I would start looking into that and borrow some ideas from it. You don't even have to look at the source code; you can start by looking at the diagrams for the pipeline and see what it is doing and how it is doing it.
Right now your code is just functions executed in a serial manner. If you later want to use object orientation, take advantage of interfaces, and allow customization, event interception, hooking, etc., that will be difficult.
Here is the diagram in case you are interested.
Well, it's common in IoC-style apps, where services/repositories are singletons and, as such, stateless.
The advantage of this approach is that it saves a lot of memory and some time (no need for new instances of components to be spawned). The disadvantage is that you somewhat lose the OOP approach, and it is also hard to maintain in the bigger picture without strong interface support and an IoC/Dependency Injection container.
Also look at the ThreadLocal<> mechanism built into .NET - that way you don't need to pass your context around explicitly, but instead access a thread-scoped global variable that contains it (but then you need to watch out when branching threads, another topic that IoC/DI handles).
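A minimal sketch of that mechanism, assuming the Context type from the question (the holder class name is made up):

using System.Threading;

public static class ContextHolder
{
    // Each thread that reads Current lazily gets its own Context instance.
    private static readonly ThreadLocal<Context> current =
        new ThreadLocal<Context>(() => new Context());

    public static Context Current
    {
        get { return current.Value; }
    }
}

// Anywhere on the same thread, instead of passing the context explicitly:
// var ctx = ContextHolder.Current;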
Closed 5 years ago as needing more focus; not accepting answers.
I am having this question while using Unity with C#, but I think it may arise in any OOP language for people who are starting to learn how to design OOP structures and when to use the observer pattern (or the publisher-subscriber pattern). The events I am specifically talking about are events and delegates in C#.
I am looking for advice about using events. I feel events, and the Observer Pattern, are so useful that I am getting afraid of overusing them. I searched this question and found the community suggests the observer pattern is good for things like receiving Twitter feeds, and that I should use it if and only if it reduces coupling. There was a discussion on when design patterns should be avoided, which suggested prioritising the SOLID principles. Some people provided lists of things to be careful about when using the observer pattern, among which the most commonly noted were chains of observers and memory leaks.
The discussions seem to suggest that it is generally good to use the observer pattern when the list of subscribers for an event is expected to change at run time. For instance, RSS feeds will have different subscribers at any given time, and there is no way for the programmer to know who is receiving them. So yes, that seems a sweet spot for the Observer Pattern.
However, I am still not convinced whether I should favour this pattern when the subscriber list is known to the developer at compile time and I want to perform unit testing.
Let's say in my game my character is entering a new area. Upon entering a new area, I want the game to:
Effect List:
Show a GUI saying The Swamp of the Code Smell in the middle of the screen
Update the Quest Board to show quests specific to this area
Start to decrease my character's HP by 5 per second because the area has a bad smell, and increase MP by 10 because it feels good to be there
To achieve this, I am not sure which of the two following approaches is the better design:
Approach 1: Feed all info into Constructor for Unit Testing
The MapEnter class takes UI, GlobalDamage, Character, ... and all other required classes in its constructor. It can then just call GlobalDamage.ApplyDamagePerSecond(myCharacter), UI.ShowText(), ...
I thought of this approach because a talk about unit testing suggested that classes must be isolated, that isolation means classes must not new any other objects, and that this can only be achieved by handing a class its collaborators through its constructor.
However, posts on how to unit test observer patterns suggest I could test to ensure that (1) events are properly subscribed and unsubscribed, and (2) each method to be subscribed works correctly on its own. So I'm not so sure of this point.
On the other hand, I also tend to believe that when a class receives all of its references through its constructor and stores them as class variables, it's easier to understand what the class is responsible for just by looking at those class variables.
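A sketch of Approach 1 (type names follow the question; the interfaces and method shapes are my assumptions):

public interface IUserInterface { void ShowText(string text); }
public interface IQuestBoard { void ShowQuestsFor(string areaName); }
public interface IGlobalDamage { void ApplyDamagePerSecond(Character target, int hpPerSecond); }
public class Character { public int Hp; public int Mp; }

public class MapEnter
{
    private readonly IUserInterface ui;
    private readonly IQuestBoard questBoard;
    private readonly IGlobalDamage globalDamage;

    // Every collaborator comes in through the constructor, so tests can pass fakes.
    public MapEnter(IUserInterface ui, IQuestBoard questBoard, IGlobalDamage globalDamage)
    {
        this.ui = ui;
        this.questBoard = questBoard;
        this.globalDamage = globalDamage;
    }

    public void OnMapEnter(Character character, string areaName)
    {
        ui.ShowText("The Swamp of the Code Smell");
        questBoard.ShowQuestsFor(areaName);
        globalDamage.ApplyDamagePerSecond(character, 5);
    }
}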
Now, the problem comes when I want to extend MapEnter's effects. Let's say that in addition to the three effects I initially planned, I now want to add a new functionality to it:
Effect List:
Start to play a BGM of the area
Now I will need to change the constructor of the MapEnter class to know about BgmPlayer, change its implementation of OnMapEnter(), change the unit test cases, and so on and so forth.
This enables unit testing, but MapEnter is strongly tied to the other classes, so it appears to have high coupling.
Approach 2: Publisher-Subscriber Pattern
A big plus of this approach is that it is now super easy to add any new ideas to MapEnter: it's as easy as adding lines of code that add/remove methods on the event. MapEnter no longer needs to take N parameters in its constructor.
Here I'm applying the Observer Pattern even though I know exactly who is going to listen to this event at compile time, which means I could achieve this without the observer pattern.
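A sketch of Approach 2 using a plain C# event (reusing the hypothetical interfaces from the sketch above, plus a made-up IBgmPlayer):

using System;

public interface IBgmPlayer { void PlayThemeOf(string areaName); }

public class MapEnter
{
    // Raised when the character enters a new area; the payload is the area name.
    public event Action<string> AreaEntered;

    public void Enter(string areaName)
    {
        AreaEntered?.Invoke(areaName);
    }
}

public static class GameSetup
{
    public static MapEnter Wire(IUserInterface ui, IQuestBoard quests, IBgmPlayer bgm)
    {
        var mapEnter = new MapEnter();
        // Adding a new effect is one more subscription; MapEnter itself is unchanged.
        mapEnter.AreaEntered += area => ui.ShowText(area);
        mapEnter.AreaEntered += area => quests.ShowQuestsFor(area);
        mapEnter.AreaEntered += area => bgm.PlayThemeOf(area);
        return mapEnter;
    }
}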
My concerns are:
Does this reduce coupling? Is it good to use the observer pattern in this kind of cases?
Does the Unit Test Argument in Approach 1 justify Approach 1?
Wouldn't it be easier to understand the code structure if a class held all the references to what it needs to call, as in Approach 1? If other programmers on my team silently added new subscribers to MapEnter's event, how would I know without going through all of the event's references? Or is it expected that I do so for every single event in my application when things go wrong?
If I am justified in using the observer pattern here because it reduces coupling, then literally all methods could, and should, be invoked via events, as long as the listeners do not care about who is called first and there are no chains of observers. Then this pattern will be everywhere, and that sounds like it is going to be painful to understand the code, even if I ensure there are only one or two levels of observer chains.
Thanks in advance.
The decision depends on your situation; I think it is important to understand the difference between the Observer and Publisher/Subscriber patterns.
The Observer pattern is mostly implemented in a synchronous way: the observable calls the appropriate method of all its observers when some event occurs. The Publisher/Subscriber pattern is mostly implemented in an asynchronous way (via a message queue).
In the Observer/Observable pattern, the observers are aware of the observable, whereas in Publisher/Subscriber, publishers and subscribers don't need to know each other; they simply communicate with the help of message queues.
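To make the contrast concrete, a rough in-process C# sketch (all names invented; a real Publisher/Subscriber setup would typically put an asynchronous message queue where this simple bus is):

using System;
using System.Collections.Generic;

// Observer: the observable holds direct references to its observers
// and calls them synchronously when the event occurs.
public interface IHealthObserver { void OnHealthChanged(int hp); }

public class Observable
{
    private readonly List<IHealthObserver> observers = new List<IHealthObserver>();
    public void Subscribe(IHealthObserver o) { observers.Add(o); }
    public void NotifyHealthChanged(int hp)
    {
        foreach (var o in observers) o.OnHealthChanged(hp);
    }
}

// Publisher/Subscriber: both sides know only the broker, never each other.
public static class MessageBus
{
    private static readonly Dictionary<string, List<Action<object>>> handlers =
        new Dictionary<string, List<Action<object>>>();

    public static void Subscribe(string topic, Action<object> handler)
    {
        if (!handlers.TryGetValue(topic, out var list))
            handlers[topic] = list = new List<Action<object>>();
        list.Add(handler);
    }

    public static void Publish(string topic, object message)
    {
        if (handlers.TryGetValue(topic, out var list))
            foreach (var h in list) h(message);
    }
}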
Closed 8 years ago as opinion-based; not accepting answers.
I have a C# project that needs refactoring. The project uses WPF + the MVVM Light toolkit. I found a MainViewModel(...) constructor that receives about 50 parameters (factory interfaces). I think that can't be right. Am I correct? I'm interested because I want to improve my OOP thinking. Thanks.
P.S. Sorry for my grammar. Correct me if you find errors.
Clean Code: A Handbook of Agile Software Craftsmanship, page 40, states that...
The ideal number of arguments for a function is zero (niladic). Next comes one (monadic), followed closely by two (dyadic). Three arguments (triadic) should be avoided where possible. More than three (polyadic) requires very special justification - and then shouldn't be used anyway.
Consider the book as guidelines for software design, and as such, recommendations when thinking about your code structure.
50 factory interfaces means your ViewModel is way too big and trying to do too many things at the same time. You should break it into separate ViewModels that will appear as properties on the main view model.
WPF allows composition, and any framework that allows ViewModel-first composition (i.e. anything except PRISM) will compose the corresponding views from the ViewModels it encounters. I'm not sure about MVVM Light, but with Caliburn.Micro this is almost a non-issue.
If MVVM Light doesn't automate this, you'll have to bind the WPF controls that will contain a specific child model's view to the child model property on the main view model.
Another option is to bundle multiple factory interfaces into parameter objects and pass these to the constructor, bringing the number of parameters to 4-5 instead of 50. This is the Introduce Parameter Object refactoring. Some tools like ReSharper provide automation support for this refactoring.
If you combine this with a DI container the parameter objects can get initialized automagically simply by registering the individual interfaces.
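A rough sketch of that refactoring (all factory and group names are invented for illustration):

public interface IOrderFactory { }
public interface IInvoiceFactory { }
public interface IShipmentFactory { }

// One parameter object bundles a group of related factories.
public class OrderingServices
{
    public OrderingServices(IOrderFactory orders, IInvoiceFactory invoices,
                            IShipmentFactory shipments)
    {
        Orders = orders;
        Invoices = invoices;
        Shipments = shipments;
    }

    public IOrderFactory Orders { get; }
    public IInvoiceFactory Invoices { get; }
    public IShipmentFactory Shipments { get; }
}

public class MainViewModel
{
    // A handful of parameter objects instead of ~50 individual factories.
    public MainViewModel(OrderingServices ordering /* , plus a few more groups */)
    {
        Ordering = ordering;
    }

    public OrderingServices Ordering { get; }
}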
The best solution, though, is to break the main model into submodels.
You might look into using a dependency injector like Unity. Register all of the service, factory, and associated classes you need with the Unity container, and then you only need a single parameter for your ViewModel constructor: the Unity container itself.
50 parameters for a constructor seems insane to me...
Closed 8 years ago as opinion-based; not accepting answers.
I'm just getting started with Ninject. I understand the principles of Dependency Injection and I know how to use Ninject. But I'm a little confused right now: opinions differ widely when it comes to the Service Locator pattern.
My application is built upon a strictly modular basis. I try to use constructor injection as much as I can and this works pretty well, though it's a little bit messy (in my opinion).
Now, when an extension (external code) wants to benefit from this system, wouldn't it be required to have access to the kernel? I mean, right now I have one static class which gives access to all subsystems of my application. An alternative would be having access to the kernel (the Service Locator pattern) and grabbing the subsystem dependencies from it.
Here I can easily avoid giving access to the kernel or, to be more explicit, not allow dependencies on the kernel.
But if an extension now wants to use any components (interfaces) of my application, from any subsystem, it would be required to have access to the kernel in order to resolve them, because Ninject does not resolve automatically as long as you're not using "kernel.Get()", right?
Phew, it's really difficult to explain this in an understandable way. I hope you guys get what I'm aiming for.
Why is it so "bad" to have a dependency on the kernel or a wrapper of it? I mean, you can't avoid all dependencies. For example, I still have one on my "Core" class which gives access to all subsystems.
What if an extension wants to register its own module for further usage?
I can't find any good answer as to why this is a bad approach, but I read it quite often. Moreover, it is stated that Ninject does NOT use this approach, unlike Unity or similar frameworks.
Thanks :)
There are religious wars about this...
The first thing that people say when you mention a Service Locator is: "but what if I want to change my container?". This argument is almost always invalid given that a "proper" Service Locator could be abstract enough to allow you to switch the underlying container.
That said, use of a Service Locator has, in my experience, made the code difficult to work with. Your Service Locator must be passed around everywhere, and then your code is tightly coupled to the very existence of a Service Locator.
When you use a Service Locator, you have two main options to maintain modules in a "decoupled" (loosely used here..) way.
Option 1:
Pass your locator into everything that requires it. Essentially this means your code becomes lots and lots of this sort of code:
var locator = _locator;
var customerService = locator.Get<ICustomerService>();
var orders = customerService.GetOrders(locator, customerId); // locator again
// .. further down..
var repo = locator.Get<ICustomerRepository>();
var orderRepo = locator.Get<IOrderRepository>();
// ...etc...
Option 2:
Smash all of your code into a single assembly and provide a public static Service Locator somewhere. This is even worse... and ends up being the same as above (just with direct calls to the Service Locator).
Ninject is lucky (by lucky I mean it has a great maintainer/extender in Remo) in that it has a heap of extensions that allow you to fully utilise Inversion of Control in almost all parts of your system, eliminating the sort of code I showed above.
This is a bit against SO's policy, but to extend on Simon's answer I'll direct you to Mark Seemann's excellent blog post: http://blog.ploeh.dk/2010/02/03/ServiceLocatorisanAnti-Pattern/
Some of the comments to the blog post are very interesting, too.
Now, to address your problem with Ninject and extensions - which I assume are unknown to you/the composition root at the time you write them - I would like to point out NinjectModules. Ninject already features such an extensibility mechanism, which is heavily used with/by all the ninject.extension.XYZ dlls.
What you'll do is implement some FooExtensionModule : NinjectModule classes in your extensions. The modules contain the Bind methods. Then you tell Ninject to load all modules from some dlls. That's it.
It's explained in far greater detail here: https://github.com/ninject/ninject/wiki/Modules-and-the-Kernel
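A minimal sketch of such a module (IFoo and ExtensionFoo are placeholders):

using Ninject;
using Ninject.Modules;

public interface IFoo { }
public class ExtensionFoo : IFoo { }

// Ships inside the extension dll: the bindings live in a module.
public class FooExtensionModule : NinjectModule
{
    public override void Load()
    {
        Bind<IFoo>().To<ExtensionFoo>();
    }
}

// In the composition root of the host application:
// var kernel = new StandardKernel();
// kernel.Load("Extensions/*.dll"); // loads every module found in the matching dlls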
Drawbacks:
Extensions depend on Ninject
you may need to recompile your extensions when you "update ninject"
as the software grows, it will make it more and more expensive to switch DI containers
When using Rebind, issues may arise which are difficult to track down (though this is the case whether you're using Modules or not).
Especially when extension developers don't know about other extensions, they might create identical or conflicting bindings (such as .Bind<string>().ToConst("Foo") and .Bind<string>().ToConst("Bar")). Again, this is also the case when you're not using Modules, but extensions add one more layer of complexity.
Advantage:
simple and to the point; there's no extra layer of complication/abstraction of the kind you'd need to abstract the container away.
I've used the NinjectModule approach in a not-so-small Application (15k unit/component tests) with a lot of success.
If all you need are simple bindings like .Bind<IFoo>().To<Foo>() without scopes and so on, you might also consider a simpler system: put attributes on classes, scan for them, and create the bindings in the composition root. This approach is way less powerful, but because of that it is also much more "portable" (adaptable to other DI containers).
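A sketch of such a home-grown system (the attribute and the scanning code are entirely custom, not part of Ninject):

using System;
using System.Linq;
using System.Reflection;
using Ninject;

// Mark an implementation with the service interface it should be bound to.
[AttributeUsage(AttributeTargets.Class)]
public class BindToAttribute : Attribute
{
    public Type Service { get; }
    public BindToAttribute(Type service) { Service = service; }
}

public interface IBar { }

[BindTo(typeof(IBar))]
public class Bar : IBar { }

public static class CompositionRoot
{
    public static IKernel BuildKernel(params Assembly[] assemblies)
    {
        var kernel = new StandardKernel();
        foreach (var type in assemblies.SelectMany(a => a.GetTypes()))
        {
            // Equivalent to Bind<IBar>().To<Bar>() for every attributed class.
            foreach (var attr in type.GetCustomAttributes<BindToAttribute>())
                kernel.Bind(attr.Service).To(type);
        }
        return kernel;
    }
}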
Dependency Injection and late instantiation
The idea of the composition root is that (whenever possible) you create all objects (the entire object graph) in one go. For example, in the Main method you might have kernel.Get<IMainViewModel>().Show().
However, sometimes this is not feasible or appropriate. In such cases you will need to use factories. There's actually a bunch of answers in this regard on stackoverflow already.
There are three basic types:
To create an instance of Foo which requires instances of Dependency1 and Dependency2 (ctor-injected), create a class FooFactory which gets one instance each of Dependency1 and Dependency2 ctor-injected itself. The FooFactory.Create method will then do new Foo(this.dependency1, this.dependency2). (See the sketch after this list.)
use ninject.extensions.Factory:
use Func<Foo> as a factory: you can have a Func<Foo> injected and then call it to create an instance of Foo.
use interfaces and the .ToFactory() binding (I recommend this approach: cleaner code, better testability). For example: IFooFactory with method Foo Create(). Bind it like: Bind<IFooFactory>().ToFactory();
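Sketches of the first and third options (the types are placeholders matching the names above):

// Placeholder types from the example above.
public class Dependency1 { }
public class Dependency2 { }
public class Foo
{
    public Foo(Dependency1 d1, Dependency2 d2) { }
}

// Option 1: a hand-rolled factory. Foo's dependencies are ctor-injected
// into the factory once; Create() builds fresh Foo instances on demand.
public class FooFactory
{
    private readonly Dependency1 dependency1;
    private readonly Dependency2 dependency2;

    public FooFactory(Dependency1 dependency1, Dependency2 dependency2)
    {
        this.dependency1 = dependency1;
        this.dependency2 = dependency2;
    }

    public Foo Create()
    {
        return new Foo(this.dependency1, this.dependency2);
    }
}

// Option 3: with ninject.extensions.Factory you only declare the interface;
// Bind<IFooFactory>().ToFactory(); lets Ninject generate the implementation.
public interface IFooFactory
{
    Foo Create();
}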
Extensions which replace Implementations
IMHO this is not one of the goals of dependency-injection containers. That doesn't mean it's impossible, but it does mean you've got to figure it out yourself.
The simplest thing you could do with Ninject would be to use .Rebind<IFoo>().To<SomeExtensionsFoo>(). However, as stated before, that's a bit brittle: if the Bind and Rebind are executed in the wrong sequence, it fails, and if there are multiple Rebinds, the last one wins - but is it the correct one?
So let's take it one step further. Imagine:
`.Bind<IFoo>().To<SomeExtensionsFoo>().WhenExtensionEnabled<SomeExtension>();`
You can devise your own custom WhenExtensionEnabled<TExtension>() extension method which extends the When(...) condition syntax.
You'll have to devise a way to detect whether an extension is enabled or not.
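A rough sketch of the idea; the ExtensionRegistry here is entirely made up, and note that Ninject's When condition receives the activation request:

using System;
using System.Collections.Generic;
using Ninject.Modules;

// Placeholders matching the names used above.
public interface IFoo { }
public class SomeExtension { }
public class SomeExtensionsFoo : IFoo { }

// Made-up registry; how you detect enabled extensions is up to you.
public static class ExtensionRegistry
{
    private static readonly HashSet<Type> enabled = new HashSet<Type>();
    public static void Enable<TExtension>() { enabled.Add(typeof(TExtension)); }
    public static bool IsEnabled<TExtension>() { return enabled.Contains(typeof(TExtension)); }
}

public class SomeExtensionModule : NinjectModule
{
    public override void Load()
    {
        // Only resolve to the extension's Foo while the extension is enabled.
        // A WhenExtensionEnabled<TExtension>() extension method would just wrap this call.
        Bind<IFoo>().To<SomeExtensionsFoo>()
            .When(request => ExtensionRegistry.IsEnabled<SomeExtension>());
    }
}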
Closed 4 years ago as opinion-based; not accepting answers.
Shorter version of the rest of the question: I've got a problem, and an IoC container is a tool. The problem sounds like something IoC might be able to solve, but I haven't read enough of the tool's instructions to know for sure. I'm curious if I picked the wrong tool or which chapter of the instruction manual I might be able to skip to for help.
I work on a library of Windows Forms controls. In the past year, I've stumbled into unit testing and become passionate about improving the quality of our automated tests. Testing controls is difficult, and there's not much information about it online. One of the nasty things is that separating the interaction logic from the UI glue that calls it leads to each control having several more dependencies than I would normally consider healthy for a class. Creating fakes for these elements when testing their integration with the control is quite tedious, and I'm looking into IoC for a solution.
There's one hurdle I'm not sure how to overcome. To set up the container, you need to have some bootstrapper code that runs before the rest of the application. In an application there is a very clear place for this stuff. In a library it's not so clear.
The first solution that comes to mind is creating a class that provides a static instance of the container and sets up the container in its type initializer. This would work for runtime, but in the test environment I'm not sure how well it would work. Tests are allowed to run in parallel and many tests will require different dependencies, so the static shared state will be a nightmare. This leads me to believe the container creation should be an instance method, but then I have a chicken and egg problem as a control would have to construct its container before it can create itself. The control's type initializer comes to mind, but my tests won't be able to modify this behavior. This led me to think of making the container itself a dependency of the control where user-visible constructors provide the runtime default implementation, but this leaves it up to my tests to set up their own containers. I haven't given this much thought, but it seems like this would be on the same level of effort as what I have now: tests that have to initialize 3-5 dependencies per test.
Normally I'd try a lot of things on my own to see what hurts. I'm under some harsh deadlines at the moment so I don't have much time to experiment as I write code; I only get brief moments to think about this and haven't put much to paper. I'm sure someone else has had a similar problem, so it'd be nice if I didn't have to reinvent the wheel.
Has anyone else attacked this problem? Are there some examples of strategies that will address these needs? Am I just a newbie and overcomplicating things due to my inexperience? If it's the latter, I'd love any resources you want to share for solving my ignorance.
Update:
I'd like to respond to Mark Seemann's answer, but it requires more characters than the comment field allows.
I'm already toying with presentation model patterns. The view in this case is the public control class and each has one or more controller classes. When some UI event is triggered on the control, the only logic it performs is deciding which controller methods need to be called.
The short version of my exploration of this design is that my controller classes are tightly coupled to their views. Based on the statement that DI containers work with loosely coupled code, I'm reading "wrong tool for the job". I might be able to design a more loosely coupled architecture, at which point a DI container may be easier to use. But that's going to require some significant effort, and it'd be an overhaul of shipped code; I'll have to experiment with new stuff before tiptoeing around the older stuff. That's a question for another day.
Why do I even want to inject strongly coupled types rather than using local defaults? Some of the seams are intended to be extensibility points for advanced users. I have to test various scenarios involving incorrect implementations and also verify I meet my contracts; mock objects are a great fit.
For the current design, Chris Ballard's suggestion of "poor man's DI" is what I've more or less been following, and for my strongly coupled types it's just a lot of tedious setup. I had this vision that I'd be able to push all of that tedium into some DI container setup method, but the more I try to justify that approach, the more convinced I become that I'm trying to hang pictures with a sledgehammer.
I'll wait 24 hours or so to see if discussion progresses further before accepting.
Dependency Injection (Inversion of Control) is a set of principles and patterns that you can use to compose loosely coupled code. It's a prerequisite that the code is loosely coupled. A DI Container isn't going to make your code loosely coupled.
You'll need to find a way to decouple your UI rendering from UI logic. There are lots of Presentation Patterns that describe how to do that: Model View Controller, Model View Presenter, Presentation Model, etc.
Once you have good decoupling, DI (and containers) can be used to compose collaborators.
Since the library should not have any dependency on the IoC framework itself, we shipped spring.net IoC config XML files with the standard configuration for each lib. That was modular in theory, because every dll had its own sub-config. All sub-configs were then assembled to become part of the main config.
But in reality this approach was error-prone and too interdependent: one lib's config had to know about properties of the others to avoid duplicates.
My conclusion: either use @Chris Ballard's "poor man's dependency injection" or handle all dependencies in one big, ugly, tightly coupled config module in the main app.
Depending on how complex your framework is, you may get away with hand-coding constructor-based dependency injection, with a parameterless default constructor (which uses the normal concrete implementation) and a second constructor for injecting the dependency for unit-test purposes, e.g.:
public class MyClass
{
    private readonly IMyDependency dependency;

    // Used by unit tests to inject a fake/mock implementation.
    public MyClass(IMyDependency dependency)
    {
        this.dependency = dependency;
    }

    // Used by production code: falls back to the default implementation.
    public MyClass() : this(new MyDefaultImplementation())
    {
    }
}
Closed 6 years ago as needing more focus; not accepting answers.
I was reading an article on MSDN about reflection, but I was not able to understand even 10% of its benefits and usage.
Could you please provide a brief overview of what reflection is and how I can benefit from it?
Reflection allows you to write code that can inspect various aspects of the code itself.
It enables you to do simple things like:
Check the type of an object at runtime (a simple call to GetType(), for example)
Inspect the Attributes of an object at runtime to change the behavior of a method (the various serialization methods in .NET)
To much more complicated tasks like:
Loading an assembly at runtime, finding a specific class, determining if it matches a given Interface, and invoking certain members dynamically.
The former is the much more common usage. The latter is helpful to developers working on plug-in architectures for their applications, or people who want to swap assemblies at runtime depending on configuration changes.
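For example, the plug-in scenario might look roughly like this (IPlugin is a made-up contract; the API calls are the standard System.Reflection ones):

using System;
using System.IO;
using System.Linq;
using System.Reflection;

// Hypothetical plug-in contract.
public interface IPlugin { void Run(); }

public static class PluginLoader
{
    public static void LoadAndRunAll(string directory)
    {
        foreach (var file in Directory.GetFiles(directory, "*.dll"))
        {
            // Load the assembly at runtime and find types implementing IPlugin.
            var assembly = Assembly.LoadFrom(file);
            var pluginTypes = assembly.GetTypes()
                .Where(t => typeof(IPlugin).IsAssignableFrom(t) && !t.IsAbstract);

            foreach (var type in pluginTypes)
            {
                // Instantiate dynamically and invoke through the interface.
                var plugin = (IPlugin)Activator.CreateInstance(type);
                plugin.Run();
            }
        }
    }
}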
Reflection is a way for you to programmatically discover Types at runtime. This is very important because .NET languages are strongly-typed. Being able to access that metadata is extremely useful.
A big thing right now (fluent interfaces/adapters) relies heavily on reflection. In particular, static reflection is pretty big. If you want to see specific examples and a good explanation of static reflection, check out:
http://jagregory.com/writings/introduction-to-static-reflection/
http://www.lostechies.com/blogs/gabrielschenker/archive/2009/02/03/dynamic-reflection-versus-static-reflection.aspx
Of course, this a small subset of reflection in general. If you'd like more info about the general use of reflection, check out Apress Pro C# 2008 and the .NET 3.5 Platform, Fourth Edition, Chapter 16. It delves pretty in-depth into the .NET type system and how that is used within libraries and at runtime.
Reflection lets your code call methods and properties that it didn't know about when the code was compiled. One of the built-in classes that uses this is the XmlSerializer: you can pass it any object you want to convert to XML. Using reflection, it asks the object what all of its properties are, and is then able to produce an XML document that contains the elements needed to represent the object.
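For example, a small sketch (Person is a made-up type; XmlSerializer is the real .NET class):

using System;
using System.IO;
using System.Xml.Serialization;

public class Person
{
    public string Name { get; set; }
    public int Age { get; set; }
}

public static class Demo
{
    public static void Main()
    {
        // XmlSerializer reflects over Person's public properties
        // to decide which XML elements to emit.
        var serializer = new XmlSerializer(typeof(Person));
        using (var writer = new StringWriter())
        {
            serializer.Serialize(writer, new Person { Name = "Ada", Age = 36 });
            Console.WriteLine(writer.ToString()); // XML describing the Person object
        }
    }
}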
Reflection is the ability of types to provide information about themselves. For example, an assembly can tell you what it contains, a type can tell you its methods, properties and so on.
Dynamically providing this information is useful in many ways. One simple example to think about is metadata used by web services - when a person "consumes" a web service, a proxy class is generated for their client. This proxy is generated from a WSDL document and that most often is generated from type metadata generated via reflection.
Another simple example is dynamically loading types in order to perform some unit of work. One project I worked on utilized reflection to load "rules" from a database to apply to inputs in the system.
Reflection helps you do metaprogramming, which is unarguably one of the coolest programming techniques. Google for metaprogramming for more information.
One of the benefits of reflection is that it allows you to save all changes in the DataSet designer, like a transaction in SQL.
Reflection is a powerful namespace available in .NET (System.Reflection). By using it we can create objects dynamically at runtime and invoke them. It is mainly used in developing framework-type applications.