I'm working on a fork of the Divan CouchDB library and ran into a need to set some configuration parameters on the HttpWebRequest that's used behind the scenes. At first I started threading the parameters through all the layers of constructors and method calls involved, but then decided: why not pass in a configuration delegate?
So, in a more generic scenario, given:
class Foo {
    private parm1, parm2, ..., parmN;
    public Foo(parm1, parm2, ..., parmN) {
        this.parm1 = parm1;
        this.parm2 = parm2;
        ...
        this.parmN = parmN;
    }
    public Bar DoWork() {
        var r = new externallyKnownResource();
        r.parm1 = parm1;
        r.parm2 = parm2;
        ...
        r.parmN = parmN;
        r.doStuff();
    }
}
do:
class Foo {
    private Action<externallyKnownResource> configurator;
    public Foo(Action<externallyKnownResource> configurator) {
        this.configurator = configurator;
    }
    public Bar DoWork() {
        var r = new externallyKnownResource();
        configurator(r);
        r.doStuff();
    }
}
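In my actual case the resource is an HttpWebRequest, so if externallyKnownResource were HttpWebRequest, the caller would look something like this (the specific properties are just examples of the kind of thing I need to set):

var foo = new Foo(req =>
{
    req.Timeout = 5000;           // milliseconds
    req.KeepAlive = false;
    req.AllowAutoRedirect = false;
});
var result = foo.DoWork();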
The latter seems a lot cleaner to me, but it does expose to the outside world that class Foo uses externallyKnownResource.
Thoughts?
This can lead to cleaner looking code, but has a huge disadvantage.
If you use a delegate for your configuration, you lose a lot of control over how the objects get configured. The problem is that the delegate can do anything - you can't control what happens there. You're letting a third party run arbitrary code inside your class and trusting them to do the "right thing." This usually means you end up having to write a lot of code to make sure that everything was set up properly by the delegate, or you wind up with very brittle, easy-to-break classes.
It becomes much more difficult to verify that the delegate properly sets up each requirement, especially as you go deeper into the tree. Usually, the verification code ends up much messier than the original code would have been, passing parameters through the hierarchy.
I may be missing something here, but it seems like a big disadvantage to create the externallyKnownResource object down in DoWork(). This precludes easy substitution of an alternate implementation.
Why not:
public Bar DoWork( IExternallyKnownResource r ) { ... }
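A rough sketch of that idea, with the interface invented here purely to show the shape (only doStuff appears in the original example):

public interface IExternallyKnownResource
{
    void DoStuff();
}

public class Foo
{
    public Bar DoWork(IExternallyKnownResource r)
    {
        // the concrete resource can now be swapped out in tests
        r.DoStuff();
        return new Bar();
    }
}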
IMO, you're best off accepting a configuration object as a single parameter to your Foo constructor, rather than a dozen (or so) separate parameters.
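For example, something along these lines (the setting names are illustrative, not a real API):

public class FooSettings
{
    public int Timeout { get; set; }
    public bool KeepAlive { get; set; }
    // ... only the knobs Foo actually needs
}

public class Foo
{
    private readonly FooSettings settings;

    public Foo(FooSettings settings)
    {
        this.settings = settings;
    }
}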
Edit:
There's no one-size-fits-all solution, no. But the question is fairly simple. I'm writing something that consumes an externally known entity (HttpWebRequest) that's already self-validating and has a ton of potentially necessary parameters. My options, really, are to re-create almost all of the configuration parameters it has and shuttle them in every time, or to put the onus on the consumer to configure it as they see fit. – kolosy
The problem with your request is that in general it is poor class design to make the user of the class configure an external resource, even if it's a well-known or commonly used resource. It is better class design to have your class hide all of that from the user of your class. That means more work in your class, yes, passing configuration information to your external resource, but that's the point of having a separate class. Otherwise why not just have the caller of your class do all the work on your external resource? Why bother with a separate class in the first place?
Now, if this is an internal class doing some simple utility work for another class that you will always control, then you're fine. But don't expose this type of paradigm publicly.
Related
As I understand it, part of writing (unit-)testable code is that a constructor should not do real work and should only assign fields. This has worked pretty well so far, but I've come across a problem and I'm not sure what the best way to solve it is. See the code sample below.
class SomeClass
{
    private IClassWithEvent classWithEvent;

    public SomeClass(IClassWithEvent classWithEvent)
    {
        this.classWithEvent = classWithEvent;

        // (1) attach event handler in ctor.
        this.classWithEvent.Event += OnEvent;
    }

    public void ActivateEventHandling()
    {
        // (2) attach event handler in method
        this.classWithEvent.Event += OnEvent;
    }

    private void OnEvent(object sender, EventArgs args)
    {
    }
}
To me option (1) sounds fine, but then the constructor does more than only assigning fields. Option (2) feels a bit "too much".
Any help is appreciated.
A unit test would test SomeClass at most. Therefore you would typically mock classWithEvent. Using some kind of injection for classWithEvent in ctor is fine.
Just as Thomas Weller said, wiring is field assignment.
Option 2 is actually bad, IMHO. If you omit a call to ActivateEventHandling you end up with an improperly initialized class, and you need to transport the knowledge that ActivateEventHandling must be called in comments or some other way, which makes the class harder to use and probably results in a class usage that you never tested: you called ActivateEventHandling in your tests, but an uninformed user omitting the activation didn't, and you have certainly not tested your class with ActivateEventHandling left out, right? :)
Edit: There are alternative approaches here which are worth mentioning.
Depending on the paradigm it may be wise to avoid wiring events in the class at all. I need to relativize my comment on Stephen Byrne's answer.
Wiring can be regarded as context knowledge. The single responsibility principle says a class should do only one task. Furthermore, a class is more versatile if it does not have a dependency on something else. A very loosely coupled system would provide many classes which have events and handlers and do not know about other classes.
The environment is then responsible for wiring all the classes together to connect events properly with handlers.
The environment would create the context in which the classes interact with each-other in a meaningful way.
A class in this case therefore does not know to whom it will be bound, and it does not care. If it requires a value, it asks for it; whom it asks should be unknown to it. In that case there wouldn't even be an interface injected into the ctor, to avoid a dependency. This concept is similar to neurons in a brain: they also emit messages to the environment and expect answers without knowing their neighbouring neurons.
However, I regard a dependency on an interface, if it is injected by some means such as a dependency injection container, as just another paradigm, and not any less wrong.
The non-trivial task of having the environment wire up all classes on start may lead to runtime errors (mitigated by very good functional and integration test coverage, which may be hard to achieve for large projects), and it gets very annoying if you need to wire dozens of classes and probably hundreds of events manually on startup.
While I agree that wiring in an environment and not in the class itself can be nice, it is not practical for large scale code.
Ralf Westphal (one of the founders of the Clean Code Developer initiative (sorry, German only)) has written software that performs the wiring automatically, in a concept called "event based components" (a term not necessarily coined by himself). It uses naming conventions and signature matching with reflection to bind events and handlers together.
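This is not Westphal's actual implementation, just a rough sketch of what convention-based wiring with reflection can look like: every event "X" on a source object is bound to a method named "OnX" on a handler object, if one with a compatible signature exists.

using System;
using System.Reflection;

public static class ConventionWiring
{
    // Wire every event "X" on source to a method "OnX" on handler.
    public static void Wire(object source, object handler)
    {
        foreach (EventInfo evt in source.GetType().GetEvents())
        {
            MethodInfo method = handler.GetType().GetMethod("On" + evt.Name);
            if (method == null)
                continue;

            // Returns null instead of throwing if the signature doesn't match.
            Delegate del = Delegate.CreateDelegate(
                evt.EventHandlerType, handler, method, throwOnBindFailure: false);

            if (del != null)
                evt.AddEventHandler(source, del);
        }
    }
}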
Wiring events is field assignment (because delegates are nothing but simple reference variables that point to methods).
So option(1) is fine.
The point of a constructor is not to "assign fields". It is to establish the invariants of your object, i.e. things that never change during its lifetime.
So if in other methods of class you depend on being always subscribed to some object, you'd better do it in the constructor.
On the other hand, if subscriptions come and go (probably not the case here), you can move this code to another method.
The single responsibility principle dictates that such wiring should be avoided. Your class should not care how, or from where, it receives data. It would make sense to rename the OnEvent method to something more meaningful, and make it public.
Then some other class (bootstrapper, configurator, whatever) should be responsible for the wiring. Your class should only be responsible for what happens when new data comes in.
Pseudo code:
public interface IEventProvider // your IClassWithEvent
{
    event EventHandler MyEvent;
}

public class EventResponder : IEventResponder
{
    public void OnEvent(object sender, EventArgs args) { ... }
}

public class Bootstrapper
{
    public void WireEvent(IEventProvider eventProvider, IEventResponder eventResponder)
    {
        eventProvider.MyEvent += eventResponder.OnEvent;
    }
}
Note, the above is pseudo code; its only purpose is to describe the idea.
How your bootstrapper actually is implemented depends on many things. It can be your "main" method, or your global.asax, or whatever you have in place to actually configure and prepare your application.
The idea is, that whatever is responsible to prepare the application to run, should compose it, not the classes themselves, as they should be as single purpose as possible, and should not care too much about how and where they are used.
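For illustration, the composition might live in Main (the concrete types here are hypothetical):

public static class Program
{
    public static void Main()
    {
        // Composition happens here, not inside the classes themselves.
        IEventProvider provider = new ClassWithEvent();
        IEventResponder responder = new EventResponder();

        new Bootstrapper().WireEvent(provider, responder);

        // ... run the application ...
    }
}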
I want to design a class which contains a procedure to achieve a goal.
Its methods must be called in a certain order to make sure the last method, let's say "ExecuteIt", behaves correctly.
In such a case, what design pattern would you use, one that can make sure the user calls the public methods in a particular order?
If you don't follow what I am asking, can you share some concepts for choosing a design pattern, or what you consider while designing a class?
I believe you are looking for the Template Method pattern.
Template Method is what you want. It is one of the oldest, simply a formalization of a way of composing your classes.
http://en.wikipedia.org/wiki/Template_method_pattern
or as in this code sample:
abstract class AbstractParent // this is the template class
{
    // this is the template method that enforces an order of method execution
    final void executeIt()
    {
        doBefore();       // << to be implemented by subclasses
        doInTheMiddle();  // also to be implemented by subclasses
        doLast();         // << the one you want to make sure gets executed last
    }

    abstract void doBefore();
    abstract void doInTheMiddle();

    final void doLast() { /* ... */ }
}

class SubA extends AbstractParent
{
    void doBefore()      { /* ... does something ... */ }
    void doInTheMiddle() { /* ... does something ... */ }
}

class SubB extends SubA
{
    void doBefore() { /* ... does something different ... */ }
}
But it seems you are fishing for an opportunity to use a pattern as opposed to use a pattern to solve a specific type of problem. That will only lead you to bad software development habits.
Don't think about patterns. Think about how you would go around solving that specific problem without having patterns.
Imagine there were no codified patterns (which is how it was before). How would you accomplish what you want to do here (which is what people did to solve this type of problems.) When you can do that, then you will be in a much better position to understand patterns.
Don't use them as cookie cutters. That is the last thing you want to do.
It's basically not a pattern, but: if you want to make sure the methods are executed in a specific order, make the class have only one public method, which then calls the non-public methods in the right sequence.
The simple and pragmatic approach to enforcing a particular sequence of steps in any API is to define a collection of classes (instead of just one class) in such way that every next valid step takes as a parameter an object derived from the previous step, i.e.:
Fuel coal = CoalMine.getCoal();
Cooker stove = new Cooker(coal);

Filling apple = new AppleFilling();
Pie applePie = new Pie(apple);

applePie.bake(stove);
That is to say that to bake a pie you need to supply a Cooker object, which in turn requires some sort of suitable fuel to be instantiated first. Similarly, before you can get an instance of a Pie you'd need to get some Filling ready.
In this instance the semantics of the API use are explicitly enforced by its syntax. Keep it simple.
I think you should not really execute anything in those methods; just prepare the statements, resources and whatever else you want.
This way, whatever order the user invokes the methods in, the actual execution is assured to be ordered, simply because you have total control over the real execution right up until you run it.
IMHO Template Method has very little to do with your goal.
EDIT:
To be clearer: make your class have one public method, Execute, and a number of other public methods to tell your class what to do (when to do it is your responsibility, not the user's); then make a number of private methods that do the real job. They will be invoked in the right order by your Execute, once the user has finished setting things up.
Give the user the ability to configure; keep execution for yourself. He tells you what, you decide how.
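A rough sketch of that idea (all the names are invented for illustration):

public class Procedure
{
    private string _input;
    private bool _verbose;

    // The user only says *what* should happen...
    public void SetInput(string input) { _input = input; }
    public void UseVerboseOutput() { _verbose = true; }

    // ...and Execute decides *when* each step happens.
    public void Execute()
    {
        Prepare();
        DoTheWork();
        CleanUp();
    }

    private void Prepare()   { /* ... */ }
    private void DoTheWork() { /* ... */ }
    private void CleanUp()   { /* ... */ }
}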
Template Method is rational if you have a class hierarchy and the base class defines protected operation steps in its public template method. Could you elaborate on your question?
As a general concept you should choose a pattern as a standard solution to a standard problem, so I agree with Oded: the "Template Method" seems to fit your needs (though what you have explained is maybe too little to tell).
Don't use patterns as a "fetish". What you have to keep in mind is:
How can I frame my problem in a standard way?
Is there a pattern for this?
Is this the simplest way?
I am refactoring a class so that the code is testable (using NUnit and RhinoMocks as the testing and isolation frameworks) and have found myself with a method that is dependent on another (i.e. it depends on something which is created by that other method). Something like the following:
public class Impersonator
{
    private ImpersonationContext _context;

    public void Impersonate()
    {
        ...
        _context = GetContext();
        ...
    }

    public void UndoImpersonation()
    {
        if (_context != null)
            _someDepend.Undo();
    }
}
Which means that to test UndoImpersonation, I need to set it up by calling Impersonate (Impersonate already has several unit tests to verify its behaviour). This smells bad to me but in some sense it makes sense from the point of view of the code that calls into this class:
public void ExerciseClassToTest(Impersonator c)
{
    try
    {
        if (NeedImpersonation())
        {
            c.Impersonate();
        }
        ...
    }
    finally
    {
        c.UndoImpersonation();
    }
}
I wouldn't have worked this out if I hadn't tried to write a unit test for UndoImpersonation and found myself having to set up the test by calling the other public method. So, is this a bad smell, and if so, how can I work around it?
Code smell has got to be one of the most vague terms I have ever encountered in the programming world. For a group of people that pride themselves on engineering principles, it ranks right up there in terms of unmeasurable rubbish, and is about as useless a measure as LOCs per day for programmer efficiency.
Anyway, that's my rant, thanks for listening :-)
To answer your specific question, I don't believe this is a problem. If you test something that has pre-conditions, you need to ensure the pre-conditions have been set up first for the given test case.
One of the tests should be what happens when you call it without first setting up the pre-conditions - it should either fail gracefully or set up its own pre-condition if the caller hasn't bothered to do so.
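For example, a test along these lines (NUnit; the expected graceful behaviour is assumed here, since the question doesn't state what it should be):

[Test]
public void UndoImpersonation_WithoutImpersonate_FailsGracefully()
{
    var impersonator = new Impersonator();

    // The pre-condition (calling Impersonate) is deliberately missing.
    Assert.DoesNotThrow(() => impersonator.UndoImpersonation());
}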
Well, there is a bit too little context to tell, but it looks like _someDepend should be initialized in the constructor.
Initializing fields in an instance method is a big NO for me. A class should be fully usable (i.e. all methods work) as soon as it is constructed; so the constructor(s) should initialize all instance variables. See e.g. the page on single step construction in Ward Cunningham's wiki.
The reason initializing fields in an instance method is bad is mainly that it imposes an implicit ordering on how you can call methods. In your case, TheMethodIWantToTest will do different things depending on whether DoStuff was called first. This is generally not something a user of your class would expect, so it's bad :-(.
That said, sometimes this kind of coupling may be unavoidable (e.g. if one method acquires a resource such as a file handle, and another method is needed to release it). But even that should be handled within one method if possible.
What applies to your case is hard to tell without more context.
Provided you don't consider mutable objects a code smell by themselves, having to put an object into the state needed for a test is simply part of the set-up for that test.
This is often unavoidable, for instance when working with remote connections - you have to call Open() before you can call Close(), and you don't want Open() to automatically happen in the constructor.
However you want to be very careful when doing this that the pattern is something readily understood - for instance I think most users accept this kind of behaviour for anything transactional, but might be surprised when they encounter DoStuff() and TheMethodIWantToTest() (whatever they're really called).
It's normally best practice to have a property that represents the current state - again look at remote or DB connections for an example of a consistently understood design.
The big no-no is for this to ever happen for properties. Properties should never care what order they are called in. If you have a simple value that does depend on the order of methods then it should be a parameterless method instead of a property-get.
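A minimal sketch of the state-property convention mentioned above, assuming a generic connection-like class rather than any particular library:

public class RemoteConnection
{
    public bool IsOpen { get; private set; }

    public void Open()
    {
        // ... acquire the underlying resource ...
        IsOpen = true;
    }

    public void Close()
    {
        if (!IsOpen)
            return; // or throw; either way the required ordering is explicit

        // ... release the resource ...
        IsOpen = false;
    }
}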
Yes, I think there is a code smell in this case. Not because of dependencies between methods, but because of the vague identity of the object. Rather than having an Impersonator which can be in different persona states, why not have an immutable Persona?
If you need a different Persona, just create a new one rather than changing the state of an existing object. If you need to do some cleanup afterwards, make Persona disposable. You can keep the Impersonator class as a factory:
using (var persona = impersonator.createPersona(...))
{
    // do something with the persona
}
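A sketch of how the disposable Persona might look; it assumes the context from the question exposes an Undo(), which is an invented detail here:

public class Persona : IDisposable
{
    private readonly ImpersonationContext _context;

    public Persona(ImpersonationContext context)
    {
        _context = context;
    }

    public void Dispose()
    {
        // Clean up when the using block ends (assumes the context can undo itself).
        _context.Undo();
    }
}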
To answer the title: having methods call each other (chaining) is unavoidable in object oriented programming, so in my view there is nothing wrong with testing a method that calls another. A unit test can be a class after all, it's a "unit" you're testing.
The level of chaining depends on the design of your object - you can either fork or cascade.
Forking:
classToTest1.SomeDependency.DoSomething()
Cascading:
classToTest1.DoSomething() (which internally would call SomeDependency.DoSomething)
But as others have mentioned, definitely keep your state initialisation in the constructor, which, from what I can tell, will probably solve your issue.
When writing GUI apps I use a top level class that "controls" or "coordinates" the application. The top level class would be responsible for coordinating things like initialising network connections, handling application wide UI actions, loading configuration files etc.
At certain stages in the GUI app control is handed off to a different class, for example the main control swaps from the login screen to the data entry screen once the user authenticates. The different classes need to use functionality of objects owned by the top level control. In the past I would simply pass the objects to the subordinate controls or create an interface. Lately I have changed to passing method delegates instead of whole objects with the two main reasons being:
It's a lot easier to mock a method than a class when unit testing,
It makes the code more readable by documenting in the class constructor exactly which methods subordinate classes are using.
Some simplified example code is below:
delegate bool LoginDelegate(string username, string password);
delegate void UpdateDataDelegate(BizData data);
delegate void PrintDataDelegate(BizData data);

class MainScreen {
    private MyNetwork m_network;
    private MyPrinter m_printer;
    private LoginScreen m_loginScreen;
    private DataEntryScreen m_dataEntryScreen;

    public MainScreen() {
        m_network = new MyNetwork();
        m_printer = new MyPrinter();
        m_loginScreen = new LoginScreen(m_network.Login);
        m_dataEntryScreen = new DataEntryScreen(m_network.Update, m_printer.Print);
    }
}

class LoginScreen {
    LoginDelegate Login_External;

    public LoginScreen(LoginDelegate login) {
        Login_External = login;
    }
}

class DataEntryScreen {
    UpdateDataDelegate UpdateData_External;
    PrintDataDelegate PrintData_External;

    public DataEntryScreen(UpdateDataDelegate updateData, PrintDataDelegate printData) {
        UpdateData_External = updateData;
        PrintData_External = printData;
    }
}
My question is: while I prefer this approach and it makes good sense to me, how is the next developer that comes along going to find it? In sample and open source C# code, interfaces are the preferred approach for decoupling, whereas this approach of using delegates leans more towards functional programming. Am I likely to get subsequent developers swearing under their breath at what is, to them, a counter-intuitive approach?
It's an interesting approach. You may want to pay attention to two things:
Like Philip mentioned, when you have a lot of methods to define, you will end up with a big constructor. This will cause deep coupling between classes. One more or one less delegate will require everyone to modify the signature. You should consider making them public properties and using some DI framework.
Breaking down the implementation to the method level can be too granular sometimes. With class/interface, you can group methods by the domain/functionality. If you replace them with delegates, they can be mixed up and become difficult to read/maintain.
It seems the number of delegates is an important factor here.
While I can certainly see the positive side of using delegates rather than an interface, I have to disagree with both of your bullet points:
"It's a lot easier to mock a method than a class when unit testing". Most mock frameworks for c# are built around the idea of mocking a type. While many can mock methods, the samples and documentation (and focus) are normally around types. Mocking an interface with one method is just as easy or easier to mock than a method.
"It makes the code more readable by documenting in the class constructor exactly which methods subordinate classes are using." Also has it's cons - once a class needs multiple methods, the constructors get large; and once a subordinate class needs a new property or method, rather than just modifying the interface you must also add it to allthe class constructors up the chain.
I'm not saying this is a bad approach by any means - passing functions rather than types does clearly state what you are doing and can reduce your object model complexity. However, in c# your next developer will probably see this as odd or confusing (depending on skill level). Mixing bits of OO and Functional approaches will probably get a raised eyebrow at the very least from most developers you will work with.
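For comparison, the interface-based equivalent of the login example would be roughly this (ILoginService is a made-up name); a typical C# mock framework can stub this single-method interface about as easily as a delegate can be replaced with a lambda:

public interface ILoginService
{
    bool Login(string username, string password);
}

class LoginScreen
{
    private readonly ILoginService m_loginService;

    public LoginScreen(ILoginService loginService)
    {
        m_loginService = loginService;
    }
}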
I am running into a design disagreement with a co-worker and would like people's opinion on object constructor design. In brief, which object construction method would you prefer and why?
public class myClass
{
    Application m_App;

    public myClass(Application app)
    {
        m_App = app;
    }

    public void DoSomething()
    {
        m_App.Method1();
        m_App.Object.Method();
    }
}
Or
public class myClass
{
    Object m_someObject;
    Object2 m_someOtherObject;

    public myClass(Object instance, Object2 instance2)
    {
        m_someObject = instance;
        m_someOtherObject = instance2;
    }

    public void DoSomething()
    {
        m_someObject.Method();
        m_someOtherObject.Method();
    }
}
The back story is that I ran into what appears to be a fundamentally different view on constructing objects today. Currently, objects are constructed using an Application class which contains all of the current settings for the application (event log destination, database strings, etc.). So the constructor for every object looks like:
public Object(Application)
Many classes hold the reference to this Application class individually. Inside each class, the values of the application are referenced as needed. E.g.
Application.ConfigurationStrings.String1 or Application.ConfigSettings.EventLog.Destination
Initially I thought you could use both methods. The problem is that at the bottom of the call stack you call the parameterized constructor, and then higher up the stack, when a new object expects a reference to the Application object to be there, we ran into a lot of null reference errors and saw the design flaw.
My feeling on using an Application object to set up every class is that it breaks the encapsulation of each object and allows the Application class to become a god class which holds information for everything. Beyond that, I have trouble clearly articulating the downsides of this method.
I wanted to change each object's constructor to accept only the arguments it needs, so that public Object(Application) would change to public Object(classMember1, classMember2, etc.). I feel that this makes the class more testable, isolates change, and doesn't obfuscate the parameters it actually needs.
Currently another programmer does not see the difference, and I am having trouble finding examples or good reasons to change the design; saying it's my instinct and that it goes against the OO principles I know is not a compelling argument. Am I off base in my design thinking? Does anyone have any points to add in favor of one or the other?
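Concretely, the change being argued for is roughly this (EventLogger and the setting name are invented for illustration; the Application path is the one from the existing design):

public class EventLogger
{
    private readonly string m_destination;

    // Before: public EventLogger(Application app)
    //         { m_destination = app.ConfigSettings.EventLog.Destination; }

    // After: the constructor states exactly what this class depends on.
    public EventLogger(string eventLogDestination)
    {
        m_destination = eventLogDestination;
    }
}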
Hell, why not just make one giant class called "Do" and one method on it called "It" and pass the whole universe into the It method?
Do.It(universe)
Keep things as small as possible. Discrete means easier to debug when things inevitably break.
My view is that you give the class the smallest set of "stuff" it needs for it to do its job. The "Application" method is easier upfront, but as you've seen already, it will lead to maintenance issues.
I think Steve McConnell put it very succinctly. He states:
"The difference between the
'convenience' philosophy and the
'intellectual manageability'
philosophy boils down to a difference
in emphasis between writing programs
and reading them. Maximizing scope
may indeed make programs easy to
write, but a program in which any
routine can use any variable at any
time is harder to understand than a
program that uses well-factored
routines. In such a program you can't
understand only one routine; you have
to understand all the other routines
with which that routine shares global
data. Such programs are hard to read,
hard to debug, and hard to modify." [McConnell 2004]
I wouldn't go so far as to call the Application object a "god" class; it really seems like a utility class. Is there a reason it isn't a public static class (or, better yet, a set of classes) that the other classes can use at will?