Making my class 'fluent' - C#

I discovered yesterday that I can simulate a fluent interface if I return the class instance from each method like this...
public class IsThisFluent
{
    public IsThisFluent Stuff()
    {
        //...
        return this;
    }

    public IsThisFluent OtherStuff()
    {
        // ...
        return this;
    }
}
Is this all there is to it?
I admit, I'm a bear of very little brain and I want to carry on with this, but I thought it might be best to check with a grown-up.
Am I missing something?
Is there a 'gotcha' that I haven't spotted with this pattern?

That's pretty much it.
Here's a really good article on it:
http://rrpblog.azurewebsites.net/?p=33
EDIT
The original site seems to have died, so here's WayBackMachine to the rescue
I also really like this example from this answer:
https://stackoverflow.com/a/1795027/131809
public class Coffee
{
    private bool _cream;
    private int _ounces;

    public static Coffee Make => new Coffee();

    public Coffee WithCream()
    {
        _cream = true;
        return this;
    }

    public Coffee WithOuncesToServe(int ounces)
    {
        _ounces = ounces;
        return this;
    }
}
var myMorningCoffee = Coffee.Make.WithCream().WithOuncesToServe(16);
Which reminds me, I need a coffee now.

return this is not all there is to fluent interfaces.
Chaining methods is a simplistic form of building a fluent API, but fluent APIs generally look like DSLs (domain specific languages) and are much, much harder to design.
Take Moq as an example:
new Mock<IInterface>()
    .Setup(x => x.Method())
    .Callback<IInterface>(Console.WriteLine)
    .Returns(someValue);
The Setup method, defined on the type Mock<T>, returns an instance of ISetup<T, TResult>.
The Callback method, defined for ICallback<TMock, TResult> returns an instance of IReturnsThrows<TMock,TResult>. Note that ISetup<T, TResult> extends IReturnsThrows<TMock,TResult>.
Finally, Returns is defined on IReturns<TMock,TResult> and returns IReturnsResult<TMock>. Also note that IReturnsThrows<TMock,TResult> extends IReturnsResult<TMock>.
All these little nuances are there to force you to call these methods in a particular order, and to forbid you from calling Setup twice in a row, for example. Or from calling Returns before you call Setup.
These details are very important to ensure a good user experience.
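A stripped-down sketch of the technique (hypothetical interface and class names, not Moq's actual types): the entry point returns a narrow interface, and the terminal method returns void, so the compiler itself rules out calls in the wrong order.

```csharp
using System;

// Hypothetical types illustrating how return types can enforce call order.
public interface IReturnsStep<TResult>
{
    void Returns(TResult value); // terminal: returns void, so the chain ends here
}

public interface ISetupStep<TResult> : IReturnsStep<TResult>
{
    ISetupStep<TResult> Callback(Action callback);
}

public class FakeBuilder<T>
{
    // Setup is the only entry point, so Returns cannot be called first.
    public ISetupStep<T> Setup() => new SetupStep();

    private class SetupStep : ISetupStep<T>
    {
        public ISetupStep<T> Callback(Action callback)
        {
            callback();
            return this;
        }

        public void Returns(T value)
        {
            // record the configured value
        }
    }
}
```

With this shape, `new FakeBuilder<int>().Setup().Callback(...).Returns(42)` compiles, but chaining anything after Returns, or calling Returns directly on the builder, will not.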
To read more on designing fluent interfaces, take a look at Martin Fowler's article on FluentInterface. FluentAssertions is another prime example of how complex the design might get - but also of how much more readable the outcome is.

Nope, that's pretty much it.
The idea behind it is that you can chain method calls together, manipulating internal state as you go. Ultimately the main goal of a fluent interface is readability, LINQ being a very good example of one.
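For instance, each LINQ operator returns an IEnumerable&lt;T&gt;, which is exactly what lets the calls chain:

```csharp
using System.Linq;

var squaresOfEvens = Enumerable.Range(1, 10)
    .Where(n => n % 2 == 0)  // returns IEnumerable<int>...
    .Select(n => n * n)      // ...so the next operator chains straight on
    .ToList();               // 4, 16, 36, 64, 100
```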

Related

Property initialisation anti-pattern

Now and again I end up with code along these lines, where I create some objects then loop through them to initialise some properties using another class...
ThingRepository thingRepos = new ThingRepository();
GizmoProcessor gizmoProcessor = new GizmoProcessor();
WidgetProcessor widgetProcessor = new WidgetProcessor();

public List<Thing> GetThings(DateTime date)
{
    List<Thing> allThings = thingRepos.FetchThings();

    // Loops through setting thing.Gizmo to a new Gizmo
    gizmoProcessor.AddGizmosToThings(allThings);

    // Loops through setting thing.Widget to a new Widget
    widgetProcessor.AddWidgetsToThings(allThings);

    return allThings;
}
...which just, well, feels wrong.
Is this a bad idea?
Is there a name of an anti-pattern that I'm using here?
What are the alternatives?
Edit: assume that both GizmoProcessor and WidgetProcessor have to go off and do some calculation, and get some extra data from other tables. They're not just data stored in a repository. They're creating new Gizmos and Widgets based on each Thing and assigning them to Thing's properties.
The reason this feels odd to me is that Thing isn't an autonomous object; it can't create itself and child objects. It's requiring higher-up code to create a fully finished object. I'm not sure if that's a bad thing or not!
ThingRepository is supposed to be the single access point to get collections of Things, or at least that's where developers will intuitively look. For that reason, it feels strange that GetThings(DateTime date) should be provided by another object. I'd rather place that method in ThingRepository itself.
The fact that the Things returned by GetThings(DateTime date) are different, "fatter" animals than those returned by ThingRepository.FetchThings() also feels awkward and counter-intuitive. If Gizmo and Widget are really part of the Thing entity, you should be able to access them every time you have an instance of Thing, not just for instances returned by GetThings(DateTime date).
If the Date parameter in GetThings() isn't important or could be gathered at another time, I would use calculated properties on Thing to implement on-demand access to Gizmo and Widget:
public class Thing
{
    //...

    public Gizmo Gizmo
    {
        get
        {
            // calculations here
        }
    }

    public Widget Widget
    {
        get
        {
            // calculations here
        }
    }
}
Note that this approach is valid as long as the calculations performed are not too costly. Calculated properties with expensive processing are not recommended - see http://msdn.microsoft.com/en-us/library/bzwdh01d%28VS.71%29.aspx#cpconpropertyusageguidelinesanchor1
However, these calculations don't have to be implemented inline in the getters - they can be delegated to third-party Gizmo/Widget processors, potentially with a caching strategy, etc.
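One way to get that on-demand access plus caching is Lazy&lt;T&gt;. A sketch, assuming the Gizmo calculation is deterministic for a given Thing (Gizmo and BuildGizmo here are stand-ins, not code from the question):

```csharp
using System;

public class Gizmo { }

public class Thing
{
    private readonly Lazy<Gizmo> _gizmo;

    public Thing()
    {
        // The expensive calculation runs once, on first access, and is then cached.
        _gizmo = new Lazy<Gizmo>(BuildGizmo);
    }

    public Gizmo Gizmo => _gizmo.Value;

    private Gizmo BuildGizmo()
    {
        // expensive calculations here
        return new Gizmo();
    }
}
```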
If you have complex initialization then you could use a Strategy pattern. Here is a quick overview adapted from this strategy pattern overview.
Create a strategy interface to abstract the initialization:
public interface IThingInitializationStrategy
{
    void Initialize(Thing thing);
}
The initialization implementations that can be used by the strategy:
public class GizmosInitialization : IThingInitializationStrategy
{
    public void Initialize(Thing thing)
    {
        // Add gizmos here and other initialization
    }
}

public class WidgetsInitialization : IThingInitializationStrategy
{
    public void Initialize(Thing thing)
    {
        // Add widgets here and other initialization
    }
}
And finally a service class that accepts the strategy implementation in an abstract way
internal class ThingInitializationService
{
    private readonly IThingInitializationStrategy _initStrategy;

    public ThingInitializationService(IThingInitializationStrategy initStrategy)
    {
        _initStrategy = initStrategy;
    }

    public void Initialize(Thing thing)
    {
        _initStrategy.Initialize(thing);
    }
}
You can then use the initialization strategies like so:

var initializationStrategy = new GizmosInitialization();
var initializationService = new ThingInitializationService(initializationStrategy);

List<Thing> allThings = thingRepos.FetchThings();
allThings.ForEach(thing => initializationService.Initialize(thing));
The only real potential problem is that you're iterating over the same list multiple times, but if you need to hit a database to get all the gizmos and widgets then it might be more efficient to request them in batches, so passing the full list to your Add... methods would make sense.
The other option would be to look into returning the gizmos and widgets with the thing in the first repository call (assuming they reside in the same repo). It might make the query more complex, but it would probably be more efficient. Unless of course you don't ALWAYS need to get gizmos and widgets when you fetch things.
To answer your questions:
Is this a bad idea?
From my experience, you rarely know if it's a good/bad idea until you need to change it.
IMO, code is either: Over-engineered, under-engineered, or unreadable
In the meantime, you do your best and stick to the best practices (KISS, single responsibility, etc)
Personally, I don't think the processor classes should be modifying the state of any Thing.
I also don't think the processor classes should be given a collection of Things to modify.
Is there a name of an anti-pattern that I'm using here?
Sorry, unable to help.
What are the alternatives?
Personally, I would write the code as such:
public List<Thing> GetThings(DateTime date)
{
    List<Thing> allThings = thingRepos.FetchThings();

    // Build the gizmo and widget for each thing
    foreach (var thing in allThings)
    {
        thing.Gizmo = gizmoProcessor.BuildGizmo(thing);
        thing.Widget = widgetProcessor.BuildWidget(thing);
    }

    return allThings;
}
My reasons being:
The code is in a class that "Gets things". So logically, I think it's acceptable for it to traverse each Thing object and initialise them.
The intention is clear: I'm initialising the properties for each Thing before returning them.
I prefer initialising any properties of Thing in a central location.
I don't think that gizmoProcessor and widgetProcessor classes should have any business with a Collection of Things
I prefer the Processors to have a method to build and return a single widget/gizmo
However, if your processor classes are building several properties at once, only then would I refactor the property initialisation into each processor.
public List<Thing> GetThings(DateTime date)
{
    List<Thing> allThings = thingRepos.FetchThings();

    // Build the gizmo and widget for each thing
    foreach (var thing in allThings)
    {
        // [Edited]
        // Notice a trend here: the common Initialize(Thing) interface
        // could probably be refactored into some
        // super-mega-complex Composite Builder-esque class should you ever want to
        gizmoProcessor.Initialize(thing);
        widgetProcessor.Initialize(thing);
    }

    return allThings;
}
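That "super-mega-complex Composite Builder-esque class" need not be complex at all. Here is a sketch of a composite over the IThingInitializationStrategy interface from the Strategy answer above (the interface and a stub Thing are repeated so the snippet stands alone; the class name is mine):

```csharp
public class Thing { }

public interface IThingInitializationStrategy
{
    void Initialize(Thing thing);
}

// A composite that runs any number of initializers as if they were one.
public class CompositeInitialization : IThingInitializationStrategy
{
    private readonly IThingInitializationStrategy[] _strategies;

    public CompositeInitialization(params IThingInitializationStrategy[] strategies)
    {
        _strategies = strategies;
    }

    public void Initialize(Thing thing)
    {
        foreach (var strategy in _strategies)
        {
            strategy.Initialize(thing);
        }
    }
}

// Usage (GizmosInitialization/WidgetsInitialization as in the Strategy answer):
// var init = new CompositeInitialization(
//     new GizmosInitialization(),
//     new WidgetsInitialization());
// allThings.ForEach(init.Initialize);
```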
P.s.:
I personally do not care that much for (anti-)pattern names.
While it helps to discuss a problem at a higher level of abstraction, I wouldn't commit every (anti-)pattern name to memory.
When I come across a pattern that I believe is helpful, only then do I remember it.
I'm quite lazy, and my rationale is: why bother remembering every pattern and anti-pattern if I'm only going to use a handful?
[Edit]
Noticed an answer was already given regarding using a Strategy Service.

Is there a way to protect Unit test names that follows MethodName_Condition_ExpectedBehaviour pattern against refactoring?

I follow the naming convention of
MethodName_Condition_ExpectedBehaviour
when it comes to naming my unit tests that test specific methods.
For example:
[TestMethod]
public void GetCity_TakesParisId_ReturnsParis() {...}
But when I need to rename the method under test, tools like ReSharper do not offer to rename those tests.
Is there a way to prevent such cases from appearing after renaming? Like changing ReSharper settings, or following a better unit-test naming convention, etc.?
A recent pattern is to group tests into inner classes by the method they test.
For example (omitting test attributes):
public class CityGetterTests
{
    public class GetCity
    {
        public void TakesParisId_ReturnsParis()
        {
            //...
        }

        // More GetCity tests
    }
}
See Structuring Unit Tests from Phil Haack's blog for details.
The neat thing about this layout is that, when the method name changes,
you'll only have to change the name of the inner class instead of all
the individual tests.
I also started with this convention, but ended up feeling it is not very good. Now I use BDD-styled names like should_return_Paris_for_ParisID.
That makes my tests more readable and also allows me to refactor method names without worrying about my tests :)
I think the key here is what you should be testing.
You've mentioned TDD in the tags, so I hope that we're trying to adhere to that here. By that paradigm, the tests you're writing have two purposes:
To support your code once it is written, so you can refactor without fearing that you've broken something
To guide us to a better way of designing components - writing the test first really forces you to think about what is necessary for solving the problem at hand.
I know at first it looks like this question is about the first point, but really I think it's about the second. The problem you're having is that you've got concrete components you're testing instead of a contract.
In code terms, that means that I think we should be testing interfaces instead of class methods, because otherwise we expose our test to a variety of problems associated with testing components instead of contracts - inheritance strategies, object construction, and here, renaming.
It's true that interface names will change as well, but they'll be a lot more rigid than method names. What TDD gives us here isn't just a way to support change through a test harness - it provides the insight to realise we might be going about it the wrong way!
Take for example the code block you gave:
[TestMethod]
public void GetCity_TakesParisId_ReturnsParis()
{
    // some test logic here
}
And let's say we're testing the method GetCity() on our object, CityObtainer - when did I set this object up? Why have I done so? If I realise GetMatchingCity() is a better name, then you have the problem outlined above!
The solution I'm proposing is that we think about what this method really means earlier in the process, by use of interfaces:
public interface ICityObtainer
{
    City GetMatchingCity();
}
By writing in this "outside-in" style, we're forced to think about what we want from the object a lot earlier in the process, and its becoming the focus should reduce its volatility. This doesn't eliminate your problem, but it may mitigate it somewhat (and, I think, it's a better approach anyway).
Ideally, we go a step further, and we don't even write any code before starting the test:
[TestMethod]
public void GetCity_TakesParisId_ReturnsParis()
{
    ICityObtainer cityObtainer = new CityObtainer();
    var result = cityObtainer.GetCity("paris");
    Assert.That(result.Name, Is.EqualTo("paris"));
}
This way, I can see what I really want from the component before I even start writing it - if GetCity() isn't really what I want, but rather GetCityByID(), it would become apparent a lot earlier in the process. As I said above, it isn't foolproof, but it might reduce the pain for this particular case a bit.
Once you've gone through that, I feel that if you're changing the name of the method, it's because you're changing the terms of the contract, and that means you should have to go back and reconsider the test (since it's possible you didn't want to change it).
(As a quick addendum: if we're writing a test with TDD in mind, then something with a significant amount of logic is happening inside GetCity(). Thinking about the test as being against a contract helps us to separate the intention from the implementation - the test will stay valid no matter what we change behind the interface!)
I'm late, but maybe this can still be useful. Here's my solution (assuming you are using at least xUnit).
First create an attribute FactFor that extends the xUnit Fact:
public class FactForAttribute : FactAttribute
{
    public FactForAttribute(string methodName = "Constructor", [CallerMemberName] string testMethodName = "")
        => DisplayName = $"{methodName}_{testMethodName}";
}
The trick now is to use the nameof operator to make refactoring possible. For example:
public class A
{
    public int Just2() => 2;
}

public class ATests
{
    [FactFor(nameof(A.Just2))]
    public void Should_Return2()
    {
        var a = new A();
        a.Just2().Should().Be(2);
    }
}
That's the result: the test shows up in the runner with the display name Just2_Should_Return2.

Would creating a method "template" (by passing some method to the template method) be considered bad design?

I had a difficult time determining a good title, so feel free to change it if necessary. I wasn't really sure how to describe what I'm trying to achieve and the word "template" came to mind (obviously I'm not trying to use C++ templates).
If I have a class that performs some action in every method - let's say a try/catch and some other stuff:
public class SomeService
{
    public bool Create(Entity entity)
    {
        try
        {
            this.repository.Add(entity);
            this.repository.Save();
            return true;
        }
        catch (Exception e)
        {
            return false;
        }
    }
}
Then I add another method:
public bool Delete(Entity entity)
{
    try
    {
        this.repository.Remove(entity);
        this.repository.Save();
        return true;
    }
    catch (Exception e)
    {
        return false;
    }
}
There's obviously a pattern in the methods here: try/catch with the return values. So I was thinking that since all methods on the service need to implement this pattern of working, could I refactor it into something like this instead:
public class SomeService
{
    public bool Delete(Entity entity)
    {
        return this.ServiceRequest(() =>
        {
            this.repository.Remove(entity);
            this.repository.Save();
        });
    }

    public bool Create(Entity entity)
    {
        return this.ServiceRequest(() =>
        {
            this.repository.Add(entity);
            this.repository.Save();
        });
    }

    protected bool ServiceRequest(Action action)
    {
        try
        {
            action();
            return true;
        }
        catch (Exception e)
        {
            return false;
        }
    }
}
This way all methods follow the same "template" for execution. Is this a bad design? Remember, the try/catch isn't all that could happen in each method. Think of adding validation: there would be the need to say if (!this.Validate(entity)) ... in each method.
Is this too difficult to maintain/ugly/bad design?
Using lambda expressions usually reduces readability. Which basically means that in a few months someone will read this and get a headache.
If it's not necessary, or there's no real performance benefit, just use the two separate functions. IMO it's better to have readable code than to use nifty techniques.
This seems like a technique that would be limited to only small "actions" -- the more code in the "action" the less useful this would be as readability would be more and more compromised. In fact, the only thing you're really reusing here is the try/catch block which is arguably bad design in the first place.
That's not to say that it's necessarily a bad design pattern, just that your example doesn't really seem to be a good fit for it. LINQ, for example, uses this pattern extensively. In combination with extension methods and the fluent style it can be very handy and still remain readable. Again, though, it seems best suited to replace small "actions" -- anything more than a couple of lines of code and I think it gets pretty messy.
If you are going to do it you might want to make it more useful by passing in both the action and the entity the action uses as parameters instead of just the action. That would make it more likely that you could do additional, common computations in your action.
public bool Delete(Entity entity)
{
    return this.ServiceRequest(e =>
    {
        this.repository.Remove(e);
        this.repository.Save();
    }, entity);
}

protected bool ServiceRequest(Action<Entity> action, Entity entity)
{
    try
    {
        this.Validate(entity);
        action(entity);
        return true;
    }
    catch (SqlException) // only catch specific exceptions
    {
        return false;
    }
}
I would try to look for a way to separate the repository action (add/update/delete) from flushing the repository changes (save).
Depending on how you use your code (web/windows) you might be able to use a 'session' manager for this. Having this separation will also allow you to have multiple actions flushed in a single transaction.
One other thing, related not to the topic but to the code: don't return true/false. Either let exceptions pass through, or return something that will allow you to distinguish the cause of failure (validation, etc.). You might want to throw on a contract breach (invalid data passed) and return a value for normal business-rule violations (so as not to use exceptions for business rules, as they are slow).
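For example, instead of a bare bool the service could hand back a small result type that names the failure cause (a sketch with hypothetical names, not code from the question):

```csharp
public enum ServiceStatus
{
    Success,
    ValidationFailed,   // business-rule violation: no exception thrown
    PersistenceFailed   // e.g. mapped from a caught SqlException
}

public class ServiceResult
{
    public ServiceStatus Status { get; }
    public string Message { get; }

    public ServiceResult(ServiceStatus status, string message = "")
    {
        Status = status;
        Message = message;
    }

    public bool Succeeded => Status == ServiceStatus.Success;
}
```

Callers can then branch on Status instead of guessing why a bool came back false, while genuine contract breaches still throw.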
An alternative would be to create an interface, say IExample, which expresses more of your intent and decouples the action from the actor. In your example at the end you mention perhaps using this.Validate(entity). Clearly that won't work with your current design, as you'd have to pass both action and entity and then pass entity to action.
If you express it as an interface on entity you simply pass any entity that implements IExample.
public interface IExample
{
    bool Validate();
    void Action();
}
Now entity implements IExample, ServiceRequest takes an IExample as its parameter, and Bob's your uncle.
Your original design isn't bad at all, it's perfectly common actually, but becomes restrictive as requirements change (this time action has to be called twice, this time it needs to call validation and then postvalidation). By expressing it through an interface you make it testable and the pattern can be moved to a helper class designed to replay this particular sequence. Once extracted it also becomes testable. It also means if the requirements suddenly require postvalidation to be called you can revisit the entire design.
Finally, if the pattern isn't being applied a lot - for example, you have perhaps three places in the class - then it might not be worth the effort; just code each long hand. Interestingly, because of the way things are JITted, it might be faster to have three distinct and complete methods rather than three that all share a common core...

C#: using type of "self" as generic parameter?

This may seem a bit odd, but I really need to create a workaround for the very complicated duplex-communication handling in C#, especially to force other developers to observe the DRY principle.
So what I'm doing is to have a type-based multiton that looks like this:
internal sealed class SessionManager<T> where T : DuplexServiceBase
which is no problem at all - so far.
However, as soon as I want to have the services (I'm going with one instance per session) register themselves with the SessionManager, the hassle starts:
internal abstract class DuplexServiceBase : MessageDispatcherBase<Action>
(MessageDispatcherBase being a class of mine that creates a thread and asynchronously sends messages).
I want to have a method that looks like this:
protected void ProcessInboundMessage()
{
    // Connect
    SessionManager<self>.Current.Connect(this);
}
...but the problem is - how would I get to the "self"?
I really NEED separate session managers for each service class, because they all have their own notifications (basically it's the very annoying "NotifyAllClients" method that has made me want to pull my own hair out for the last few hours) and need to be treated separately.
Do you have ANY ideas?
I don't want to use "AsyncPattern = true", btw... this would require me to give up type safety and enforced contract compliance (which would lead to very bad abuse of the communication system I'm setting up here) and would mean abandoning the DRY principle; there would be a lot of repetitive code all over the place, and that is something I seriously frown upon.
Edit:
I have found the best possible solution, thanks to the answers here - it's an EXTENSION METHOD, hehe...
public static SessionManager<T> GetSessionManager<T>(this T sessionObject)
    where T : DuplexServiceBase
{
    return SessionManager<T>.Current;
}
I can use it like this:
this.GetSessionManager().Connect(this);
Mission accomplished. :-D
This method (belongs to DuplexServiceBase) gives me the session manager I want to work with. Perfect! :-)
I'd write a helper method:
static class SessionManager // non-generic!
{
    public static void Connect<T>(T item) where T : DuplexServiceBase
    {
        SessionManager<T>.Current.Connect(item);
    }
}
and use SessionManager.Connect(this) which will figure it out automatically via generic type inference.
You could wrap the call in a generic method, thereby taking advantage of the compiler's type inference:
private static void ConnectSessionManager<T>(T service)
    where T : DuplexServiceBase
{
    SessionManager<T>.Current.Connect(service);
}

protected void ProcessInboundMessage()
{
    // Connect
    ConnectSessionManager(this);
}

TDD approach for complex function

I have a method in a class for which there are a few different outcomes (based upon event responses etc.). But this is a single atomic function which is to be used by other applications.
I have broken down the main blocks of the functionality that comprise this function into different functions and successfully taken a Test Driven Development approach to each of these elements. These elements, however, aren't exposed for other applications to use.
And so my question is: how can/should I easily approach a TDD-style solution to verifying that the single method that should be called does function correctly, without a lot of duplication in testing or lots of setup required for each test?
I have considered / looked at moving the blocks of functionality into a different class and using mocking to simulate the responses of the functions used, but it doesn't feel right, and the individual methods need to write to variables within the main class (it felt really Heath Robinson).
The code roughly looks like this (I have removed a lot of parameters to make things clearer, along with a fair bit of irrelevant code):
public void MethodToTest(string parameter)
{
    IResponse x = null;

    if (function1(parameter))
    {
        if (!function2(parameter, out x))
        {
            function3(parameter, out x);
        }
    }

    // ...
    // more bits of code here
    // ...

    if (x != null)
    {
        x.Success();
    }
}
I think you would make your life easier by avoiding the out keyword, and re-writing the code so that the functions either check some condition on the response, OR modify the response, but not both. Something like:
public void MethodToTest(string parameter)
{
    IResponse x = null;

    if (function1(parameter))
    {
        if (!function2Check(parameter, x))
        {
            x = function2Transform(parameter, x);
            x = function3(parameter, x);
        }
    }

    // ...
    // more bits of code here
    // ...

    if (x != null)
    {
        x.Success();
    }
}
That way you can start pulling apart and recombining the pieces of your large method more easily, and in the end you should have something like:
public void MethodToTest(string parameter)
{
    IResponse x = ResponseBuilder.BuildResponse(parameter);

    if (x != null)
    {
        x.Success();
    }
}
...where BuildResponse is where all your current tests will be, and for the test of MethodToTest it should now be fairly easy to mock the ResponseBuilder.
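Putting the builder behind an interface makes that mock straightforward. A sketch (IResponseBuilder, Handler, and the constructor injection are my assumptions, not code from the question):

```csharp
public interface IResponse
{
    void Success();
}

public interface IResponseBuilder
{
    IResponse BuildResponse(string parameter);
}

// The class under test receives the builder as a dependency.
public class Handler
{
    private readonly IResponseBuilder _builder;

    public Handler(IResponseBuilder builder)
    {
        _builder = builder;
    }

    public void MethodToTest(string parameter)
    {
        IResponse x = _builder.BuildResponse(parameter);
        if (x != null)
        {
            x.Success();
        }
    }
}

// With Moq, the top-level test then only checks the orchestration:
// var response = new Mock<IResponse>();
// var builder = new Mock<IResponseBuilder>();
// builder.Setup(b => b.BuildResponse("p")).Returns(response.Object);
// new Handler(builder.Object).MethodToTest("p");
// response.Verify(r => r.Success(), Times.Once);
```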
Your best option would indeed be mocking function1, 2, 3, etc. If you cannot move your functions to a separate class, you could look into moving them to nested classes; they are able to access the data in the outer class. After that you should be able to use mocks in place of the nested classes for testing purposes.
Update: from looking at your example code, I think you could get some inspiration by looking into the visitor pattern and ways of testing that; it might be appropriate.
In this case I think you would just mock the method calls as you mentioned.
Typically you would write your test first, and then write the method in a way so that all of the tests pass. I've noticed that when you do it this way, the code that's written is very clean and to the point. Also, each class is very good about only having a single responsibility that can easily be tested.
I don't know what's wrong, but something doesn't smell right, and I think there maybe a more elegant way to do what you're doing.
IMHO, you have a couple options here:
Break the inner functions out into a different class so you can mock them and verify that they are called. (which you already mentioned)
It sounds like the other methods you created are private methods, and that this is the only public interface into those methods. If so, you should be running those test cases through this function, and verifying the results (you said that those private methods modify variables of the class) instead of testing private methods. If that is too painful, then I would consider reworking your design.
It looks to me like this class is trying to do more than one thing. For example, the first function doesn't return a response but the other two do. In your description you said the function is complex and takes a lot of parameters. Those are both signs that you need to refactor your design.
