Should factories set model properties? - c#

As part of an overall S.O.L.I.D. programming effort I created a factory interface & an abstract factory within a base framework API.
People are already starting to overload the factory's Create method. The problem is that they are overloading Create with model properties (and thereby expecting the factory to populate them).
In my opinion, property setting should not be done by the factory. Am I wrong?
public interface IFactory
{
I Create<C, I>();
I Create<C, I>(long id); //<--- I feel doing this is incorrect
IFactoryTransformer Transformer { get; }
IFactoryDataAccessor DataAccessor { get; }
IFactoryValidator Validator { get; }
}
UPDATE - For those unfamiliar with SOLID principles, here are a few of them:
Single Responsibility Principle
It states that every object should have a single responsibility, and that responsibility should be entirely encapsulated by the class.
Open/Closed Principle
The meaning of this principle is that when you get a request for a feature that needs to be added to your application, you should be able to handle it without modifying old classes, only by adding subclasses and new implementations.
Dependency Inversion Principle
It says that you should decouple your software modules. To achieve that you’d need to isolate dependencies.
Overall:
I'm 90% sure I know the answer. However, I would like some good discussion from people already using SOLID. Thank you for your valuable opinions.
UPDATE - So what do I think a SOLID factory should do?
IMHO a SOLID factory serves up appropriate object instances...but does so in a manner that hides the complexity of object instantiation. For example, if you have an Employee model...you would ask the factory to get you the appropriate one. The DataAccessorFactory would give you the correct data-access object, the ValidatorFactory would give you the correct validation object, etc.
For example:
var employee = Factory.Create<ExxonMobilEmployee, IEmployee>();
var dataAccessorLdap = Factory.DataAccessor.Create<LDAP, IEmployee>();
var dataAccessorSqlServer = Factory.DataAccessor.Create<SqlServer, IEmployee>();
var validator = Factory.Validator.Create<ExxonMobilEmployee, IEmployee>();
Taking the example further we would...
var audit = new Framework.Audit();   // Or have the factory hand it to you
var result = new Framework.Result(); // Or have the factory hand it to you
// Save your AuditInfo
audit.Username = "prisonerzero";
// Get from LDAP (example only)
employee.Id = 10;
result = dataAccessorLdap.Get(employee, audit);
employee = result.Instance; // All operations use the same Result object
// Update model
employee.FirstName = "Scooby";
employee.LastName = "Doo";
// Validate
result = validator.Validate(employee);
// Save to SQL (only if validation passed)
if (!result.HasErrors)
    dataAccessorSqlServer.Add(employee, audit);
UPDATE - So why am I adamant about this separation?
I feel segregating responsibilities makes for smaller objects and smaller unit tests, and it enhances reliability and maintainability. I recognize it does so at the cost of creating more objects...but that is what the SOLID factory protects me from...it hides the complexity of gathering and instantiating those objects.

I'd say it's sticking to the DRY principle, and as long as it's simple value wiring I don't see it as a problem or a violation. Instead of having
var model = this.factory.Create();
model.Id = 10;
model.Name = "X20";
scattered all around your code base, it's almost always better to have it in one place. Future contract changes, refactorings or new requirements will be much easier to handle.
It's worth noting that if creating an object and then immediately setting its properties is common, then that's a pattern your team has evolved, and developers adding overloads is only a response to this fact (notably, a good one). Introducing an API to simplify this process is what should be done.
And again, if it narrows down to simple assignments (like in your example) I wouldn't hesitate to keep the overloads, especially if it's something you notice often. When things get more complicated, it would be a sign of new patterns being discovered and perhaps then you should resort to other standard solutions (like the builder pattern for example).
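When it does get more complicated, a minimal builder sketch could look like the following (this assumes, purely for illustration, that IEmployee exposes settable Id and Name properties):
public class EmployeeBuilder
{
    private readonly IFactory factory;
    private long id;
    private string name;

    public EmployeeBuilder(IFactory factory)
    {
        this.factory = factory;
    }

    // Collect the simple assignments in one place instead of adding more Create() overloads.
    public EmployeeBuilder WithId(long id) { this.id = id; return this; }
    public EmployeeBuilder WithName(string name) { this.name = name; return this; }

    public IEmployee Build()
    {
        var employee = factory.Create<ExxonMobilEmployee, IEmployee>();
        employee.Id = id;
        employee.Name = name;
        return employee;
    }
}
Usage: var employee = new EmployeeBuilder(factory).WithId(10).WithName("X20").Build();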

Assuming that your factory interface is used from application code (as opposed to infrastructural Composition Root), it actually represents a Service Locator, which can be considered an anti-pattern with respect to Dependency Injection. See also Service Locator: roles vs. mechanics.
Note that code like the following:
var employee = Factory.Create<ExxonMobilEmployee, IEmployee>();
is just syntax sugar. It doesn't remove dependency on concrete ExxonMobilEmployee implementation.
You might also be interested in Weakly-typed versus Strongly-typed Message Channels and Message Dispatching without Service Location (those illustrate how such interfaces violate the SRP) and other publications by Mark Seemann.
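To illustrate the point with a sketch (not code from your framework): with the locator style the consuming code still names the concrete type, while with plain constructor injection only the Composition Root does.
// Service Locator style: the consuming code still names ExxonMobilEmployee.
// var employee = Factory.Create<ExxonMobilEmployee, IEmployee>();

// Constructor injection style: the consuming class sees only the abstraction.
public class PayrollService
{
    private readonly IEmployee employee;

    public PayrollService(IEmployee employee)
    {
        this.employee = employee;
    }
}

// Composition Root (the only place that knows the concrete type):
// var service = new PayrollService(new ExxonMobilEmployee());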

After about 6 months of experience with Dependency Injection, I've only discovered a few cases where factories should set properties:
If the setter is marked as internal, and the properties are expected to be set once by the factory only. This usually happens on interfaces with only getter properties whose implementations are expected to be created through a factory (see the sketch after this list).
When the model uses property injection. I rarely see classes that use property injection (I also try to avoid building these), but when I do see one, and the needed service is only available elsewhere, it's a case where you have no choice.
For the bottom line, leave public setters out of factories. Only set properties that are marked as internal. Let the clients decide which properties they need to set, if they are allowed to do so. This will keep your factories clean of unneeded functions.
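As a sketch of the first case (names are illustrative, not from the original post): the interface exposes only a getter, the concrete class has an internal setter, and only the factory in the same assembly assigns it.
public interface IEmployee
{
    long Id { get; }
}

public class Employee : IEmployee
{
    public long Id { get; internal set; }
}

public class EmployeeFactory
{
    public IEmployee Create(long id)
    {
        // Only the factory sets Id; callers see a read-only property.
        return new Employee { Id = id };
    }
}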

Related

Implementing a domain service with DDD in C#

I'm working on a domain model, writing my software all DDD and stuff and doing a great job, when I suddenly bump into the same problem I have been facing over and over again, and now it's time to share some insights. The root of the problem lies in the uniqueness of data.
For example, let's say we're writing this awesome domain model for a user. Obviously the username is unique, and just to be as flexible as possible we want the user to be able to change his username, so I implemented the following method:
public void SetUsername(string value)
{
    if (string.IsNullOrWhiteSpace(value))
    {
        throw new UserException(UserErrorCode.UsernameNullOrEmpty,
            "The username cannot be null or empty");
    }
    if (!Regex.IsMatch(value, RegularExpressions.Username))
    {
        throw new UserException(UserErrorCode.InvalidUsername,
            $"The username {value} does not meet the required format");
    }
    if (!Equals(Username, value))
    {
        Username = value;
        SetState(TrackingState.Modified);
    }
}
Again, this is all fine and fancy, but this function lacks the ability to check if the username is unique or not. So writing all these nice articles about DDD, this would be a nice use-case for a Domain Service. Ideally, I would inject that service using dependency injection but this ruins the constructor of my domain model. Alternatively, I can demand an instance of a domain service as a function argument like so: public void SetUsername(string value, IUsersDomainService service) and to be honest I don't see any solid alternatives.
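For concreteness, that method-injection variant might look like this (IsUsernameTaken and the UsernameNotUnique error code are assumptions for the sake of the example):
public void SetUsername(string value, IUsersDomainService service)
{
    if (string.IsNullOrWhiteSpace(value))
    {
        throw new UserException(UserErrorCode.UsernameNullOrEmpty,
            "The username cannot be null or empty");
    }
    // Hypothetical uniqueness check on the injected domain service.
    if (service.IsUsernameTaken(value))
    {
        throw new UserException(UserErrorCode.UsernameNotUnique,
            $"The username {value} is already in use");
    }
    if (!Equals(Username, value))
    {
        Username = value;
        SetState(TrackingState.Modified);
    }
}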
Who has faced this problem and maybe came up with a nice rock-solid solution?
I agree with @TomTom. But as most times with software decisions, it depends; there is almost always a tradeoff. As a rule of thumb, you gain more by not injecting a domain service into an entity. This is a common question when one is starting with DDD and CQRS+ES, and it has been thoroughly discussed in the CQRS mailing list here.
However, there are some cases where the approach you suggested (known as method injection) might be beneficial, depending on the scenario. I'll try to lay out some analysis points next.
Consider the case where you want to make some validation before creating an entity. Let's think of a hypothetical and way oversimplified international banking context, with the following entity:
public class BankNote
{
    private BankNote() { }

    public static BankNote FromCurrency(
        Currency currency,
        ISupportedCurrencyService currencyService)
    {
        // Validate before constructing, so the entity is always created in a valid state.
        if (!currencyService.IsAvailable(currency))
            throw new NotSupportedException("The currency is not supported");

        return new BankNote();
    }
}
I am using the factory method pattern FromCurrency inside your entity to abstract the entity creation process and add some validation so that the entity is always created in the correct state.
Since the supported currencies might change over time, and the logic of which currencies are supported is a different responsibility than the bank note issuing logic, injecting the ISupportedCurrencyService into the factory method gives us the following benefits:
By the way, the method dependency injection for domain services is suggested in the book: Hands-On Domain-Driven Design with .NET Core
By Alexey Zimarev. Chapter 5 "Implementing the Model" page 120
Pros
The BankNote is always created with a supported Currency, even if the supported currencies change over time.
Since we are depending on an interface instead of a concrete implementation, we can easily swap and change the implementation without changing the entity.
The service is never stored as an instance variable of the class, so no risk of depending on it more than we need.
Cons
If we keep going this way we might add a lot of dependencies injected into the entity, and it will become hard to maintain over time.
We are still adding a loosely coupled dependency to the entity, and hence the entity now needs to know about that interface. We are violating the Single Responsibility Principle, and now you would need to mock the ISupportedCurrencyService to test the factory method.
We can't instantiate the entity without depending on a service implemented externally from the domain. This can cause serious memory leaks and performance issues depending on the scenario.
Another approach
You can avoid all the cons if you call the service before trying to instantiate the entity. That is, have a separate class for the factory instead of a factory method, make that factory use the ISupportedCurrencyService, and only then call the entity constructor.
public class BankNoteFactory
{
    private readonly ISupportedCurrencyService _currencyService;

    public BankNoteFactory(ISupportedCurrencyService currencyService)
        => _currencyService = currencyService;

    public BankNote FromCurrency(Currency currency)
    {
        // To call the constructor here you would also need
        // to change the constructor visibility to internal.
        if (_currencyService.IsAvailable(currency))
            return new BankNote(currency);

        throw new NotSupportedException("The currency is not supported");
    }
}
Using this approach you would end with one extra class and an entity that could be instantiated with unsupported currencies, but with better SRP compliance.

Can you have an interface be dependent on a class?

I'm studying SOLID principles and have a question about dependency management in relation to interfaces.
An example from the book I'm reading (Adaptive Code via C# by Gary McLean Hall) shows a TradeProcessor class that will get the trade data, process it, and store it in the database. The trade data is modeled by a class called TradeRecord. A TradeParser class will handle converting the trade data that is received into a TradeRecord instance(s). The TradeProcessor class only references an ITradeParser interface so that it is not dependent on the TradeParser implementation.
The author has the Parse method (in the ITradeParser interface) return an IEnumerable<TradeRecord> collection that holds the processed trade data. Doesn't that mean that ITradeParser is now dependent on the TradeRecord class?
Shouldn't the author have done something like make an ITradeRecord interface and have Parse return a collection of ITradeRecord instances? Or am I missing something important?
Here's the code (the implementation of TradeRecord is irrelevant so it is omitted):
TradeProcessor.cs
public class TradeProcessor
{
private readonly ITradeParser tradeParser;
public TradeProcessor(ITradeParser tradeParser)
{
this.tradeParser = tradeParser;
}
public void ProcessTrades()
{
IEnumerable<string> tradeData = new[] { "Simulated trade data..." };
var trades = tradeParser.Parse(tradeData);
// Do something with the parsed data...
}
}
ITradeParser.cs
public interface ITradeParser
{
IEnumerable<TradeRecord> Parse(IEnumerable<string> tradeData);
}
This is a good question that goes into the tradeoff between purity and practicality.
Yes, by pure principle, you can say that ITradeParser.Parse should return a collection of ITradeRecord interfaces. After all, why tie yourself to a specific implementation?
However, you can take this further. Should you accept an IEnumerable<string>? Or should you have some sort of ITextContainer? I32bitNumeric instead of int? This is reductio ad absurdum, of course, but it shows that we always, at some point, reach a point where we're working on something concrete (a number, a string, a TradeRecord, whatever), not an abstraction.
This also brings up the point of why we use interfaces in the first place, which is to define contracts for logic and functionality. An ITradeProcessor is a contract for an unknown implementation that can be replaced or updated. A TradeRecord isn't a contract for implementation, it is the implementation. If it's a DTO object, which it seems to be, there would be no difference between the interface and the implementation, which means there's no real purpose in defining this contract - it's implied in the concrete class.
The author has the Parse method (in the ITradeParser interface) return an IEnumerable collection that holds the processed trade data.
Doesn't that mean that ITradeParser is now dependent on the TradeRecord class?
Yes, ITradeParser is now tightly coupled with TradeRecord. Given the more academic approach of this question, I can see where you are coming from. But what is TradeRecord? A record, by definition, is generally a simple, non-intelligent piece of data (sometimes called POCO, DTO, or Model).
At some point, the potential gain of abstraction is less valuable than the complexities it causes. This approach is pretty common in practice - Models (as I refer to them) are sealed types that flow through the layers of an application. Layers that act upon the models are abstracted to interfaces, so that each layer may be mocked and tested separately.
For example, a client application may have a View, ViewModel, and Repository layer. Each layer knows how to work with the concrete record type. But the ViewModel could be wired up to work with a mocked IRepository, which builds up the concrete types with hardcoded, mocked data. There's no benefit to an abstracted IModel at this point - it just has straight data.
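As a rough sketch of that arrangement (the IRepository interface, the shape of the record and the fake are all illustrative):
using System.Collections.Generic;

// The layer boundary is abstracted...
public interface IRepository
{
    IEnumerable<TradeRecord> GetTrades();
}

// ...but the record itself stays a plain concrete model that flows through the layers.
public class TradeRecord
{
    public string Symbol { get; set; }
    public decimal Price { get; set; }
}

// A test double that hands back hardcoded concrete records.
public class FakeRepository : IRepository
{
    public IEnumerable<TradeRecord> GetTrades()
    {
        yield return new TradeRecord { Symbol = "ABC", Price = 12.34m };
    }
}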

Architecture to avoid creating repositories inside testable methods

Working on an MVVM application. Each ViewModel class has a constructor which accepts a Repository class so that it can be mocked out for unit testing.
The application is designed to be operated across several windows at once. So it contains a number of "View" or "Open" style methods which create new ViewModels and place them into new windows. Because these are triggered via the UI, they are often inside existing ViewModels. For instance:
public void ViewQuote(Quote quote)
{
if (quote.CreatedOn == null)
{
quote.CreatedOn = DateTime.Now;
}
NavigationHelper.NewWindow(this, new QuoteViewModel(quote, new Repository()));
}
Now, that flow control statement looks worth testing to ensure that quotes passed with a null CreatedOn date get assigned one. However, my test for this fails because although the parent ViewModel has a mocked Repository, the NewWindow method spins up a new ViewModel with a real-life Repository inside it. This then throws an error when it's used inside the constructor of that class.
There are two obvious options.
One is to pull out the date assignment into a stand-alone function to test. That'll work, but it seems too simplistic for its own function. Plus if I do it all over the application it risks creating too much fragmentation for easy readability.
The other is to somehow change the constructor code for ViewModels to not use the Repository directly. That might be an option here, but it's unlikely to be workable for every possible scenario.
Or is there a third way to design this better so I can pass a mocked Repository into the constructor of my new ViewModel?
Newing up services (or service-like objects such as repositories) is a design smell. And the problems that you're experiencing are the consequence.
In other words, you are lacking a clear and well-defined Composition Root.
Solution: Use proper dependency injection
The only clean solution to this is to inject services through the constructor. Repositories usually have a shorter lifecycle than the application itself, so in this case you would inject a factory that is able to create the repository.
Note that clear dependency trees are good design, but using a DI framework such as Autofac is only one technical solution to implement such a design. You can completely solve your problems and create a clean composition root without using a DI framework.
So although this is probably a lot of work, you should redesign your application to have a clear composition root. Otherwise, you will run into small issues over and over again, especially in the testing area.
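A sketch of what that could look like here, assuming QuoteViewModel can accept the repository abstraction (a Func<IRepository> is just one way to express the injected factory):
public class ParentViewModel
{
    private readonly Func<IRepository> repositoryFactory;

    // The factory comes from the Composition Root, so tests can hand in a fake.
    public ParentViewModel(Func<IRepository> repositoryFactory)
    {
        this.repositoryFactory = repositoryFactory;
    }

    public void ViewQuote(Quote quote)
    {
        if (quote.CreatedOn == null)
        {
            quote.CreatedOn = DateTime.Now;
        }
        NavigationHelper.NewWindow(this, new QuoteViewModel(quote, repositoryFactory()));
    }
}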
You could work around it by adding a public Repository NewRepository property; in your ViewQuote method, change it to look like this:
public void ViewQuote(Quote quote)
{
if (quote.CreatedOn == null)
{
quote.CreatedOn = DateTime.Now;
}
if (NewRepository == null) NewRepository = new Repository();
NavigationHelper.NewWindow(this, new QuoteViewModel(quote, NewRepository));
}
Then in your mock, just ensure that the public NewRepository property is assigned to before the test is run on that bit of code.
Not very elegant, but it would require the least amount of changing in my opinion.

Dependency Injection and the Strategy Pattern

There is an enormous amount of discussion on this topic, but everyone seems to miss an obvious answer. I'd like help vetting this "obvious" IOC container solution. The various conversations assume run-time selection of strategies and the use of an IOC container. I will continue with these assumptions.
I also want to add the assumption that it is not a single strategy that must be selected. Rather, I might need to retrieve an object-graph that has several strategies found throughout the nodes of the graph.
I will first quickly outline the two commonly proposed solutions, and then I will present the "obvious" alternative that I'd like to see an IOC container support. I will be using Unity as the example syntax, though my question is not specific to Unity.
Named Bindings
This approach requires that every new strategy has a binding manually added:
Container.RegisterType<IDataAccess, DefaultAccessor>();
Container.RegisterType<IDataAccess, AlphaAccessor>("Alpha");
Container.RegisterType<IDataAccess, BetaAccessor>("Beta");
...and then the correct strategy is explicitly requested:
var strategy = Container.Resolve<IDataAccess>("Alpha");
Pros: Simple, and supported by all IOC Containers
Cons:
Typically binds the caller to the IOC Container, and certainly requires the caller to know something about the strategy (such as the name "Alpha").
Every new strategy must be manually added to the list of bindings.
This approach is not suitable for handling multiple strategies in an object graph. In short, it does not meet requirements.
Abstract Factory
To illustrate this approach, assume the following classes:
public class DataAccessFactory
{
    public IDataAccess Create(string strategy)
    {
        // Insert appropriate creation logic here.
        throw new NotImplementedException();
    }

    public IDataAccess Create()
    {
        // Choose strategy through ambient context, such as thread-local storage.
        throw new NotImplementedException();
    }
}
public class Consumer
{
    public Consumer(DataAccessFactory dataFactory)
    {
        // Variation #1. Not sufficient to meet requirements.
        var namedStrategy = dataFactory.Create("Alpha");

        // Variation #2. This is sufficient for requirements.
        var ambientStrategy = dataFactory.Create();
    }
}
The IOC Container then has the following binding:
Container.RegisterType<DataAccessFactory>();
Pros:
The IOC Container is hidden from consumers
The "ambient context" is closer to the desired result but...
Cons:
The constructors of each strategy might have different needs. But now the responsibility of constructor injection has been transferred to the abstract factory from the container. In other words, every time a new strategy is added it may be necessary to modify the corresponding abstract factory.
Heavy use of strategies means heavy amounts of creating abstract factories. It would be nice if the IOC container simply gave a little more help.
If this is a multi-threaded application and the "ambient context" is indeed provided by thread-local-storage, then by the time an object is using an injected abstract-factory to create the type it needs, it may be operating on a different thread which no longer has access to the necessary thread-local-storage value.
Type Switching / Dynamic Binding
This is the approach that I want to use instead of the above two approaches. It involves providing a delegate as part of the IOC container binding. Most all IOC Containers already have this ability, but this specific approach has an important subtle difference.
The syntax would be something like this:
Container.RegisterType(typeof(IDataAccess),
new InjectionStrategy((c) =>
{
//Access ambient context (perhaps thread-local-storage) to determine
//the type of the strategy...
Type selectedStrategy = ...;
return selectedStrategy;
})
);
Notice that the InjectionStrategy is not returning an instance of IDataAccess. Instead it is returning a type description that implements IDataAccess. The IOC Container would then perform the usual creation and "build up" of that type, which might include other strategies being selected.
This is in contrast to the standard type-to-delegate binding which, in the case of Unity, is coded like this:
Container.RegisterType(typeof(IDataAccess),
new InjectionFactory((c) =>
{
//Access ambient context (perhaps thread-local-storage) to determine
//the type of the strategy...
IDataAccess instanceOfSelectedStrategy = ...;
return instanceOfSelectedStrategy;
})
);
The above actually comes close to satisfying the overall need, but definitely falls short of the hypothetical Unity InjectionStrategy.
Focusing on the first sample (which used a hypothetical Unity InjectionStrategy):
Pros:
Hides the container
No need either to create endless abstract factories, or have consumers fiddle with them.
No need to manually adjust IOC container bindings whenever a new strategy is available.
Allows the container to retain lifetime management controls.
Supports a pure DI story, which means that a multi-threaded app can create the entire object-graph on a thread with the proper thread-local-storage settings.
Cons:
Because the Type returned by the strategy was not available when the initial IOC container bindings were created, it means there may be a tiny performance hit the first time that type is returned. In other words, the container must on-the-spot reflect the type to discover what constructors it has, so that it knows how to inject it. All subsequent occurrences of that type should be fast, because the container can cache the results it found from the first time. This is hardly a "con" worth mentioning, but I'm trying for full-disclosure.
???
Is there an existing IOC container that can behave this way? Anyone have a Unity custom injection class that achieves this effect?
As far as I can tell, this question is about run-time selection or mapping of one of several candidate Strategies.
There's no reason to rely on a DI Container to do this, as there are at least three ways to do this in a container-agnostic way:
Use a Metadata Role Hint
Use a Role Interface Role Hint
Use a Partial Type Name Role Hint
My personal preference is the Partial Type Name Role Hint.
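For a rough, container-agnostic idea of what a Partial Type Name Role Hint can look like (this sketch is mine, not code from the linked material): scan the candidate implementations and pick the one whose type name starts with the requested hint.
using System;
using System.Collections.Generic;
using System.Linq;

public class PartialTypeNameDataAccessSelector
{
    private readonly IEnumerable<IDataAccess> candidates;

    public PartialTypeNameDataAccessSelector(IEnumerable<IDataAccess> candidates)
    {
        this.candidates = candidates;
    }

    public IDataAccess Select(string partialName)
    {
        // "Alpha" maps to AlphaAccessor, "Beta" to BetaAccessor, and so on.
        return this.candidates.First(c =>
            c.GetType().Name.StartsWith(partialName, StringComparison.OrdinalIgnoreCase));
    }
}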
I have achieved this requirement in many forms over the last couple of years. Firstly, let's pull out the main points I can see in your post:
assume run-time selection of strategies and the use of an IOC container ... add the assumption that it is not a single strategy that must be selected. Rather, I might need to retrieve an object-graph that has several strategies ... [must not] binds the caller to the IOC Container ... Every new strategy must [not need to] be manually added to the list of bindings ... It would be nice if the IOC container simply gave a little more help.
I have used Simple Injector as my container of choice for some time now and one of the drivers for this decision is that it has extensive support for generics. It is through this feature that we will implement your requirements.
I'm a firm believer that the code should speak for itself so I'll jump right in ...
I have defined an extra class ContainerResolvedClass<T> to demonstrate that Simple Injector finds the right implementation(s) and successfully injects them into a constructor. That is the only reason for the class ContainerResolvedClass<T>. (This class exposes the handlers that are injected into it for test purposes via result.Handlers.)
This first test requires that we get one implementation back for the fictional class Type1:
[Test]
public void CompositeHandlerForType1_Resolves_WithAlphaHandler()
{
var container = this.ContainerFactory();
var result = container.GetInstance<ContainerResolvedClass<Type1>>();
var handlers = result.Handlers.Select(x => x.GetType());
Assert.That(handlers.Count(), Is.EqualTo(1));
Assert.That(handlers.Contains(typeof(AlphaHandler<Type1>)), Is.True);
}
This second test requires that we get one implementation back for the fictional class Type2:
[Test]
public void CompositeHandlerForType2_Resolves_WithAlphaHandler()
{
var container = this.ContainerFactory();
var result = container.GetInstance<ContainerResolvedClass<Type2>>();
var handlers = result.Handlers.Select(x => x.GetType());
Assert.That(handlers.Count(), Is.EqualTo(1));
Assert.That(handlers.Contains(typeof(BetaHandler<Type2>)), Is.True);
}
This third test requires that we get two implementations back for the fictional class Type3:
[Test]
public void CompositeHandlerForType3_Resolves_WithAlphaAndBetaHandlers()
{
var container = this.ContainerFactory();
var result = container.GetInstance<ContainerResolvedClass<Type3>>();
var handlers = result.Handlers.Select(x => x.GetType());
Assert.That(handlers.Count(), Is.EqualTo(2));
Assert.That(handlers.Contains(typeof(AlphaHandler<Type3>)), Is.True);
Assert.That(handlers.Contains(typeof(BetaHandler<Type3>)), Is.True);
}
These tests seem to meet your requirements and, best of all, no containers are harmed in the solution.
The trick is to use a combination of parameter objects and marker interfaces. The parameter objects contain the data for the behaviour (i.e. the IHandler's) and the marker interfaces govern which behaviours act on which parameter objects.
Here are the marker interfaces and parameter objects - you'll note that Type3 is marked with both marker interfaces:
private interface IAlpha { }
private interface IBeta { }
private class Type1 : IAlpha { }
private class Type2 : IBeta { }
private class Type3 : IAlpha, IBeta { }
Here are the behaviours (IHandler<T>'s):
private interface IHandler<T> { }
private class AlphaHandler<TAlpha> : IHandler<TAlpha> where TAlpha : IAlpha { }
private class BetaHandler<TBeta> : IHandler<TBeta> where TBeta : IBeta { }
This is the single method that will find all implementations of an open generic:
public IEnumerable<Type> GetLoadedOpenGenericImplementations(Type type)
{
var types =
from assembly in AppDomain.CurrentDomain.GetAssemblies()
from t in assembly.GetTypes()
where !t.IsAbstract
from i in t.GetInterfaces()
where i.IsGenericType
where i.GetGenericTypeDefinition() == type
select t;
return types;
}
And this is the code that configures the container for our tests:
private Container ContainerFactory()
{
var container = new Container();
var types = this.GetLoadedOpenGenericImplementations(typeof(IHandler<>));
container.RegisterAllOpenGeneric(typeof(IHandler<>), types);
container.RegisterOpenGeneric(
typeof(ContainerResolvedClass<>),
typeof(ContainerResolvedClass<>));
return container;
}
And finally, the test class ContainerResolvedClass<>
private class ContainerResolvedClass<T>
{
public readonly IEnumerable<IHandler<T>> Handlers;
public ContainerResolvedClass(IEnumerable<IHandler<T>> handlers)
{
this.Handlers = handlers;
}
}
I realise this post is quite long, but I hope it clearly demonstrates a possible solution to your problem...
This is a late response but maybe it will help others.
I have a pretty simple approach. I simply create a StrategyResolver so as not to depend directly on Unity.
public class StrategyResolver : IStrategyResolver
{
private IUnityContainer container;
public StrategyResolver(IUnityContainer unityContainer)
{
this.container = unityContainer;
}
public T Resolve<T>(string namedStrategy)
{
return this.container.Resolve<T>(namedStrategy);
}
}
Usage:
public class SomeClass: ISomeInterface
{
private IStrategyResolver strategyResolver;
public SomeClass(IStrategyResolver stratResolver)
{
this.strategyResolver = stratResolver;
}
public void Process(SomeDto dto)
{
IActionHandler actionHandler = this.strategyResolver.Resolve<IActionHandler>(dto.SomeProperty);
actionHandler.Handle(dto);
}
}
Registration:
container.RegisterType<IActionHandler, ActionOne>("One");
container.RegisterType<IActionHandler, ActionTwo>("Two");
container.RegisterType<IStrategyResolver, StrategyResolver>();
container.RegisterType<ISomeInterface, SomeClass>();
Now, the nice thing about this is that I will never have to touch the StrategyResolver ever again when adding new strategies in the future.
It's very simple, very clean, and I kept the dependency on Unity to a strict minimum. The only time I would have to touch the StrategyResolver is if I decide to change container technology, which is very unlikely to happen.
Hope this helps!
I generally use a combination of your Abstract Factory and Named Bindings options. After trying many different approaches, I find this approach to be a decent balance.
What I do is create a factory that essentially wraps the instance of the container. See the section in Mark's article called Container-based Factory. As he suggests, I make this factory part of the composition root.
To make my code a little cleaner and less "magic string" based, I use an enum to denote the different possible strategies, and use the .ToString() method to register and resolve.
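Roughly, the enum plus the container-wrapping factory might look like this (DataAccessStrategy and the accessor types are illustrative):
// Enum instead of scattered magic strings; the names double as registration names.
public enum DataAccessStrategy { Default, Alpha, Beta }

// Registration, in the Composition Root:
// container.RegisterType<IDataAccess, DefaultAccessor>(DataAccessStrategy.Default.ToString());
// container.RegisterType<IDataAccess, AlphaAccessor>(DataAccessStrategy.Alpha.ToString());
// container.RegisterType<IDataAccess, BetaAccessor>(DataAccessStrategy.Beta.ToString());

// The factory that wraps the container, also part of the Composition Root:
public class DataAccessFactory
{
    private readonly IUnityContainer container;

    public DataAccessFactory(IUnityContainer container)
    {
        this.container = container;
    }

    public IDataAccess Create(DataAccessStrategy strategy)
    {
        return this.container.Resolve<IDataAccess>(strategy.ToString());
    }
}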
From your Cons of these approaches:
Typically binds the caller to the IOC Container
In this approach, the container is referenced in the factory, which is part of the Composition Root, so this is no longer an issue (in my opinion).
. . . and certainly requires the caller to know something about the strategy (such as the name "Alpha"). Every new strategy must be manually added to the list of bindings. This approach is not suitable for handling multiple strategies in an object graph. In short, it does not meet requirements.
At some point, code needs to be written to acknowledge the mapping between the structure that provides the implementation (container, provider, factory, etc.) and the code that requires it. I don't think you can get around this unless you want to use something that is purely convention-based.
The constructors of each strategy might have different needs. But now the responsibility of constructor injection has been transferred to the abstract factory from the container. In other words, every time a new strategy is added it may be necessary to modify the corresponding abstract factory.
This approach solves this concern completely.
Heavy use of strategies means heavy amounts of creating abstract factories.[...]
Yes, you will need one abstract factory for each set of strategies.
If this is a multi-threaded application and the "ambient context" is indeed provided by thread-local-storage, then by the time an object is using an injected abstract-factory to create the type it needs, it may be operating on a different thread which no longer has access to the necessary thread-local-storage value.
This will no longer be an issue since thread-local storage will not be used.
I don't feel that there is a perfect solution, but this approach has worked well for me.

Creating API that is fluent

How does one go about creating an API that is fluent in nature?
Is this using extension methods primarily?
This article explains it much better than I ever could.
EDIT, can't squeeze this in a comment...
There are two sides to interfaces: the implementation and the usage. There's more work to be done on the creation side, I agree with that; however, the main benefits can be found on the usage side of things. Indeed, for me the main advantage of fluent interfaces is a more natural, easier to remember and use, and, why not, more aesthetically pleasing API. And just maybe, the effort of having to squeeze an API into a fluent form may lead to a better thought-out API?
As Martin Fowler says in the original article about fluent interfaces:
Probably the most important thing to notice about this style is that the intent is to do something along the lines of an internal DomainSpecificLanguage. Indeed this is why we chose the term 'fluent' to describe it, in many ways the two terms are synonyms. The API is primarily designed to be readable and to flow. The price of this fluency is more effort, both in thinking and in the API construction itself. The simple API of constructor, setter, and addition methods is much easier to write. Coming up with a nice fluent API requires a good bit of thought.
As in most cases API's are created once and used over and over again, the extra effort may be worth it.
And verbose? I'm all for verbosity if it serves the readability of a program.
MrBlah,
Though you can write extension methods to write a fluent interface, a better approach is using the builder pattern. I'm in the same boat as you and I'm trying to figure out a few advanced features of fluent interfaces.
Below you'll see some sample code that I created in another thread
public class Coffee
{
private bool _cream;
private int _ounces;
public static Coffee Make { get { return new Coffee(); } }
public Coffee WithCream()
{
_cream = true;
return this;
}
public Coffee WithOuncesToServe(int ounces)
{
_ounces = ounces;
return this;
}
}
var myMorningCoffee = Coffee.Make.WithCream().WithOuncesToServe(16);
While many people cite Martin Fowler as being a prominent exponent in the fluent API discussion, his early design claims actually revolve around a fluent builder pattern or method chaining. Fluent APIs can be further evolved into actual internal domain-specific languages. An article that explains how a BNF notation of a grammar can be manually transformed into a "fluent API" can be seen here:
http://blog.jooq.org/2012/01/05/the-java-fluent-api-designer-crash-course/
It transforms a BNF grammar (shown as a diagram in the linked article) into this Java API:
// Initial interface, entry point of the DSL
interface Start {
End singleWord();
End parameterisedWord(String parameter);
Intermediate1 word1();
Intermediate2 word2();
Intermediate3 word3();
}
// Terminating interface, might also contain methods like execute();
interface End {
void end();
}
// Intermediate DSL "step" extending the interface that is returned
// by optionalWord(), to make that method "optional"
interface Intermediate1 extends End {
End optionalWord();
}
// Intermediate DSL "step" providing several choices (similar to Start)
interface Intermediate2 {
End wordChoiceA();
End wordChoiceB();
}
// Intermediate interface returning itself on word3(), in order to allow
// for repetitions. Repetitions can be ended any time because this
// interface extends End
interface Intermediate3 extends End {
Intermediate3 word3();
}
Java and C# being somewhat similar, the example certainly translates to your use-case as well. The above technique has been heavily used in jOOQ, a fluent API / internal domain-specific language modelling the SQL language in Java.
This is a very old question, and this answer should probably be a comment rather than an answer, but I think it's a topic worth continuing to talk about, and this response is too long to be a comment.
The original thinking concerning "fluency" seems to have been basically about adding power and flexibility (method chaining, etc) to objects while making code a bit more self-explanatory.
For example
Company a = new Company("Calamaz Holding Corp");
Person p = new Person("Clapper", 113, 24, "Frank");
Company c = new Company(a, "Floridex", p, 1973);
is less "fluent" than
Company c = new Company().Set
    .Name("Floridex")
    .Manager(
        new Person().Set.FirstName("Frank").LastName("Clapper").Awards(24)
    )
    .YearFounded(1973)
    .ParentCompany(
        new Company().Set.Name("Calamaz Holding Corp")
    );
But to me, the latter is not really any more powerful or flexible or self-explanatory than
Company c = new Company(){
Name = "Floridex",
Manager = new Person(){ FirstName="Frank", LastName="Clapper", Awards=24 },
YearFounded = 1973,
ParentCompany = new Company(){ Name="Calamaz Holding Corp." }
};
In fact, I would call this last version easier to create, read and maintain than the previous one, and I would say that it requires significantly less baggage behind the scenes as well. That, to me, is important for (at least) two reasons:
1 - The cost associated with creating and maintaining layers of objects (no matter who does it) is just as real, relevant and important as the cost associated with creating and maintaining the code that consumes them.
2 - Code bloat embedded in layers of objects creates just as many (if not more) problems as code bloat in the code that consumes those objects.
Using the last version means you can add a (potentially useful) property to the Company class simply by adding one, very simple line of code.
That's not to say that I feel there's no place for method chaining. I really like being able to do things like (in JavaScript)
var _this = this;
Ajax.Call({
    url: '/service/getproduct',
    parameters: { productId: productId }
})
.Done(
    function(product){
        _this.showProduct(product);
    }
)
.Fail(
    function(error){
        _this.presentError(error);
    }
);
..where (in the hypothetical case I'm imagining) Done and Fail were additions to the original Ajax object, and were able to be added without changing any of the original Ajax object code or any of the existing code that made use of the original Ajax object, and without creating one-off things that were exceptions to the general organization of the code.
So I have definitely found value in making a subset of an object's functions return the 'this' object. In fact whenever I have a function that would otherwise return void, I consider having it return this.
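For example, in C# a mutator that would otherwise return void can simply return the instance (a trivial sketch):
using System.Collections.Generic;

public class Order
{
    private readonly List<string> notes = new List<string>();

    // Returning 'this' instead of void lets callers chain when it is convenient.
    public Order AddNote(string note)
    {
        notes.Add(note);
        return this;
    }
}

// Usage: order.AddNote("gift wrap").AddNote("expedite");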
But I haven't yet really found significant value in adding "fluent interfaces" (e.g. "Set") to an object, although theoretically it seems like there could be a sort of namespace-like code organization that could arise out of the practice of doing that, which might be worthwhile. ("Set" might not be particularly valuable, but "Command", "Query" and "Transfer" might, if it helped organize things and facilitate and minimize the impact of additions and changes.) One of the potential benefits of such a practice, depending on how it was done, might be an improvement in a coder's typical level of care and attention to protection levels - the lack of which has certainly caused great volumes of grief.
KISS: Keep it simple, stupid.
Fluent design is about one aesthetic design principle used throughout the API. Though the methodology you use in your API can change slightly, it is generally better to stay consistent.
Even though you may think 'everyone can use this API, because it uses all different types of methodologies', the truth is the user will start feeling lost, because you're constantly changing the structure of the API to a new design principle or naming convention.
If you wish to change halfway through to a different design principle, e.g. converting from error codes to exception handling because of some higher commanding power, it would be folly and would normally entail lots of pain. It is better to stay the course and add functionality that your customers can use and sell than to force them to re-write and re-discover all their problems again.
Following from the above, you can see that there is more at work in writing a fluent API than meets the eye. There are psychological and aesthetic choices to make before beginning to write one, and even then the feeling, need, and desire to conform to customers' demands and stay consistent is the hardest of all.
What is a fluent API
Wikipedia defines them here http://en.wikipedia.org/wiki/Fluent_interface
Why Not to use a fluent interface
I would suggest not implementing a traditional fluent interface, as it increases the amount of code you need to write, complicates your code and is just adding unnecessary boilerplate.
Another option, do nothing!
Don't implement anything. Don't provide "easy" constructors for setting properties and don't provide a clever interface to help your client. Allow the client to set the properties however they normally would. In .Net C# or VB this could be as simple as using object initializers.
Car myCar = new Car { Name = "Chevrolet Corvette", Color = Color.Yellow };
So you don't need to create any clever interface in your code, and this is very readable.
If you have very complex sets of properties which must be set, or set in a certain order, then use a separate configuration object and pass it to the class via a separate property.
CarConfig conf = new CarConfig { Color = Color.Yellow, Fabric = Fabric.Leather };
Car myCar = new Car { Config = conf };
No and yes. The basics are a good interface or interfaces for the types that you want to behave fluently. Libraries with extension methods can extend this behavior and return the interface. Extension methods give others the possibility to extend your fluent API with more methods.
A good fluent design can be hard and takes a rather long trial and error period to totally finetune the basic building blocks. Just a fluent API for configuration or setup is not that hard.
One learns to build a fluent API by looking at existing APIs. Compare FluentNHibernate with the fluent .NET APIs or the ICriteria fluent interfaces. Many configuration APIs are also designed "fluently".
With a fluent API:
myCar.SetColor(Color.Blue).SetName("Aston Martin");
Check out this video http://www.viddler.com/explore/dcazzulino/videos/8/
Writing a fluent API is complicated; that's why I've written Diezel, a fluent API generator for Java. It generates the API with interfaces (of course) to:
control the calling flow
catch generic types (like the Guice ones)
It also generates implementations.
It's a Maven plugin.
I think the answer depends on the behaviour you want to achieve for your fluent API. For a stepwise initialization the easiest way is, in my opinion, to create a builder class that implements different interfaces used for the different steps. E.g. if you have a class Student with the properties Name, DateOfBirth and Semester the implementation of the builder could look like so:
public class CreateStudent : CreateStudent.IBornOn, CreateStudent.IInSemester
{
private readonly Student student;
private CreateStudent()
{
student = new Student();
}
public static IBornOn WithName(string name)
{
CreateStudent createStudent = new CreateStudent();
createStudent.student.Name = name;
return createStudent;
}
public IInSemester BornOn(DateOnly dateOfBirth)
{
student.DateOfBirth = dateOfBirth;
return this;
}
public Student InSemester(int semester)
{
student.Semester = semester;
return student;
}
public interface IBornOn
{
IInSemester BornOn(DateOnly dateOfBirth);
}
public interface IInSemester
{
Student InSemester(int semester);
}
}
The builder can then be used as follows:
Student student = CreateStudent.WithName("Robert")
.BornOn(new DateOnly(2002, 8, 3)).InSemester(2);
Admittedly, writing an API for more than three properties becomes tedious. For this reason I have implemented a source generator that can do this work for you: M31.FluentAPI.
