BDD naming for several methods - C#

The BDD naming approach works perfectly when there's one method in the class you're testing. Let's assume we have a Connector class which has a Connect method:
Should_change_status_to_Connected_if_Disconnected
Beautiful, right? But I get confused when I have to name tests for a class with several methods (let's assume we added a Disconnect method to our class).
I see two possible solutions. The first one is to add a prefix with a method name like:
Should_change_status_to_Connected_if_Disconnected_when_Connect_was_called
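For illustration, this is roughly how that prefix style could look with NUnit; the Connector members used here (a parameterless constructor, Connect/Disconnect, and a Status property with Connected/Disconnected values) are assumptions made just for the sake of the example:
using NUnit.Framework;

[TestFixture]
public class ConnectorTests
{
    [Test]
    public void Should_change_status_to_Connected_if_Disconnected_when_Connect_was_called()
    {
        var connector = new Connector();   // assumed to start in the Disconnected state
        connector.Connect();
        Assert.That(connector.Status, Is.EqualTo(ConnectionStatus.Connected));
    }

    [Test]
    public void Should_change_status_to_Disconnected_if_Connected_when_Disconnect_was_called()
    {
        var connector = new Connector();
        connector.Connect();               // reach the Connected precondition first
        connector.Disconnect();
        Assert.That(connector.Status, Is.EqualTo(ConnectionStatus.Disconnected));
    }
}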
Another approach is to introduce nested test classes for each method you're testing.
public class ConnectorTests
{
    public class ConnectTests
    {
        public void Should_change_status_to_Connected_if_Disconnected()
        {
            ...
        }
    }

    public class DisconnectTests
    {
        public void Should_change_status_to_Disconnected_if_Connected()
        {
            ...
        }
    }
}
Honestly, both approaches feel a little off (maybe just because I'm not used to them). What's the recommended way to go?

I've written dozens of tests using different naming styles. Such test methods are hard to read because of their long names, they exceed line-length limits, and underscore-separated method names often go against naming conventions. Difficulties begin when you want to add "And" conditions or preconditions to your BDD scenarios, like "When Connector is initialized Should change status to Connected if Disconnected AND network is available AND argument1 is... AND argument2 is...". So you have to group your test cases into many classes, sub-folders, etc. This increases development and maintenance time.
An alternative in C# is to write tests the way JavaScript testing frameworks such as Jasmine or Jest do. For unit tests of classes and methods I'd use the Arrange/Act/Assert style, and the BDD style for Feature/Story scenarios, but both styles can be used. In C# I use my Heleonix.Testing.NUnit library and write tests in the AAA or BDD (GWT) style:
using NUnit.Framework;
using Heleonix.Testing.NUnit.Aaa;
using static Heleonix.Testing.NUnit.Aaa.AaaSpec;

[ComponentTest(Type = typeof(Connector))]
public static class ConnectorTests
{
    [MemberTest(Name = nameof(Connector.Connect))]
    public static void Connect()
    {
        Connector connector = null;

        Arrange(() =>
        {
            connector = new Connector();
        });

        When("the Connect is called", () =>
        {
            Act(() =>
            {
                connector.Connect();
            });

            And("the Connector is disconnected", () =>
            {
                Arrange(() =>
                {
                    connector.Disconnect();
                });
            });

            Should("change the status to Connected", () =>
            {
                Assert.That(connector.Disconnected, Is.False);
            });
        });
    }
}
What matters to me is that a few months later I can open such tests and quickly recall what they cover, instead of sitting for hours trying to understand what and how they test.

In my case, I first try to separate classes depending on pre- and post-conditions, so I can group some behaviors and keep related things together. For example, in your case one precondition could be "Disconnected", so you can prepare the "disconnected environment" using attributes like ClassInitialize, TestInitialize, TestCleanup, ClassCleanup, etc. (there are some examples on MSDN).
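For instance, a minimal MSTest sketch of grouping by the "Disconnected" precondition; the Connector type and its Status property with Connected/Disconnected values are assumptions carried over from the question:
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class WhenConnectorIsDisconnected
{
    private Connector _connector;

    [TestInitialize]
    public void PrepareDisconnectedEnvironment()
    {
        _connector = new Connector();   // assumed to be Disconnected right after construction
    }

    [TestMethod]
    public void Connect_changes_status_to_Connected()
    {
        _connector.Connect();
        Assert.AreEqual(ConnectionStatus.Connected, _connector.Status);
    }

    [TestCleanup]
    public void TearDownDisconnectedEnvironment()
    {
        _connector.Disconnect();
    }
}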
And please, as the other developers have recommended, don't forget the naming conventions.
Hope this helps, greetings.

Since test cases are totally independent of each other, you must use a static class to initialize the values, connections, etc. that you will use in your tests later. If you want individual values and initializers, declare them in your classes individually. I use the NUnit framework for this.
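As a rough NUnit sketch of that idea (the shared connection string value and the Connector members are placeholders, not part of the original answer):
using NUnit.Framework;

[TestFixture]
public class ConnectorTests
{
    private static string _connectionString;   // shared, expensive-to-create state
    private Connector _connector;              // fresh instance per test

    [OneTimeSetUp]
    public void InitSharedState()
    {
        _connectionString = "Server=localhost;Database=Test";   // placeholder value
    }

    [SetUp]
    public void InitPerTestState()
    {
        _connector = new Connector();
    }

    [Test]
    public void Connect_changes_status_to_Connected()
    {
        _connector.Connect();
        Assert.That(_connector.Status, Is.EqualTo(ConnectionStatus.Connected));
    }
}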
And by the way, you are in C#, so use the naming conventions of .NET developers...

Related

Faker-cs Package for .NET: Usage Example?

I installed the Faker Port for C# (https://github.com/oriches/faker-cs) in my project but the project site doesn't give examples of usage.
Can someone post some examples of basic mock data generation?
In this example, I'm using a project that uses MVC4, EF Code First and Automatic Migrations. So if you're using the same, your Migrations\Configuration.cs file should be like this:
internal sealed class Configuration : DbMigrationsConfiguration<MyProject.Models.MyProjectContext>
{
    public Configuration()
    {
        AutomaticMigrationsEnabled = true;
    }
    ...
For single elements, the example is trivial:
protected override void Seed(MyProject.Models.MyProjectContext context)
{
    context.Customers.AddOrUpdate(
        c => c.Name,
        new Customer { Name = Faker.Name.FullName() }
    );
    context.SaveChanges();
    ...
For a defined number of elements, I liked the idea of using a lambda expression, like Factory Girl (for Ruby on Rails) does. In another question I asked (DbMigrations in .NET w/ Code First: How to Repeat Instructions with Expressions), the answer uses Enumerable.Range() to specify the number of elements:
protected override void Seed(MyProject.Models.MyProjectContext context)
{
    context.Companies.AddOrUpdate(
        p => p.Name,
        Enumerable.Range(1, 10)
            .Select(x => new Company { Name = Faker.Company.Name() })
            .ToArray()
    );
    context.SaveChanges();
}
...
Resources are scarce, but this article seems to cover what one would expect.
Also try using the Assembly/Object Browser to look at the available resources (e.g. which classes, methods and so on are contained in the library). Furthermore, the library contains NUnit tests, so a look at the tests' code may also prove beneficial.
As Faker.NET is a port of the Ruby library ffaker, one may also assume that the code is similar in intent, so ffaker's unit tests can serve as a small reference.
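As a tiny, self-contained illustration that uses only the two generators already shown above (Faker.Name.FullName() and Faker.Company.Name()), everything else being plain BCL code:
using System;
using System.Linq;

public class FakerSmokeTest
{
    public static void Main()
    {
        // Generate five fake people, each paired with a fake employer.
        var names = Enumerable.Range(1, 5)
            .Select(i => Faker.Name.FullName())
            .ToArray();

        foreach (var name in names)
        {
            Console.WriteLine("{0} works at {1}", name, Faker.Company.Name());
        }
    }
}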

Can't stub a ReadOnly property twice?

I have the following code. I have a somewhat legitimate reason for stubbing that property twice (See explanation below). It looks like it's only letting me stub it once.
private IStatus _status;

[SetUp()]
public void Setup() {
    this._status = MockRepository.GenerateStub<IStatus>();
    this._status.Stub(x => x.Connected()).Return(true);
    // This next line would usually be in the Setup for a subclass
    this._status.Stub(x => x.Connected()).Return(false);
}

[Test()]
public void TestTheTestFramework() {
    Assert.IsFalse(this._status.Connected()); // Fails...
}

public interface IStatus {
    bool Connected { get; }
}
I tried downloading the most recent build (3.6 build 21), but still have the same issue. Any ideas on why I can't do this? I tried changing the Connected property on IStatus to be a function and the test still failed. I get the same behavior in VB.Net... Bug?
Explanation on the double-stubbing
I'm structuring my tests around inheritance. That way I can do common setup code just once, using injected mocked dependencies to simulate different conditions. I might provide a base/default stubbed value (e.g. yes, we're connected) which I'd want to override in the subclass that tests the behavior of the SUT when the connection is down. I usually end up with code like this.
[TestFixture()]
public class WhenPublishingAMessage {
    // Common setup, inject SUT with mocked dependencies, etc...

    [Test()]
    public void ShouldAlwaysWriteLogMessage() {
        // Example of a test that would pass for any sub-condition
    }

    [TestFixture()]
    public class AndNoConnection : WhenPublishingAMessage {
        // Do any additional setup, stub dependencies to simulate no connection
        // Run tests for this condition
    }

    [TestFixture()]
    public class AndHaveConnection : WhenPublishingAMessage {
        // Do any additional setup and run tests for this condition
    }
}
Edit
This post on the Rhino Mocks google group might be helpful. It looks like I might need to call this._status.BackToRecord(); to reset the state, so to speak... also, tacking on .Repeat.Any() to the second stub statement seemed to help as well. I'll have to post more details later.
You can specify .Repeat.Once() on the first result so that it will be used once and then the next one takes over, as explained in this other Stack Overflow question.
To sum everything up, there are three different possible answers:
Tallmaris's answer:
Specify a specific number of times to return on the first stub using .Repeat.Times(n), .Repeat.Once(), .Repeat.Twice(), etc. For example:
this._status.Stub(x => x.Connected()).Return(true).Repeat.Once();
this._status.Stub(x => x.Connected()).Return(false);
This method works pretty well when I know the number of times the stub will be called before I change its behavior (e.g. it just gets called once in the constructor).
Reset the mocked object
I don't like this method since I'd like to avoid the (at least to me) more cumbersome Expect/Verify and Record/Replay style syntax. It was recommended to me in response to a post I made to the Rhino Mocks Google group with the same title as this question.
this._status.Stub(x => x.Connected).Return(true);
this._status.GetMockRepository().BackToRecordAll();
this._status.GetMockRepository().ReplayAll();
this._status.Stub(x => x.Connected).Return(false);
Using the magical Repeat.Any
I found that using .Repeat.Any() on the second stub overrode the first one... I feel a bit bad adding some extra 'magic' code to make it work, but in cases where you don't know how many times to tell the first stub to return, this option will work.
this._status.Stub(x => x.Connected()).Return(true);
this._status.Stub(x => x.Connected()).Return(false).Repeat.Any();
Note: you can't do .Repeat.Any() more than once.
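For completeness, here is a rough sketch of how the Repeat.Any() workaround might fit the inheritance-based layout from the question; this is an assumption-laden illustration (Rhino Mocks and NUnit, as above) rather than a verified recipe:
using NUnit.Framework;
using Rhino.Mocks;

[TestFixture]
public class WhenPublishingAMessage
{
    protected IStatus Status;

    [SetUp]
    public virtual void Setup()
    {
        Status = MockRepository.GenerateStub<IStatus>();
        Status.Stub(x => x.Connected).Return(true);   // base/default condition: connected
    }

    [TestFixture]
    public class AndNoConnection : WhenPublishingAMessage
    {
        [SetUp]
        public override void Setup()
        {
            base.Setup();
            // The later stub wins because of Repeat.Any(), per option 3 above.
            Status.Stub(x => x.Connected).Return(false).Repeat.Any();
        }

        [Test]
        public void StatusReportsNoConnection()
        {
            Assert.IsFalse(Status.Connected);
        }
    }
}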

How can I avoid adding getters to facilitate unit testing?

After reading a blog post mentioning how it seems wrong to expose a public getter just to facilitate testing, I couldn't find any concrete examples of better practices.
Suppose we have this simple example:
public class ProductNameList
{
    private IList<string> _products = new List<string>();

    public void AddProductName(string productName)
    {
        _products.Add(productName);
    }
}
Let's say for object-oriented design reasons, I have no need to publicly expose the list of products. How then can I test whether AddProductName() did its job? (Maybe it added the product twice, or not at all.)
One solution is to expose a read-only version of _products where I can test whether it has only one product name -- the one I passed to AddProductName().
The blog post I mentioned earlier says it's more about interaction (i.e., did the product name get added) rather than state. However, state is exactly what I'm checking. I already know AddProductName() has been called -- I want to test whether the object's state is correct once that method has done its work.
Disclaimer: Although this question is similar to
Balancing Design Principles: Unit Testing, (1) the language is different (C# instead of Java), (2) this question has sample code, and (3) I don't feel the question was adequately answered (i.e., code would have helped demonstrate the concept).
Unit tests should test the public API.
If you have "no need to publicly expose the list of products", then why would you care whether AddProductName did its job? What possible difference would it make if the list is entirely private and never, ever affected anything else?
Find out what effect AddProductName has on the state that can be detected through the API, and test that.
Very similar question here: Domain Model (Exposing Public Properties)
You could make your accessors protected so you can mock them, or use internal so that you can access the property in a test, but IMO that would be wrong, as you have suggested.
I think sometimes we get so caught up in wanting to make sure that every little thing in our code is tested. Sometimes we need to take a step back and ask why we are storing this value and what its purpose is. Then, instead of testing that the value gets set, we can start testing that the behaviour of the component is correct.
EDIT
One thing you can do in your scenario is to have a bastard constructor where you inject an IList and then you test that you have added a product:
public class ProductNameList
{
    private IList<string> _products;

    internal ProductNameList(IList<string> products)
    {
        _products = products;
    }
    ...
}
You would then test it like this:
[Test]
public void FooTest()
{
    var productList = new List<string>();
    var productNameList = new ProductNameList(productList);

    productNameList.AddProductName("Foo");

    Assert.IsTrue(productList[0] == "Foo");
}
You will need to remember to make internals visible to your test assembly.
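That is normally done with the InternalsVisibleTo attribute; the test assembly name below is just a placeholder:
// In AssemblyInfo.cs (or any source file) of the production assembly:
using System.Runtime.CompilerServices;

[assembly: InternalsVisibleTo("MyProject.Tests")]   // placeholder assembly name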
Make _products protected instead of private. In your mock, you can add an accessor.
To test whether AddProductName() did its job, instead of using a public getter for _products, make a call to GetProductNames() - or the equivalent that's defined in your API. Such a function may not necessarily be in the same class.
Now, if your API does not expose any way to get information about product names, then AddProductName() has no observable side effects (in which case it is a meaningless function).
If AddProductName() does have side effects, but they are indirect - say, a method in ProductList that writes the list of product names to a file - then ProductList should be split into two classes: one that manages the list, and another that calls its Add and Get API and performs the side effects.
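As a hedged illustration of that last suggestion, suppose the public API did expose some query member (here a hypothetical ContainsProductName); the test then checks only observable behaviour:
using System.Collections.Generic;
using NUnit.Framework;

public class ProductNameList
{
    private readonly IList<string> _products = new List<string>();

    public void AddProductName(string productName)
    {
        _products.Add(productName);
    }

    // Hypothetical query member; only worth adding if consumers genuinely need it.
    public bool ContainsProductName(string productName)
    {
        return _products.Contains(productName);
    }
}

[TestFixture]
public class ProductNameListTests
{
    [Test]
    public void AddProductName_makes_the_name_observable_through_the_public_api()
    {
        var list = new ProductNameList();

        list.AddProductName("Foo");

        Assert.IsTrue(list.ContainsProductName("Foo"));
    }
}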

Unit testing, mocking - simple case: Service - Repository

Consider a following chunk of service:
public class ProductService : IProductService {
    private IProductRepository _productRepository;

    // Some initialization stuff

    public Product GetProduct(int id) {
        try {
            return _productRepository.GetProduct(id);
        } catch (Exception e) {
            // log, wrap then throw
        }
    }
}
Let's consider a simple unit test:
[Test]
public void GetProduct_return_the_same_product_as_getProduct_on_productRepository() {
    var product = EntityGenerator.Product();
    _productRepositoryMock.Setup(pr => pr.GetProduct(product.Id)).Returns(product);

    Product returnedProduct = _productService.GetProduct(product.Id);

    Assert.AreEqual(product, returnedProduct);
    _productRepositoryMock.VerifyAll();
}
At first it seems that this test is ok. But let's change our service method a little bit:
public Product GetProduct(int id) {
    try {
        var product = _productRepository.GetProduct(id);
        product.Owner = "totallyDifferentOwner";
        return product;
    } catch (Exception e) {
        // log, wrap then throw
    }
}
How can I rewrite the given test so that it passes with the first service method and fails with the second one?
How do you handle this kind of simple scenario?
HINT 1: The given test is bad because product and returnedProduct are actually the same reference.
HINT 2: Implementing equality members (object.Equals) is not the solution.
HINT 3: For now, I create a clone of the Product instance (expectedProduct) with AutoMapper - but I don't like this solution.
HINT 4: I'm not testing that the SUT does NOT do something. I'm trying to test that the SUT DOES return the same object as is returned from the repository.
Personally, I wouldn't care about this. The test should make sure that the code is doing what you intend. It's very hard to test what code is not doing; I wouldn't bother in this case.
The test actually should just look like this:
[Test]
public void GetProduct_GetsProductFromRepository()
{
    var product = EntityGenerator.Product();
    _productRepositoryMock
        .Setup(pr => pr.GetProduct(product.Id))
        .Returns(product);

    Product returnedProduct = _productService.GetProduct(product.Id);

    Assert.AreSame(product, returnedProduct);
}
I mean, it's one line of code you are testing.
Why don't you mock the product as well as the productRepository?
If you mock the product using a strict mock, you will get a failure when the repository touches your product.
If this is a completely ridiculous idea, can you please explain why? Honestly, I'd like to learn.
One way of thinking of unit tests is as coded specifications. When you use the EntityGenerator to produce instances both for the Test and for the actual service, your test can be seen to express the requirement
The Service uses the EntityGenerator to produce Product instances.
This is what your test verifies. It's underspecified because it doesn't mention if modifications are allowed or not. If we say
The Service uses the EntityGenerator to produce Product instances, which cannot be modified.
Then we get a hint as to the test changes needed to capture the error:
var product = EntityGenerator.Product();
// [ Change ]
var originalOwner = product.Owner;
// assuming owner is an immutable value object, like String
// [...] - record other properties as well.
Product returnedProduct = _productService.GetProduct(product.Id);
Assert.AreEqual(product, returnedProduct);
// [ Change ] verify the product is equivalent to the original spec
Assert.AreEqual(originalOwner, returnedProduct.Owner);
// [...] - test other properties as well
(The change is that we retrieve the owner from the freshly created Product and check the owner from the Product returned from the service.)
This embodies the fact that the Owner and other product properties must equal the original values from the generator. This may seem like I'm stating the obvious, since the code is pretty trivial, but it runs quite deep if you think in terms of requirement specifications.
I often "test my tests" by stipulating "if I change this line of code, tweak a critical constant or two, or inject a few code burps (e.g. changing != to ==), which test will capture the error?" Doing it for real finds if there is a test that captures the problem. Sometimes not, in which case it's time to look at the requirements implicit in the tests, and see how we can tighten them up. In projects with no real requirements capture/analysis this can be a useful tool to toughen up tests so they fail when unexpected changes occur.
Of course, you have to be pragmatic. You can't reasonably expect to handle all changes - some will simply be absurd and the program will crash. But logical changes like the Owner change are good candidates for test strengthening.
By dragging talk of requirements into a simple coding fix, some may think I've gone off the deep end, but thorough requirements help produce thorough tests, and if you have no requirements, then you need to work doubly hard to make sure your tests are thorough, since you're implicitly doing requirements capture as you write the tests.
EDIT: I'm answering this from within the constraints set in the question. Given a free choice, I would suggest not using the EntityGenerator to create Product test instances, and instead creating them "by hand" and using an equality comparison. Or, more directly, compare the fields of the returned Product to specific (hard-coded) values in the test, again without using the EntityGenerator in the test.
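To make that concrete, here is a small sketch of the hand-built variant, using the Moq syntax already present in this thread and assuming Product has settable Id and Owner properties:
[Test]
public void GetProduct_returns_product_with_unchanged_fields()
{
    // Hand-built product with hard-coded values instead of EntityGenerator.
    var stored = new Product { Id = 42, Owner = "originalOwner" };
    _productRepositoryMock.Setup(pr => pr.GetProduct(42)).Returns(stored);

    Product returned = _productService.GetProduct(42);

    Assert.AreEqual(42, returned.Id);
    Assert.AreEqual("originalOwner", returned.Owner);   // fails once the service rewrites Owner
}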
Q1: Don't make changes to code then write a test. Write a test first for the expected behavior. Then you can do whatever you want to the SUT.
Q2: You don't make the changes in your Product Gateway to change the owner of the product. You make the change in your model.
But if you insist, then listen to your tests. They are telling you that products can be pulled from the gateway with incorrect owners. Oops, looks like a business rule. It should be tested for in the model.
Also, you're using a mock. Why are you testing an implementation detail? The gateway only cares that _productRepository.GetProduct(id) returns a product, not what the product is.
If you test in this manner you will be creating fragile tests. What if Product changes further? Now you have failing tests all over the place.
Your consumers of product (MODEL) are the only ones that care about the implementation of Product.
So your gateway test should look like this:
[Test]
public void GetProduct_return_the_same_product_as_getProduct_on_productRepository() {
    var product = EntityGenerator.Product();
    _productRepositoryMock.Setup(pr => pr.GetProduct(product.Id)).Returns(product);

    _productService.GetProduct(product.Id);

    _productRepositoryMock.VerifyAll();
}
Don't put business logic where it doesn't belong! And its corollary: don't test for business logic where there should be none.
If you really want to guarantee that the service method doesn't change the attributes of your products, you have two options:
Define the expected product attributes in your test and assert that the resulting product matches these values. (This appears to be what you're doing now by cloning the object.)
Mock the product and specify expectations to verify that the service method does not change its attributes.
This is how I'd do the latter with NMock:
// If you're not a purist, go ahead and verify all the attributes in a single
// test - Get_Product_Does_Not_Modify_The_Product_Returned_By_The_Repository
[Test]
public void Get_Product_Does_Not_Modify_Owner() {
    Product mockProduct = mockery.NewMock<Product>(MockStyle.Transparent);

    Stub.On(_productRepositoryMock)
        .Method("GetProduct")
        .Will(Return.Value(mockProduct));

    Expect.Never
        .On(mockProduct)
        .SetProperty("Owner");

    _productService.GetProduct(0);

    mockery.VerifyAllExpectationsHaveBeenMet();
}
My previous answer stands, though it assumes the members of the Product class that you care about are public and virtual. This is not likely if the class is a POCO / DTO.
What you're looking for might be rephrased as a way to compare the values (not the instances) of the objects.
One way to compare is to see if they match when serialized. I did this recently for some code... I was replacing a long parameter list with a parameterized object. The code is crufty; I don't want to refactor it, though, as it's going away soon anyhow. So I just do this serialization comparison as a quick way to see if they have the same value.
I wrote some utility functions... Assert2.IsSameValue(expected,actual) which functions like NUnit's Assert.AreEqual(), except it serializes via JSON before comparing. Likewise, It2.IsSameSerialized() can be used to describe parameters passed to mocked calls in a manner similar to Moq.It.Is().
using System.Web.Script.Serialization;
using Moq;
using NUnit.Framework;

public class Assert2
{
    public static void IsSameValue(object expectedValue, object actualValue) {
        JavaScriptSerializer serializer = new JavaScriptSerializer();
        var expectedJSON = serializer.Serialize(expectedValue);
        var actualJSON = serializer.Serialize(actualValue);
        Assert.AreEqual(expectedJSON, actualJSON);
    }
}

public static class It2
{
    public static T IsSameSerialized<T>(T expectedRecord) {
        JavaScriptSerializer serializer = new JavaScriptSerializer();
        string expectedJSON = serializer.Serialize(expectedRecord);
        return Match<T>.Create(delegate(T actual) {
            string actualJSON = serializer.Serialize(actual);
            return expectedJSON == actualJSON;
        });
    }
}
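A possible usage in the scenario from the question, again assuming Product has settable Id and Owner properties (the hand-made copy stands in for the AutoMapper clone mentioned in HINT 3):
// Hypothetical usage: 'expected' is a separate hand-made copy, so the comparison is by value,
// not by reference, and fails when the service rewrites Owner.
var product = EntityGenerator.Product();
var expected = new Product { Id = product.Id, Owner = product.Owner };
_productRepositoryMock.Setup(pr => pr.GetProduct(product.Id)).Returns(product);

Product returned = _productService.GetProduct(product.Id);

Assert2.IsSameValue(expected, returned);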
Well, one way is to pass around a mock of the product rather than the actual product. Verify that nothing affects the product by making it strict. (I assume you are using Moq; it looks like you are.)
[Test]
public void GetProduct_return_the_same_product_as_getProduct_on_productRepository() {
    var product = new Mock<Product>(MockBehavior.Strict);
    product.SetupGet(p => p.Id).Returns(1); // strict mock: only Id may be touched
    _productRepositoryMock.Setup(pr => pr.GetProduct(product.Object.Id)).Returns(product.Object);

    Product returnedProduct = _productService.GetProduct(product.Object.Id);

    Assert.AreEqual(product.Object, returnedProduct);
    _productRepositoryMock.VerifyAll();
    product.VerifyAll();
}
That said, I'm not sure you should be doing this. The test is doing too much, and that might indicate there is another requirement somewhere. Find that requirement and create a second test. It might be that you just want to stop yourself from doing something stupid. I don't think that scales, because there are so many stupid things you can do. Trying to test for each one would take too long.
I'm not sure the unit test should care about what a given method does not do. There are a zillion possible steps. Strictly speaking, the test "GetProduct(id) returns the same product as getProduct(id) on productRepository" is correct with or without the line product.Owner = "totallyDifferentOwner".
However, you can create a test (if it is required) "GetProduct(id) returns a product with the same content as getProduct(id) on productRepository", where you create a (probably deep) clone of one product instance and then compare the contents of the two objects (so no object.Equals or object.ReferenceEquals).
Unit tests are not a guarantee of 100% bug-free and correct behaviour.
You can return an interface to product instead of a concrete Product.
Such as
public IProduct GetProduct(int id)
{
    return _productRepository.GetProduct(id);
}
And then verify the Owner property was not set:
Dep<IProduct>().AssertWasNotCalled(p => p.Owner = Arg.Is.Anything);
If you care about all of the properties and/or methods, then there is probably a pre-existing way to do this with Rhino. Otherwise you can write an extension method that probably uses reflection, such as:
Dep<IProduct>().AssertNoPropertyOrMethodWasCalled()
Our behaviour specifications are like so:
[Specification]
public class When_product_service_has_get_product_called_with_any_id
    : ProductServiceSpecification
{
    private int _productId;
    private IProduct _actualProduct;

    [It]
    public void Should_return_the_expected_product()
    {
        this._actualProduct.Should().Be.EqualTo(Dep<IProduct>());
    }

    [It]
    public void Should_not_have_the_product_modified()
    {
        Dep<IProduct>().AssertWasNotCalled(p => p.Owner = Arg<string>.Is.Anything);
        // or write your own extension method:
        // Dep<IProduct>().AssertNoPropertyOrMethodWasCalled();
    }

    public override void GivenThat()
    {
        var randomGenerator = new RandomGenerator();
        this._productId = randomGenerator.Generate<int>();

        Stub<IProductRepository, IProduct>(r => r.GetProduct(this._productId));
    }

    public override void WhenIRun()
    {
        this._actualProduct = Sut.GetProduct(this._productId);
    }
}
Enjoy.
If all consumers of ProductService.GetProduct() expect the same result as if they had asked it from the ProductRepository, why don't they just call ProductRepository.GetProduct() itself ?
It seems you have an unwanted Middle Man here.
There's not much value added to ProductService.GetProduct(). Dump it and have the client objects call ProductRepository.GetProduct() directly. Put the error handling and logging into ProductRepository.GetProduct() or the consumer code (possibly via AOP).
No more Middle Man, no more discrepancy problem, no more need to test for that discrepancy.
Let me state the problem as I see it.
You have a method and a test method. The test method validates the original method.
You change the system under test by altering the data. What you want to see is that the same unit test fails.
So in effect you're creating a test that verifies that the data in the data source matches the data in your fetched object AFTER the service layer returns it. That probably falls under the class of "integration test."
You don't have many good options in this case. Ultimately, you want to know that every property is the same as some passed-in property value. So you're forced to test each property independently. You could do this with reflection, but that won't work well for nested collections.
I think the real question is: why test your service model for the correctness of your data layer, and why write code in your service model just to break the test? Are you concerned that you or other users might set objects to invalid states in your service layer? In that case you should change your contract so that Product.Owner is read-only.
You'd be better off writing a test against your data layer to ensure that it fetches data correctly, then use unit tests to check the business logic in your service layer. If you're interested in more details about this approach reply in the comments.
Having looked at all 4 hints provided, it seems you want to make an object immutable at runtime. The C# language does not support that; it is possible only by refactoring the Product class itself. For the refactoring you can take the IReadonlyProduct approach and protect all setters from being called. However, this still allows modification of elements of containers like List<> returned by getters, and ReadOnlyCollection won't help either. Only WPF lets you change immutability at runtime, with the Freezable class.
So I see the only proper way to make sure objects have the same contents is to compare them. Probably the easiest way would be to add the [Serializable] attribute to all involved entities and do the serialization-with-comparison suggested by Frank Schwieterman.
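A bare sketch of the IReadonlyProduct idea; the interface name and members are illustrative only:
// Consumers of the read-only view cannot reach the setters, although, as noted
// above, mutable members such as List<> would still leak.
public interface IReadonlyProduct
{
    int Id { get; }
    string Owner { get; }
}

public class Product : IReadonlyProduct
{
    public int Id { get; set; }
    public string Owner { get; set; }
}

// A hypothetical service signature exposing only the read-only view:
// IReadonlyProduct GetProduct(int id);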

Creating API that is fluent

How does one go about creating an API that is fluent in nature?
Is this primarily done using extension methods?
This article explains it much better than I ever could.
EDIT, can't squeeze this in a comment...
There are two sides to interfaces: the implementation and the usage. There's more work to be done on the creation side, I agree with that; however, the main benefits are found on the usage side of things. Indeed, for me the main advantage of fluent interfaces is a more natural, easier to remember and use and, why not, more aesthetically pleasing API. And just maybe, the effort of having to squeeze an API into a fluent form may lead to a better thought-out API?
As Martin Fowler says in the original article about fluent interfaces:
Probably the most important thing to notice about this style is that the intent is to do something along the lines of an internal DomainSpecificLanguage. Indeed this is why we chose the term 'fluent' to describe it, in many ways the two terms are synonyms. The API is primarily designed to be readable and to flow. The price of this fluency is more effort, both in thinking and in the API construction itself. The simple API of constructor, setter, and addition methods is much easier to write. Coming up with a nice fluent API requires a good bit of thought.
As APIs are in most cases created once and used over and over again, the extra effort may be worth it.
And verbose? I'm all for verbosity if it serves the readability of a program.
MrBlah,
Though you can write extension methods to write a fluent interface, a better approach is using the builder pattern. I'm in the same boat as you and I'm trying to figure out a few advanced features of fluent interfaces.
Below you'll see some sample code that I created in another thread
public class Coffee
{
    private bool _cream;
    private int _ounces;

    public static Coffee Make { get { return new Coffee(); } }

    public Coffee WithCream()
    {
        _cream = true;
        return this;
    }

    public Coffee WithOuncesToServe(int ounces)
    {
        _ounces = ounces;
        return this;
    }
}
var myMorningCoffee = Coffee.Make.WithCream().WithOuncesToServe(16);
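For comparison with the extension-method route the question asks about, the same chaining could be sketched with extension methods that return the instance; CoffeeData and its members are purely illustrative:
// Illustrative extension-method variant of the chaining above.
public class CoffeeData
{
    public bool Cream { get; set; }
    public int Ounces { get; set; }
}

public static class CoffeeExtensions
{
    public static CoffeeData WithCream(this CoffeeData coffee)
    {
        coffee.Cream = true;
        return coffee;   // returning the instance is what enables chaining
    }

    public static CoffeeData WithOuncesToServe(this CoffeeData coffee, int ounces)
    {
        coffee.Ounces = ounces;
        return coffee;
    }
}

// Usage: var cup = new CoffeeData().WithCream().WithOuncesToServe(16);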
While many people cite Martin Fowler as a prominent exponent in the fluent API discussion, his early design claims actually revolve around a fluent builder pattern, or method chaining. Fluent APIs can be further evolved into actual internal domain-specific languages. An article that explains how a BNF notation of a grammar can be manually transformed into a "fluent API" can be seen here:
http://blog.jooq.org/2012/01/05/the-java-fluent-api-designer-crash-course/
It transforms a grammar (given in the linked article) into this Java API:
// Initial interface, entry point of the DSL
interface Start {
    End singleWord();
    End parameterisedWord(String parameter);
    Intermediate1 word1();
    Intermediate2 word2();
    Intermediate3 word3();
}

// Terminating interface, might also contain methods like execute();
interface End {
    void end();
}

// Intermediate DSL "step" extending the interface that is returned
// by optionalWord(), to make that method "optional"
interface Intermediate1 extends End {
    End optionalWord();
}

// Intermediate DSL "step" providing several choices (similar to Start)
interface Intermediate2 {
    End wordChoiceA();
    End wordChoiceB();
}

// Intermediate interface returning itself on word3(), in order to allow
// for repetitions. Repetitions can be ended any time because this
// interface extends End
interface Intermediate3 extends End {
    Intermediate3 word3();
}
Java and C# being somewhat similar, the example certainly translates to your use case as well. The above technique has been heavily used in jOOQ, a fluent API / internal domain-specific language modelling the SQL language in Java.
This is a very old question, and this answer should probably be a comment rather than an answer, but I think it's a topic worth continuing to talk about, and this response is too long to be a comment.
The original thinking concerning "fluency" seems to have been basically about adding power and flexibility (method chaining, etc) to objects while making code a bit more self-explanatory.
For example
Company a = new Company("Calamaz Holding Corp");
Person p = new Person("Clapper", 113, 24, "Frank");
Company c = new Company(a, "Floridex", p, 1973);
is less "fluent" than
Company c = new Company().Set
    .Name("Floridex")
    .Manager(
        new Person().Set.FirstName("Frank").LastName("Clapper").Awards(24)
    )
    .YearFounded(1973)
    .ParentCompany(
        new Company().Set.Name("Calamaz Holding Corp")
    );
But to me, the latter is not really any more powerful or flexible or self-explanatory than
Company c = new Company(){
    Name = "Floridex",
    Manager = new Person(){ FirstName="Frank", LastName="Clapper", Awards=24 },
    YearFounded = 1973,
    ParentCompany = new Company(){ Name="Calamaz Holding Corp." }
};
...in fact I would call this last version easier to create, read and maintain than the previous one, and I would say it requires significantly less baggage behind the scenes as well. That, to me, is important for (at least) two reasons:
1 - The cost associated with creating and maintaining layers of objects (no matter who does it) is just as real, relevant and important as the cost associated with creating and maintaining the code that consumes them.
2 - Code bloat embedded in layers of objects creates just as many (if not more) problems as code bloat in the code that consumes those objects.
Using the last version means you can add a (potentially useful) property to the Company class simply by adding one, very simple line of code.
That's not to say that I feel there's no place for method chaining. I really like being able to do things like (in JavaScript)
var _this = this;
Ajax.Call({
    url: '/service/getproduct',
    parameters: {productId: productId}
})
.Done(
    function(product){
        _this.showProduct(product);
    }
)
.Fail(
    function(error){
        _this.presentError(error);
    }
);
...where (in the hypothetical case I'm imagining) Done and Fail were additions to the original Ajax object, able to be added without changing any of the original Ajax object code or any of the existing code that made use of the original Ajax object, and without creating one-off things that are exceptions to the general organization of the code.
So I have definitely found value in making a subset of an object's functions return the 'this' object. In fact, whenever I have a function that would otherwise return void, I consider having it return this.
But I haven't yet really found significant value in adding "fluent interfaces" (e.g. "Set") to an object, although theoretically it seems like a sort of namespace-like code organization could arise out of that practice, which might be worthwhile. ("Set" might not be particularly valuable, but "Command", "Query" and "Transfer" might be, if they helped organize things and facilitate and minimize the impact of additions and changes.) One of the potential benefits of such a practice, depending on how it was done, might be an improvement in a coder's typical level of care and attention to protection levels - the lack of which has certainly caused great volumes of grief.
KISS: Keep it simple, stupid.
Fluent design is about one aesthetic design principle used throughout the API. Though the methodology you use in your API can change slightly, it is generally better to stay consistent.
Even though you may think 'everyone can use this API, because it uses all different types of methodologies', the truth is that users will start feeling lost because you keep changing the structure of the API to a new design principle or naming convention.
If you wish to change halfway through to a different design principle, e.g. converting from error codes to exception handling because of some higher commanding power, it would be folly and would normally entail lots of pain. It is better to stay the course and add functionality that your customers can use and sell than to get them to rewrite and rediscover all their problems again.
Following from the above, you can see that there is more at work in writing a fluent API than meets the eye. There are psychological and aesthetic choices to make before beginning to write one, and even then the need to conform to customers' demands while staying consistent is the hardest part of all.
What is a fluent API
Wikipedia defines them here http://en.wikipedia.org/wiki/Fluent_interface
Why not to use a fluent interface
I would suggest not implementing a traditional fluent interface, as it increases the amount of code you need to write, complicates your code, and just adds unnecessary boilerplate.
Another option: do nothing!
Don't implement anything. Don't provide "easy" constructors for setting properties and don't provide a clever interface to help your client. Allow the client to set the properties however they normally would. In .NET C# or VB this can be as simple as using object initializers.
Car myCar = new Car { Name = "Chevrolet Corvette", Color = Color.Yellow };
So you don't need to create any clever interface in your code, and this is very readable.
If you have a very complex set of properties which must be set, or set in a certain order, then use a separate configuration object and pass it to the class via a separate property.
CarConfig conf = new CarConfig { Color = Color.Yellow, Fabric = Fabric.Leather };
Car myCar = new Car { Config = conf };
No and yes. The basics are a good interface or interfaces for the types that you want to behave fluently. Libraries with extension methods can extend this behavior and return the interface. Extension methods give others the possibility to extend your fluent API with more methods.
A good fluent design can be hard and takes a rather long trial-and-error period to totally fine-tune the basic building blocks. Just a fluent API for configuration or setup is not that hard.
One learns to build a fluent API by looking at existing APIs. Compare FluentNHibernate with the fluent .NET APIs or the ICriteria fluent interfaces. Many configuration APIs are also designed "fluently".
With a fluent API:
myCar.SetColor(Color.Blue).SetName("Aston Martin");
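A minimal sketch of what such a chainable Car might look like (the method names come from the line above; everything else is assumed):
using System.Drawing;

public class Car
{
    public Color Color { get; private set; }
    public string Name { get; private set; }

    public Car SetColor(Color color)
    {
        Color = color;
        return this;   // return 'this' so calls can be chained
    }

    public Car SetName(string name)
    {
        Name = name;
        return this;
    }
}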
Check out this video http://www.viddler.com/explore/dcazzulino/videos/8/
Writing a fluent API is complicated; that's why I've written Diezel, a fluent API generator for Java. It generates the API with interfaces (of course) to:
control the calling flow
capture generic types (like the Guice ones)
It also generates implementations.
It's a Maven plugin.
I think the answer depends on the behaviour you want to achieve with your fluent API. For stepwise initialization the easiest way is, in my opinion, to create a builder class that implements different interfaces used for the different steps. E.g. if you have a class Student with the properties Name, DateOfBirth and Semester, the builder implementation could look like this:
public class CreateStudent : CreateStudent.IBornOn, CreateStudent.IInSemester
{
    private readonly Student student;

    private CreateStudent()
    {
        student = new Student();
    }

    public static IBornOn WithName(string name)
    {
        CreateStudent createStudent = new CreateStudent();
        createStudent.student.Name = name;
        return createStudent;
    }

    public IInSemester BornOn(DateOnly dateOfBirth)
    {
        student.DateOfBirth = dateOfBirth;
        return this;
    }

    public Student InSemester(int semester)
    {
        student.Semester = semester;
        return student;
    }

    public interface IBornOn
    {
        IInSemester BornOn(DateOnly dateOfBirth);
    }

    public interface IInSemester
    {
        Student InSemester(int semester);
    }
}
The builder can then be used as follows:
Student student = CreateStudent.WithName("Robert")
.BornOn(new DateOnly(2002, 8, 3)).InSemester(2);
Admittedly, writing such an API for more than three properties becomes tedious. For this reason I have implemented a source generator that can do this work for you: M31.FluentAPI.
