When working with Domain Objects, how do you typically unit test a method that calls another method in the object? For example:
public class Invoice
{
public IList<InvoiceLine> InvoiceLines;
public string Company;
public bool IsDiscounted;
public DateTime InvoiceDate;
//...
public decimal GetTotalAmt()
{
decimal total = 0;
foreach (InvoiceLine il in InvoiceLines)
{
total += il.Qty * il.Price;
}
return total;
}
public decimal GetExtendedTotalAmt()
{
decimal discount = 0;
if (IsDiscounted)
discount = .05M;
return GetTotalAmt() * (1 - discount);
}
}
Unit testing GetTotalAmt() is easy, but with GetExtendedTotalAmt() I'd have to use stub/mock InvoiceLine objects to make it work, when all I really want to do is test that a discount is applied if the IsDiscounted flag is true.
How do other people handle this? I don't think it makes sense to split up the domain object since these methods are both considered part of the core Invoice functionality (and splitting it would likely cause developers to call the wrong method more often).
Thanks!
You could make the GetTotalAmt method virtual and then create a partial mock (for example with Rhino Mocks):
var sut = new MockRepository().PartialMock<Invoice>();
sut.Expect(x => x.GetTotalAmt()).Return(10);
sut.Replay();
sut.IsDiscounted = true;
var result = sut.GetExtendedTotalAmt(); // 9.5M: the stubbed total with the 5% discount applied
I would build up a situation which is as simple as possible: only one InvoiceLine with a Quantity and Price of 1.
Something like this:
var invoice = new Invoice { IsDiscounted = true };
invoice.Add(new InvoiceLine(new Article("blah", 1M), 1));
Assert.AreEqual(0.95M, invoice.GetExtendedTotalAmt());
When you find that this stuff gets quite complicated, finding errors gets hard, etc., then it is a sign that you should split the class (making the calculations on the invoice a strategy or something similar). But as long as it is as simple as your piece of code here, I wouldn't worry about it.
I currently have a model class that contains several properties. A simplified model could look like this:
public class SomeClass
{
public DateTime ValidFrom { get; set; }
public DateTime ExpirationDate { get; set; }
}
Now I'm implementing some unit tests by using NUnit and use AutoFixture to create some random data:
[Test]
public void SomeTest()
{
var fixture = new Fixture();
var someRandom = fixture.Create<SomeClass>();
}
This works perfectly so far. But there is the requirement that the date of ValidFrom is always before ExpirationDate. I have to ensure this since I'm implementing some positive tests.
So is there an easy way to implement this by using AutoFixture? I know I could create a fix date and add a random date interval to solve this, but it would be great if AutoFixture could handle this requirement itself.
I haven't got a lot of experience with AutoFixture, but I know I can get an ICustomizationComposer by calling the Build method:
var fixture = new Fixture();
var someRandom = fixture.Build<SomeClass>()
.With(some => /*some magic like some.ValidFrom < some.ExpirationDate here...*/ )
.Create();
Maybe this is the right way to achieve this?
Thanks in advance for any help.
It may be tempting to ask how do I make AutoFixture adapt to my design?, but often a more interesting question is: how do I make my design more robust?
You can keep the design and 'fix' AutoFixture, but I don't think it's a particularly good idea.
Before I tell you how to do that, depending on your requirements, perhaps all you need to do is the following.
Explicit assignment
Why not simply assign a valid value to ExpirationDate, like this?
var sc = fixture.Create<SomeClass>();
sc.ExpirationDate = sc.ValidFrom + fixture.Create<TimeSpan>();
// Perform test here...
If you're using AutoFixture.Xunit, it can be even simpler:
[Theory, AutoData]
public void ExplicitPostCreationFix_xunit(
SomeClass sc,
TimeSpan duration)
{
sc.ExpirationDate = sc.ValidFrom + duration;
// Perform test here...
}
This is fairly robust, because even though AutoFixture (IIRC) creates random TimeSpan values, they'll stay in the positive range unless you've done something to your fixture to change its behaviour.
This approach would be the simplest way to address your question if you need to test SomeClass itself. On the other hand, it's not very practical if you need SomeClass as input values in myriads of other tests.
In such cases, it can be tempting to fix AutoFixture, which is also possible:
Changing AutoFixture's behaviour
Now that you've seen how to address the problem as a one-off solution, you can tell AutoFixture about it as a general change of the way SomeClass is generated:
fixture.Customize<SomeClass>(c => c
.Without(x => x.ValidFrom)
.Without(x => x.ExpirationDate)
.Do(x =>
{
x.ValidFrom = fixture.Create<DateTime>();
x.ExpirationDate =
x.ValidFrom + fixture.Create<TimeSpan>();
}));
// All sorts of other things can happen in between, and the
// statements above and below can happen in separate classes, as
// long as the fixture instance is the same...
var sc = fixture.Create<SomeClass>();
You can also package the above call to Customize in an ICustomization implementation, for further reuse. This would also enable you to use a customized Fixture instance with AutoFixture.Xunit.
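As a minimal sketch of such a packaging (the class name ValidPeriodCustomization is made up for this example):
public class ValidPeriodCustomization : ICustomization
{
    public void Customize(IFixture fixture)
    {
        fixture.Customize<SomeClass>(c => c
            .Without(x => x.ValidFrom)
            .Without(x => x.ExpirationDate)
            .Do(x =>
            {
                x.ValidFrom = fixture.Create<DateTime>();
                x.ExpirationDate = x.ValidFrom + fixture.Create<TimeSpan>();
            }));
    }
}
// Reuse it wherever a Fixture is created:
var fixture = new Fixture().Customize(new ValidPeriodCustomization());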
Change the design of the SUT
While the above solutions describe how to change the behaviour of AutoFixture, AutoFixture was originally written as a TDD tool, and the main point of TDD is to provide feedback about the System Under Test (SUT). AutoFixture tends to amplify that sort of feedback, which is also the case here.
Consider the design of SomeClass. Nothing prevents a client from doing something like this:
var sc = new SomeClass
{
ValidFrom = new DateTime(2015, 2, 20),
ExpirationDate = new DateTime(1900, 1, 1)
};
This compiles and runs without errors, but is probably not what you want. Thus, AutoFixture is actually not doing anything wrong; SomeClass isn't properly protecting its invariants.
This is a common design mistake, where developers tend to put too much trust into the semantic information of the members' names. The thinking seems to be that no-one in their right mind would set ExpirationDate to a value before ValidFrom! The problem with that sort of argument is that it assumes that all developers will always be assigning these values in pairs.
However, clients may also get a SomeClass instance passed to them, and want to update one of the values, e.g.:
sc.ExpirationDate = new DateTime(2015, 1, 31);
Is this valid? How can you tell?
The client could look at sc.ValidFrom, but why should it? The whole purpose of encapsulation is to relieve clients of such burdens.
Instead, you should consider changing the design of SomeClass. The smallest design change I can think of is something like this:
public class SomeClass
{
public DateTime ValidFrom { get; set; }
public TimeSpan Duration { get; set; }
public DateTime ExpirationDate
{
get { return this.ValidFrom + this.Duration; }
}
}
This turns ExpirationDate into a read-only, calculated property. With this change, AutoFixture just works out of the box:
var sc = fixture.Create<SomeClass>();
// Perform test here...
You can also use it with AutoFixture.Xunit:
[Theory, AutoData]
public void ItJustWorksWithAutoFixture_xunit(SomeClass sc)
{
// Perform test here...
}
This is still a little brittle, because although by default, AutoFixture creates positive TimeSpan values, it's possible to change that behaviour as well.
Furthermore, the design actually allows clients to assign negative TimeSpan values to the Duration property:
sc.Duration = TimeSpan.FromHours(-1);
Whether or not this should be allowed is up to the Domain Model. Once you begin to consider this possibility, it may actually turn out that defining time periods that move backwards in time is valid in the domain...
Design according to Postel's Law
If the problem domain is one where going back in time isn't allowed, you could consider adding a Guard Clause to the Duration property, rejecting negative time spans.
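A sketch of what that guard could look like on the mutable design (the exception choice is just one reasonable option):
public class SomeClass
{
    private TimeSpan duration;

    public DateTime ValidFrom { get; set; }

    public TimeSpan Duration
    {
        get { return this.duration; }
        set
        {
            // Guard Clause: reject time spans that move backwards in time.
            if (value < TimeSpan.Zero)
                throw new ArgumentOutOfRangeException(
                    "value", "Duration must not be negative.");
            this.duration = value;
        }
    }

    public DateTime ExpirationDate
    {
        get { return this.ValidFrom + this.Duration; }
    }
}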
However, personally, I often find that I arrive at a better API design when I take Postel's Law seriously. In this case, why not change the design so that SomeClass always uses the absolute TimeSpan instead of the signed TimeSpan?
In that case, I'd prefer an immutable object that doesn't enforce the roles of two DateTime instances until it knows their values:
public class SomeClass
{
private readonly DateTime validFrom;
private readonly DateTime expirationDate;
public SomeClass(DateTime x, DateTime y)
{
if (x < y)
{
this.validFrom = x;
this.expirationDate = y;
}
else
{
this.validFrom = y;
this.expirationDate = x;
}
}
public DateTime ValidFrom
{
get { return this.validFrom; }
}
public DateTime ExpirationDate
{
get { return this.expirationDate; }
}
}
Like the previous redesign, this just works out of the box with AutoFixture:
var sc = fixture.Create<SomeClass>();
// Perform test here...
The situation is the same with AutoFixture.Xunit, but now no clients can misconfigure it.
Whether or not you find such a design appropriate is up to you, but I hope at least it's food for thought.
This is a kind of "extended comment" in reference to Mark's answer, trying to build on his Postel's Law solution. The parameter swapping in the constructor felt uneasy for me, so I've made the date swapping behaviour explicit in a Period class.
Using C#6 syntax for brevity:
public class Period
{
public DateTime Start { get; }
public DateTime End { get; }
public Period(DateTime start, DateTime end)
{
if (start > end) throw new ArgumentException("start should be before end");
Start = start;
End = end;
}
public static Period CreateSpanningDates(DateTime x, DateTime y, params DateTime[] others)
{
var all = others.Concat(new[] { x, y });
var start = all.Min();
var end = all.Max();
return new Period(start, end);
}
}
public class SomeClass
{
public DateTime ValidFrom { get; }
public DateTime ExpirationDate { get; }
public SomeClass(Period period)
{
ValidFrom = period.Start;
ExpirationDate = period.End;
}
}
You would then need to customize your fixture for Period to use the static factory method:
fixture.Customize<Period>(f =>
f.FromFactory<DateTime, DateTime>((x, y) => Period.CreateSpanningDates(x, y)));
I think the main benefit of this solution is that it extracts the time-ordering requirement into its own class (SRP) and leaves your business logic to be expressed in terms of an already-agreed contract, apparent from the constructor signature.
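For illustration, calling code then composes the two explicitly:
var period = Period.CreateSpanningDates(
    new DateTime(2015, 1, 1), new DateTime(2015, 6, 30));
var sc = new SomeClass(period);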
Since SomeClass is mutable, here's one way of doing it:
[Fact]
public void UsingGeneratorOfDateTime()
{
var fixture = new Fixture();
var generator = fixture.Create<Generator<DateTime>>();
var sut = fixture.Create<SomeClass>();
var seed = fixture.Create<int>();
sut.ExpirationDate =
generator.First().AddYears(seed);
sut.ValidFrom =
generator.TakeWhile(dt => dt < sut.ExpirationDate).First();
Assert.True(sut.ValidFrom < sut.ExpirationDate);
}
FWIW, using AutoFixture with xUnit.net data theories, the above test can be written as:
[Theory, AutoData]
public void UsingGeneratorOfDateTimeDeclaratively(
Generator<DateTime> generator,
SomeClass sut,
int seed)
{
sut.ExpirationDate =
generator.First().AddYears(seed);
sut.ValidFrom =
generator.TakeWhile(dt => dt < sut.ExpirationDate).First();
Assert.True(sut.ValidFrom < sut.ExpirationDate);
}
I like to write many small getter properties that describe exactly what they mean.
However, this can lead to repeated, expensive calculations.
Every way I know to avoid this makes the code less readable.
A pseudocode example:
ctor(decimal grossAmount, taxRateCalculator, itemCategory)
{
// store the ctor args as member variables
}
public decimal GrossAmount { get { return _grossAmount; } }
private decimal TaxRate { get { return _taxRateCalculator.GetTaxRateFor(_itemCategory); } } // expensive calculation
public decimal TaxAmount { get { return GrossAmount * TaxRate; } }
public decimal NetAmount { get { return GrossAmount - TaxAmount; } }
In this example it is very obvious what each of the properties does, because they are simple accessors. TaxRate has been refactored into its own property so that it, too, is obvious what it does. If the _taxRateCalculator operation is very expensive, how can I avoid repeated execution without junking up the code?
In my real scenario I might have ten fields that would need to be treated this way, so ten sets of _backing or Lazy fields would be ugly.
Cache the value
private decimal? _taxRate = null;
...
private decimal TaxRate
{
get
{
if (!this._taxRate.HasValue) {
this._taxRate = _taxRateCalculator.GetTaxRateFor(_itemCategory);
}
return this._taxRate.Value;
}
}
You could create a method which recalculates your values every time you call it.
The advantages of this solution are:
you can recalculate your properties manually
you can implement some sort of event listener which could easily trigger the recalculation any time something happens
ctor(decimal grossAmount, taxRateCalculator, itemCategory)
{
// store the ctor args as member variables
Recalculate();
}
public decimal GrossAmount { get; private set; }
public decimal TaxAmount { get; private set; }
public decimal NetAmount { get; private set; }
public void Recalculate()
{
// expensive calculation
var taxRate = _taxRateCalculator.GetTaxRateFor(_itemCategory);
GrossAmount = _grossAmount;
TaxAmount = GrossAmount * taxRate;
NetAmount = GrossAmount - TaxAmount;
}
Profile. Find out where the application is spending most of the time, and you'll have places to improve. Focus on the worst offenders, there's no point in performance optimizing something unless it has significant impact on the user experience.
When you identify the hotspots, you've got a decision to make - is the optimization worth the costs? If it is, go ahead. Having a backing field to store an expensive calculation is quite a standard way of avoiding repeated expensive calculations. Just make sure to only apply it where it matters, to keep the code simple.
This really is a common practice, and as long as you make sure you're consistent (e.g. if some change could change the tax rate, you'd want to invalidate the stored value to make sure it gets recalculated), you're going to be fine. Just make sure you're fixing a real performance problem, rather than just going after some performance ideal =)
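As a sketch of that invalidation idea, building on the nullable backing field from the "Cache the value" answer above (the ItemCategory type and the setter are hypothetical):
private decimal? _taxRate;
private ItemCategory _itemCategory;

public void SetItemCategory(ItemCategory itemCategory)
{
    _itemCategory = itemCategory;
    _taxRate = null; // invalidate the cache; the next read of TaxRate recalculates
}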
I don't think there is a way. You want to cache data without a cache. You could build caching functionality into your GetTaxRateFor(…) method, but I doubt that this is the only expensive method you call.
So the shortest way would be a backing field. What I usually do is this:
private decimal? _backingField;
public decimal Property { get { return _backingField ?? (_backingField = ExpensiveMethod()).Value; } }
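The Lazy<T> route mentioned in the question is an alternative; it still costs one extra field per property, but the factory delegate runs at most once and is thread-safe by default. A sketch, with the Pricing class and calculator type made up to keep it self-contained:
public class Pricing
{
    private readonly Lazy<decimal> _taxRate;

    public Pricing(TaxRateCalculator taxRateCalculator, string itemCategory)
    {
        // The delegate is not invoked until _taxRate.Value is first read.
        _taxRate = new Lazy<decimal>(
            () => taxRateCalculator.GetTaxRateFor(itemCategory));
    }

    private decimal TaxRate { get { return _taxRate.Value; } }
}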
I have the following basic entities:
public class Basket
{
public List<Product> Products {get;set;}
}
public class Product
{
public string Name {get;set;}
public decimal Price {get;set;}
}
And I want to get a list of all products in a basket that are below a fixed price. Should the logic for this go in the Basket, like so:
public class Basket
{
public List<Product> Products {get;set;}
public List<Product> CheapProducts
{
get { return Products.Where(p => p.Price < 5).ToList(); }
}
}
Or should it go in a service class, ProductFilterer, which would take the entire list of products as a parameter and would return a filtered list of products. Or maybe it should just go straight into the method of the calling class?
Or something else? What is the best practice for this?
What I would do is check with a domain expert whether the notion of a "cheap product" is a first-class domain concept that has to be introduced into the ubiquitous language.
If this is the case, Steve's Specification solution solves your problem in an elegant way.
If cheapness is unimportant or not as clearly defined as that (for instance if the cheapness threshold varies across the application), I wouldn't bother creating a specific entity for it and just filter Basket.Products with the relevant criteria when needed in calling code.
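For example (the threshold variable is illustrative):
var cheapProducts = basket.Products.Where(p => p.Price < cheapnessThreshold).ToList();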
You might consider looking into the Specification Pattern. The link has a good example implementation, but in short, the pattern allows you to create complex selection criteria based on simple predicates (or specifications).
A quick (and incomplete) implementation of such a pattern using delegates could be done as such:
public class Specification<T>
{
Func<T, bool> _spec;
public Specification(Func<T, bool> spec)
{
_spec = spec;
}
public bool IsSatisfiedBy(T item)
{
return _spec(item);
}
}
// ...
_cheapProductsSpecification = new Specification<Product>(p => p.Price < 5);
var cheapProducts = Basket.Products.Where(p => _cheapProductsSpecification.IsSatisfiedBy(p));
This is, of course, a simple and probably redundant example, but if you add in And, Or, and Not (see the link), you can build complex business logic into specification variables.
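As a sketch, And and Not could be added to the Specification<T> class above like this (Or follows the same shape; the variables in the usage line are illustrative):
public Specification<T> And(Specification<T> other)
{
    return new Specification<T>(
        item => this.IsSatisfiedBy(item) && other.IsSatisfiedBy(item));
}

public Specification<T> Not()
{
    return new Specification<T>(item => !this.IsSatisfiedBy(item));
}

// Usage: compose simple predicates into richer business rules.
var cheapAndDiscontinued = cheapSpec.And(discontinuedSpec);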
Yes, I would suggest keeping your DTOs separate from the business logic. I like to think of the data objects as a completely separate layer from the data access, business, and UI layers. If you had a more general ProductBusiness class, I would recommend just putting it in there, unless it's really useful to have a separate filterer class.
Your Basket class should not know how to filter directly; it is correct for it to have an exposed method that returns the results from a ProductFilter, as you suggested. The code should look something like this:
class ProductFilter
{
    public List<Product> FilterCheapProducts(IEnumerable<Product> productsToFilter)
    {
        return productsToFilter.Where(p => p.Price < 5).ToList();
    }
}
class Basket
{
    private readonly ProductFilter filter = new ProductFilter();
    public List<Product> Products { get; set; }
    public List<Product> GetCheapProducts()
    {
        return filter.FilterCheapProducts(this.Products);
    }
}
I just need a bit of feedback regarding a problem I am trying to solve...
Here is a description of the problem :
My company sells some products for which the customer can pay over a certain period of time. The customers are classed as existing or new. In order to let the customers buy the product, we check their creditworthiness, and on occasion a customer can be asked to deposit a bond, which is refundable. Some customers have a good payment history with us, so we don't need to charge them a bond amount. In order to implement the assessment, I have designed a solution as follows:
public interface ICreditAssessor
{
CreditAssessment Process();
Decimal CalculateBond(BondCalculator bc);
}
Two classes are defined which implement this interface.
public class GoodClientProcessor : ICreditAssessor {
..... methods
}
public class OtherClientProcessor : ICreditAssessor {
..... methods
}
There is a class which returns the appropriate processor depending on whether the customer has a good payment history with us or not.
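A sketch of what that factory might look like (the bool criterion is an assumption; the question only shows a "somecriteria" argument):
public static class CreditAssessorFactory
{
    public static ICreditAssessor GetAssessor(bool hasGoodPaymentHistory)
    {
        return hasGoodPaymentHistory
            ? (ICreditAssessor)new GoodClientProcessor()
            : new OtherClientProcessor();
    }
}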
Also, I have implemented a BondCalculator as follows:
public class BondCalculator
{
List<IRiskEvaluator> riskEvaluators;
public BondCalculator()
{
riskEvaluators = new List<IRiskEvaluator>();
}
public Decimal GetSuggestedBond()
{
Decimal riskAmount = 0;
foreach (IRiskEvaluator ire in riskEvaluators)
{
Decimal tempRisk = ire.EvaluateRisk();
if (tempRisk > riskAmount)
{
riskAmount = tempRisk;
}
}
return riskAmount;
}
public void SetRiskEvaluator(IRiskEvaluator re)
{
this.riskEvaluators.Add(re);
}
}
Interface IRiskEvaluator is as follows:
public interface IRiskEvaluator
{
Decimal EvaluateRisk();
}
The two classes implementing this interface are as follows:
public class FinancialRiskEvaluator : IRiskEvaluator
{
Decimal IRiskEvaluator.EvaluateRisk()
{
... calculate risk amount
}
}
and
public class ProductRiskEvaluator : IRiskEvaluator
{
Decimal IRiskEvaluator.EvaluateRisk()
{
... calculate risk amount
}
}
Now calling all this is done via a method. The relevant code is as below:
ICreditAssessor creditAssessor = CreditAssessorFactory.GetAssessor(somecriteria);
CreditAssessment assessment = creditAssessor.Process();
// ...
BondCalculator bc = new BondCalculator();
bc.SetRiskEvaluator(new FinancialRiskEvaluator(xmlResults));
bc.SetRiskEvaluator(new ProductRiskEvaluator(productCost));
creditAssessor.CalculateBond(bc);
Is this design OK, or can it be improved any further? One issue I see is that, since customers with a good payment history do not need a bond, I still have to call the CalculateBond method and return 0 for the bond value. This somehow does not feel right. Can this somehow be improved upon? Any comments/suggestions are appreciated.
You could add a boolean BondRequired property to make the intent explicit, rather than depending on people to infer that "a bond of zero doesn't make much sense; the developer must have intended that result to represent no bond at all."
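A minimal sketch of that idea, assuming the flag lives on the CreditAssessment result (the question doesn't show that type's members):
public class CreditAssessment
{
    public bool BondRequired { get; set; }
    public decimal BondAmount { get; set; }

    // Callers can branch on intent instead of interpreting a magic zero:
    // if (assessment.BondRequired) { /* collect assessment.BondAmount */ }
}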
However, I agree with Magnum that this is already more complicated than seems necessary, so adding more members to the type may not be the best thing to do.
I have a C# function that does some processing and needs to return figures about what it has processed, success, failure, etc.
Can anyone tell me if there are any standard framework classes that will handle this (for example, number of records processed, number successful, number failed, etc.)?
Alternatively, can anyone say how they usually handle this? Is the best way to create a class, or to use out parameters?
I don't think there is any standard class for representing this kind of information. The best way to handle this is probably to define your own class. In C# 3.0, you can use automatically implemented properties, which reduce the amount of code you need to write:
class Results {
public double Sum { get; set; }
public double Count { get; set; }
}
I think out parameters should be used only in relatively limited scenarios, such as when defining methods similar to Int32.TryParse.
Alternatively, if you need to use this only locally (e.g. you call the function from only one place, so it is not worth declaring a new class to hold the results), you could use the Tuple<..> class in .NET 4.0. It allows you to group values of several different types. For example, Tuple<double, double, double> would have properties Item1 ... Item3 of type double that you can use to store individual results (this makes the code less readable, which is why it is usable only locally).
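For example, a sketch using counts (the Record type and the TryProcess helper are stand-ins for your own processing logic):
static Tuple<int, int, int> ProcessRecords(IEnumerable<Record> records)
{
    int processed = 0, succeeded = 0, failed = 0;
    foreach (var record in records)
    {
        processed++;
        if (TryProcess(record)) succeeded++; else failed++; // TryProcess: your per-record work
    }
    return Tuple.Create(processed, succeeded, failed);
}

// Item1 = processed, Item2 = succeeded, Item3 = failed
var counts = ProcessRecords(records);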
I don't think there are any built-in classes for that. Usually I will create my own class to accommodate the kind of result you were talking about, and yes, I prefer a Result class over out parameters, simply because I feel it's cleaner and I'm not forced to prepare variables and type in the out parameters every time I need to call that function.
I don't know if there is a known framework for this, but it's a common practice in all the new projects I'm involved in now.
It seems to be good practice to have a custom result class in order to give the caller the proper results and/or a result object.
Simple generic result class:
public partial class GenericResult
{
public IList<string> Errors { get; set; }
public decimal Value { get; set; }
public GenericResult()
{
this.Errors = new List<string>();
}
public bool Success
{
get { return (this.Errors.Count == 0); }
}
public void AddError(string error)
{
this.Errors.Add(error);
}
}
A method that uses the previous class as the return type:
public GenericResult CanDivideNumber(int a, int b)
{
GenericResult result = new GenericResult();
try
{
result.Value = a / b;
}
catch (Exception ex)
{
result.AddError(ex.ToString());
}
return result;
}
Usage example:
var result = CanDivideNumber(1, 0);
// Was the operation a success?
// result.Success
// Need to get error details?
// result.Errors
// The processing result?
// result.Value
11 years later, I can say that it is best to adopt the Result pattern with a result class if you are going this route. Using a tuple, Task<Tuple<Status, T>>, where Status might be an enum {Success, Failure, Exception} and T is a generic type for the data returned, works decently, but is a little less readable than implementing and returning a Result class.
Now, you still get into Task<Result<T1,T2,T3,T4,...>> expansion with Result having an embedded Status enum. Personally, I like to still have try-catch blocks that log/report my errors and suppress them. I then bubble up the result to, in my case, a controller where status codes are properly returned.
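A small sketch of what such a Result<T> might look like (the member names are illustrative, not from any particular library):
public enum Status { Success, Failure, Exception }

public class Result<T>
{
    public Status Status { get; private set; }
    public T Data { get; private set; }
    public string Error { get; private set; }

    public static Result<T> Ok(T data)
    {
        return new Result<T> { Status = Status.Success, Data = data };
    }

    public static Result<T> Fail(Status status, string error)
    {
        return new Result<T> { Status = status, Error = error };
    }
}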
I don't think there is anything standard that will do this for you.
A great example of using out parameters is the TryParse functions.
If you have two values to return, use one as the return value and one as an out parameter.
If you have any more, create your own return value class and return that.
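A sketch of that shape (the method and parameter names are made up; it echoes the TryParse style):
// Primary result as the return value, a secondary figure as an out parameter.
public static bool TryProcessAll(IEnumerable<string> inputs, out int failureCount)
{
    failureCount = 0;
    foreach (var input in inputs)
    {
        int value;
        if (!int.TryParse(input, out value)) failureCount++;
    }
    return failureCount == 0;
}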