I want to be able to generate distinct values based on an ICustomization using CreateMany. I was wondering what the best solution would be, since AutoFixture generates the same values for all entities.
public class FooCustomization : ICustomization
{
    public void Customize(IFixture fixture)
    {
        var specimen = fixture.Build<Foo>()
            .OmitAutoProperties()
            .With(x => x.CreationDate, DateTime.Now)
            .With(x => x.Identifier, Guid.NewGuid().ToString().Substring(0, 6)) // Should generate distinct values
            .With(x => x.Mail, "contactme@mail.com")
            .With(x => x.Code) // Should generate distinct values
            .Create();
        fixture.Register(() => specimen);
    }
}
I've read this, and that might be what I'm looking for. That approach has several drawbacks, though. First, it seems really counter-intuitive to call Create<List<Foo>>(), because it somewhat defeats the purpose of CreateMany<Foo>; wouldn't that generate a hardcoded-size List<List<Foo>>? Another drawback is that I'd need two customizations for each entity: one to create custom collections and another to create a single instance, since we'd be overriding the behaviour of Create<T> to create a collection.
P.S.: The main goal is to reduce the amount of code in my tests, so I want to avoid calling With() to customize the values in every test. Is there a proper way of doing this?
In general, a question like this triggers some alarm bells with me, but I'll give an answer first, and then save the admonitions till the end.
If you want distinct values, relying on randomness wouldn't be my first choice. The problem with randomness is that sometimes, a random process will pick (or produce) the same value twice in a row. Obviously, this depends on the range from which one wants to pick, but even if we consider that something like Guid.NewGuid().ToString().Substring(0, 6) would be sufficiently unique for our use case, someone could come along later and change it to Guid.NewGuid().ToString().Substring(0, 3) because it turned out that requirements changed.
Then again, relying on Guid.NewGuid() alone would be good enough to ensure uniqueness...
If I interpret the situation here correctly, though, Identifier must be a short string, which means that you can't use Guid.NewGuid() directly.
In such cases, I'd rather guarantee uniqueness by creating a pool of values from which you can draw:
public class RandomPool<T>
{
    private readonly Random rnd;
    private readonly List<T> items;

    public RandomPool(params T[] items)
    {
        this.rnd = new Random();
        this.items = items.ToList();
    }

    public T Draw()
    {
        if (!this.items.Any())
            throw new InvalidOperationException("Pool is empty.");

        var idx = this.rnd.Next(this.items.Count);
        var item = this.items[idx];
        this.items.RemoveAt(idx);
        return item;
    }
}
This generic class is just a proof of concept. If you wish to draw from a big pool, having to initialise it with, say, millions of values could be inefficient, but in that case, you could change the implementation so that the object starts with an empty list of values, and then add each randomly generated value to a list of 'used' objects each time Draw is called.
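That inverted approach could be sketched like this (a hypothetical variant, not part of the code above; it generates six-character identifiers on demand and records them so no value is returned twice):

```csharp
using System;
using System.Collections.Generic;

// Sketch: instead of pre-filling a pool, generate values lazily and
// remember every value already handed out.
public class UniqueStringPool
{
    private readonly HashSet<string> used = new HashSet<string>();

    public string Draw()
    {
        while (true)
        {
            var candidate = Guid.NewGuid().ToString("N").Substring(0, 6);
            // HashSet<T>.Add returns false if the value was already drawn,
            // in which case we simply try again.
            if (this.used.Add(candidate))
                return candidate;
        }
    }
}
```

This keeps memory proportional to the number of values actually drawn, rather than to the size of the whole range.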
To create unique identifiers for Foo, you can customise AutoFixture. There are many ways to do this; here's one using an ISpecimenBuilder:
public class UniqueIdentifierBuilder : ISpecimenBuilder
{
    private readonly RandomPool<string> pool;

    public UniqueIdentifierBuilder()
    {
        this.pool = new RandomPool<string>("foo", "bar", "baz", "cux");
    }

    public object Create(object request, ISpecimenContext context)
    {
        var pi = request as PropertyInfo;
        if (pi == null || pi.PropertyType != typeof(string) || pi.Name != "Identifier")
            return new NoSpecimen();

        return this.pool.Draw();
    }
}
Add this to a Fixture object, and it'll create Foo objects with unique Identifier properties until the pool runs dry:
[Fact]
public void CreateTwoFooObjectsWithDistinctIdentifiers()
{
    var fixture = new Fixture();
    fixture.Customizations.Add(new UniqueIdentifierBuilder());

    var f1 = fixture.Create<Foo>();
    var f2 = fixture.Create<Foo>();

    Assert.NotEqual(f1.Identifier, f2.Identifier);
}

[Fact]
public void CreateManyFooObjectsWithDistinctIdentifiers()
{
    var fixture = new Fixture();
    fixture.Customizations.Add(new UniqueIdentifierBuilder());

    var foos = fixture.CreateMany<Foo>();

    Assert.Equal(
        foos.Select(f => f.Identifier).Distinct(),
        foos.Select(f => f.Identifier));
}

[Fact]
public void CreateListOfFooObjectsWithDistinctIdentifiers()
{
    var fixture = new Fixture();
    fixture.Customizations.Add(new UniqueIdentifierBuilder());

    var foos = fixture.Create<IEnumerable<Foo>>();

    Assert.Equal(
        foos.Select(f => f.Identifier).Distinct(),
        foos.Select(f => f.Identifier));
}
All three tests pass.
All that said, though, I wish to add some words of caution. I don't know what your particular scenario is, but I'm also writing these warnings to other readers who may happen by this answer at a later date.
What's the motivation for wanting unique values?
There can be several, and I can only speculate. Sometimes, you need values to be truly unique, for example when you're modelling domain Entities, and you need each Entity to have a unique ID. In cases like that, I think this is something that should be modelled by the Domain Model, and not by a test utility library like AutoFixture. The easiest way to ensure uniqueness is to just use GUIDs.
Sometimes, uniqueness isn't a concern of the Domain Model, but rather of one or more test cases. That's fair enough, but I don't think it makes sense to universally and implicitly enforce uniqueness across all unit tests.
I believe that explicit is better than implicit, so in such cases, I'd rather prefer to have an explicit test utility method that would allow one to write something like this:
var foos = fixture.CreateMany<Foo>();
fixture.MakeIdentifiersUnique(foos);
// ...
This would allow you to apply the uniqueness constraints to those unit tests that require them, and not apply them where they're irrelevant.
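MakeIdentifiersUnique is not an AutoFixture method; it's a name made up for illustration. A rough sketch of what such a helper could look like, assuming Foo.Identifier is writable:

```csharp
using System;
using System.Collections.Generic;

// Hypothetical test utility: walks a sequence of Foo objects and assigns
// each one a fresh, unique six-character Identifier.
public static class FixtureExtensions
{
    public static void MakeIdentifiersUnique(this IFixture fixture, IEnumerable<Foo> foos)
    {
        var seen = new HashSet<string>();
        foreach (var foo in foos)
        {
            string id;
            do
            {
                id = Guid.NewGuid().ToString("N").Substring(0, 6);
            }
            while (!seen.Add(id)); // retry on the (unlikely) collision

            foo.Identifier = id;
        }
    }
}
```

Because the uniqueness is applied explicitly, only the tests that actually care about it pay the cost of expressing it.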
In my experience, one should only add Customizations to AutoFixture if those Customizations make sense across a majority of the tests in a test suite. If you add Customizations to all tests just to support a test case for one or two tests, you could easily get brittle and unmaintainable tests.
I'm creating unit tests in which I will be comparing lists of objects with one another.
Currently I am using Fluent Assertions in combination with SpecFlow and NUnit. I already use Fluent Assertions to make a comparison as follows:
public void TestShizzle()
{
    // I normally retrieve these lists from a moq database or a specflow table
    var expected = new List<myObject>
    {
        new myObject
        {
            A = 1,
            B = "abc"
        }
    };
    var found = new List<myObject>
    {
        new myObject
        {
            A = 1,
            B = "def"
        }
    };

    // This comparison only compares a few columns. The comparison is also
    // object-dependent. I would like to make this dynamic.
    found.Should().BeEquivalentTo(
        expected,
        options =>
            options.Including(x => x.A));
}
What I really want is to be able to use generics instead of a specified type. I also want to decide which properties to compare at run time, because of the large number of tables in the database. I think I need to use LINQ expressions for this, but I don't know how to go about it. The function should look something like this:
public void GenericShizzle<T>(List<T> expected, List<T> found, IEnumerable<PropertyInfo> properties)
{
    Expression<Func<T, object>> principal;
    foreach (var property in properties)
    {
        // create the expression for including fields
    }

    found.Should().BeEquivalentTo(
        expected,
        options =>
            // here I need to apply the expression
}
I have no real idea how to build the correct expression for the job, or if this is even the best method. I think I need to create a property expression that is understood by the Including function, but maybe a different method can be used?
There is an Including overload accepting Expression<Func<IMemberInfo, bool>>, which can be used to dynamically filter members based on information about them:
IEnumerable<PropertyInfo> properties = ...;
var names = properties
    .Select(info => info.Name)
    .ToHashSet();

found.Should()
    .BeEquivalentTo(expected,
        options => options.Including((IMemberInfo mi) => names.Contains(mi.Name))); // or just .Including(mi => names.Contains(mi.Name))
I would like to test a self-written XML parser that takes, well, an XML string and returns the model representation of it.
T Parse(string content);
The issue I am having concerns the assertion part of my test, because each time I call Create<T>() it generates new random data, which is not what I want in this case. I need a common test data set that I can use in the following order:
a) Generate XML string that can be passed to my parser
b) Generate model representation using the same test data set
c) Compare XML parser results with the generated model representation and Assert.AreEqual()
I came across the Freeze<T>() method, which "sounds" like it could fit my purpose; however, I have no idea how to use it.
So the question is: How can I use a common testdata set for the generation of different objects?
This is my current approach and static test data generator class.
public static class TestDataGenerator
{
    public static string GenerateSyntheticXmlTestData<T>(int minOid, int maxOid, int amount = 5)
    {
        var fixture = new Fixture()
        {
            RepeatCount = amount
        };
        fixture.Customizations.Add(new OidGenerator(minOid, maxOid));
        fixture.Customizations.Add(new EnableAllProperties());

        var testData = fixture.Create<T>();
        var serializedXmlTestData = XmlSerializerHelper.Current.SerializeToXmlDocument(testData, Encoding.UTF8);
        return serializedXmlTestData;
    }

    public static ICollection<T> GenerateSyntheticModelTestData<T>(int minOid, int maxOid, int amount = 1)
    {
        var fixture = new Fixture()
        {
            RepeatCount = 1
        };
        fixture.Customizations.Add(new OidGenerator(minOid, maxOid));

        var testData = fixture.CreateMany<T>(amount).ToList();
        return testData;
    }
}
And this is the way I would like to test the parser. I hope it's clear what I am trying to achieve.
[Fact]
public void ShouldParse()
{
    // [...]
    var xmlContent = TestDataGenerator.GenerateSyntheticXmlTestData<MyType>(minOid: 1, maxOid: 100, amount: 5);

    // Here I would like to generate a model object using the same data
    //
    // var modelContent = new Fixture().Create<ModelType>();

    var parsedContent = parser.Parse(xmlContent);

    //parsedContent.Should().BeEquivalentTo(modelContent);
}
When testing parsers, I often find it easiest to take a page from the property-based testing playbook. Many of the techniques useful for property-based testing are also useful with AutoFixture.
When doing property-based testing of parsing logic, it's often useful to define a serializer that goes with the parser. That is, a function that can turn a given model into the format that the parser parses. In this case, it would be an XML serializer.
It's often much easier to instruct AutoFixture (or a property-based testing library) to create valid instances of 'model' objects, rather than instructing it to produce valid XML strings.
Once you've set up AutoFixture to do that, you let it create instances of your model, then serialize the model, and let the parser parse the serialized model. The assertion is that the parsed model should be equal to the input model.
Scott Wlaschin calls this test pattern There and back again. You can also see an example of it on my blog, using FsCheck.
With AutoFixture, it might look something like this:
[Fact]
public void RoundTrippingWorks()
{
    var fixture = new Fixture().Customize(/*...*/);
    var model = fixture.Create<MyModel>();

    string xml = MyXmlSerializer.Serialize(model);
    MyModel actual = MyXmlParser.Parse(xml);

    Assert.Equal(model, actual);
}
(I haven't tried to compile that, so there could be typos...)
I am not 100% sure if this is what you are looking for, but maybe creating a custom fixture with customized types for your XML-Data is an option?
public class CustomFixture : Fixture
{
    public CustomFixture()
    {
        Customize<YourXmlType>(c => c.Without(f => f.XmlStringThatShouldNotBeGenerated));
        Customize<YourXmlType>(c => c.Do(f => f.XmlStringThatShouldNotBeGenerated = "Your shared xml string"));
    }
}
This could also work with c.With instead of Without and Do, but I had issues with that in a project some time ago, and the above solution turned out to be more reliable for me.
I recently had the following bug in my code which took me for ever to debug. I wanted to inject an instance based on its interface like this:
MovementController(IMotorController motorController)
However I accidentally used the concrete type like this:
MovementController(MotorController motorController)
The project still built and ran fine until I tried to access the motorController from the MovementController instance. Since the underlying implementation of IMotorController accesses hardware, it has to be a singleton, or my code locks up. However, since I had other classes with the injected IMotorController, I now had two instances of MotorController in my object graph, both accessing the hardware over a serial connection. This caused an error at run time, at a much lower level, which took me forever to debug and trace back to the true cause.
How can I avoid this type of bug or write a unit test for my StructureMap registry to catch this subtle bug?
You could easily check for this using a static analysis tool like NDepend. With it you would just look for types that were controllers and then inspect their constructors and warn if you found any constructor parameters that were not interface types.
Just to refine Steve's answer, you could write a code rule that looks like this (with NDepend, a code rule is a C# LINQ query prefixed with warnif count > 0):
// <Name>Don't use MotorController, use IMotorController instead</Name>
warnif count > 0
from m in Application.Methods
where m.IsUsing("NamespaceA.MotorController") &&
      m.ParentType.FullName != "NamespaceB.ClassThatCanUseMotorController"
select m
The rule can be refined easily if there are zero or many ClassThatCanUseMotorController.
The safest solution is to check during runtime that only one instance of MotorController is created. For instance you could count the number of instances of MotorController with a static counter variable:
public class MotorController : IMotorController
{
    private static bool instantiated;

    public MotorController(...)
    {
        if (instantiated)
            throw new InvalidOperationException(
                "MotorController can only be instantiated once.");
        ...
        instantiated = true;
    }

    ...
}
I'd usually consider this bad design, because whether a class is used as a singleton or not is something only the dependency injection framework should care about. Also note that this is not thread-safe.
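If one wanted to keep such a guard anyway but make it thread-safe, a sketch using Interlocked could look like this (illustrative only; the design concern above still applies):

```csharp
using System;
using System.Threading;

public class MotorController : IMotorController
{
    private static int instances; // 0 = none created yet, 1 = one exists

    public MotorController()
    {
        // Atomically flip the flag from 0 to 1; if another thread already
        // did so, the returned original value is non-zero and we throw.
        if (Interlocked.CompareExchange(ref instances, 1, 0) != 0)
            throw new InvalidOperationException(
                "MotorController can only be instantiated once.");
    }
}
```

Interlocked.CompareExchange makes the check-and-set a single atomic operation, closing the race window that a plain bool field leaves open.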
OK. So the solution I came up with for my unit test is to get all the instances that implement IMotorController and assert that their count equals 1:
var motorControllerInstances = container.GetAllInstances<IMotorController>().ToList();
Assert.True(motorControllerInstances.Count == 1);
Not sure this is the most elegant way, but it seems to work.
Update 1:
This code does not catch the bug I had. I am still looking for a correct answer to my problem.
Update 2: I am getting closer. This will at least catch the case where you have accidentally registered a concrete type for the corresponding interface. However, it does not appear to check whether an instance of it was actually built.
var allInterfaceInstances = dicFixture.result.Model.GetAllPossible<IMotorController>();
Assert.True(allInterfaceInstances.Count() == 1);
In trying to adhere to the D in SOLID, the Dependency Inversion Principle, where one should "depend upon abstractions, not upon concretions," for a project (in this case ASP.NET MVC 5), I wanted a way to make sure that all controllers (MVC and WebAPI 2) were following this pattern and were not dependent on concretions.
The original idea came from an article I had read where a unit test was created to scan all controllers to make sure that they had explicit authorization defined. I applied a similar thinking in checking that all controllers had constructors that depended on abstractions.
[TestClass]
public class ControllerDependencyTests : ControllerUnitTests
{
    [TestMethod]
    public void All_Controllers_Should_Depend_Upon_Abstractions()
    {
        var controllers = UnitTestHelper.GetAssemblySources() // note: this is custom code to get the assemblies to reflect
            .SelectMany(assembly => assembly.GetTypes())
            .Where(t => typeof(IController).IsAssignableFrom(t) || typeof(System.Web.Http.Controllers.IHttpController).IsAssignableFrom(t));

        var constructors = controllers
            .SelectMany(type => type.GetConstructors())
            .Where(constructor => {
                var parameters = constructor.GetParameters();
                var result = constructor.IsPublic
                    && parameters.Length > 0
                    && parameters.Any(arg => arg.ParameterType.IsClass && !arg.ParameterType.IsAbstract);
                return result;
            });

        // Produce a test failure error message if any controllers are uncovered.
        if (constructors.Any())
        {
            var errorStrings = constructors
                .Select(c => {
                    var parameters = string.Join(", ", c.GetParameters().Select(p => string.Format("{0} {1}", p.ParameterType.Name, p.Name)));
                    var ctor = string.Format("{0}({1})", c.DeclaringType.Name, parameters);
                    return ctor;
                }).Distinct();

            Assert.Fail(String.Format("\nType depends on concretion instead of its abstraction.\n{0} Found :\n{1}",
                errorStrings.Count(),
                String.Join(Environment.NewLine, errorStrings)));
        }
    }
}
So, given the following example (note: I adapted this to MVC):
public class MovementController : Controller
{
    public MovementController(MotorController motorController)
    {
        //...
    }
}

public interface IMotorController
{
    //...
}

public class MotorController : IMotorController
{
    //...
}
the unit test would fail with ...
Result Message: Assert.Fail failed.
Type depends on concretion instead of its abstraction.
1 Found :
MovementController(MotorController motorController)
This worked for me because I had a common type to look for with IController and ApiController.
There is room for improvement in the test, but it should be a good starting point for you.
I have a base class (abstract) with multiple implementations, and some of them contain collection properties of other implementations - like so:
class BigThing : BaseThing
{
    /* other properties omitted for brevity */
    List<SquareThing> Squares { get; set; }
    List<LittleThing> SmallThings { get; set; }
    /* etc. */
}
Now sometimes I get a BigThing and I need to map it to another BigThing, along with all of its collections of BaseThings. However, when this happens, I need to be able to tell if a BaseThing in a collection from the source BigThing is a new BaseThing, and thus should be Add()-ed to the destination BigThing's collection, or if it's an existing BaseThing that should be mapped to one of the BaseThings that already exist in the destination collection. Each implementation of BaseThing has a different set of matching criteria on which it should be evaluated for new-ness. I have the following generic extension method to evaluate this:
static void UpdateOrCreateThing<T>(this T candidate, ICollection<T> destinationEntities) where T : BaseThing
{
    var thingToUpdate = destinationEntities.FirstOrDefault(candidate.ThingMatchingCriteria);
    if (thingToUpdate == null)
        /* Create new thing and add to destinationEntities */
    else
        /* Map thing */
}
Which works fine. However, I think I am getting lost with the method that deals in BigThings. I want to make this method generic, because there are a few different kinds of BigThings and I don't want to have to write methods for each, and if I add collection properties I don't want to have to change my methods. I have written the following generic method that makes use of reflection, but it does not behave as I expect:
void MapThing<T>(T sourceThing, T destinationThing) where T : BaseThing
{
    // Take care of first-level properties
    Mapper.Map(sourceThing, destinationThing);

    // Now find all properties which are collections
    var collectionPropertyInfo = typeof(T).GetProperties()
        .Where(p => typeof(ICollection).IsAssignableFrom(p.PropertyType))
        .ToList();

    // Get property values for source and destination
    var sourceProperties = collectionPropertyInfo.Select(p => p.GetValue(sourceThing)).ToList();
    var destinationProperties = collectionPropertyInfo.Select(p => p.GetValue(destinationThing)).ToList();

    // Now loop through collection properties and call the extension method on each item
    for (int i = 0; i < collectionPropertyInfo.Count; i++)
    {
        // These casts make me suspicious, although they do work and the values are retained
        var thisSourcePropertyCollection = sourceProperties[i] as ICollection;
        var sourcePropertyCollectionAsThings = thisSourcePropertyCollection.Cast<BaseThing>();

        // Repeat for destination properties
        var thisDestinationPropertyCollection = destinationProperties[i] as ICollection;
        var destinationPropertyCollectionAsThings = thisDestinationPropertyCollection.Cast<BaseThing>();

        foreach (BaseThing thing in sourcePropertyCollectionAsThings)
        {
            thing.UpdateOrCreateThing(destinationPropertyCollectionAsThings);
        }
    }
}
This compiles and runs, and the extension method runs successfully (matching and mapping as expected), but the collection property values in destinationThing remain unchanged. I suspect I have lost the reference to the original destinationThing properties with all the casting and assigning to other variables and so on. Is my approach here fundamentally flawed? Am I missing a more obvious solution? Or is there some simple bug in my code that's leading to the incorrect behavior?
Without thinking too much, I'd say you have fallen into an inheritance-abuse trap, and now, trying to save yourself, you might want to consider how you can solve your problem while ditching the existing design that led you to do such things in the first place. I know this is painful, but it's an investment in the future :-)
That said,
var destinationPropertyCollectionAsThings =
    thisDestinationPropertyCollection.Cast<BaseThing>();
foreach (BaseThing thing in sourcePropertyCollectionAsThings)
{
    thing.UpdateOrCreateThing(destinationPropertyCollectionAsThings);
}
You are losing your ICollection when you use the LINQ Cast operator, which creates a new IEnumerable<BaseThing>. You can't rely on variance either, because ICollection<T> is invariant. If it weren't, you could get away with as ICollection<BaseThing>, which would be nice.
Instead, you have to build the generic method call dynamically and invoke it. The simplest way is probably to use the dynamic keyword and let the runtime figure it out, like so:
thing.UpdateOrCreateThing((dynamic)thisDestinationPropertyCollection);
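If you'd rather avoid dynamic, the same closed generic call can be constructed with reflection. A sketch, assuming the extension method lives in a static class here called ThingExtensions (a made-up name):

```csharp
using System;
using System.Linq;

// Sketch: find the element type of the destination collection at run time,
// close UpdateOrCreateThing<T> over it, and invoke it via reflection.
var elementType = thisDestinationPropertyCollection
    .GetType()
    .GetInterfaces()
    .First(i => i.IsGenericType
             && i.GetGenericTypeDefinition() == typeof(ICollection<>))
    .GetGenericArguments()[0];

var method = typeof(ThingExtensions)            // hypothetical declaring class
    .GetMethod("UpdateOrCreateThing")
    .MakeGenericMethod(elementType);

// Extension methods are static, so the first Invoke argument is null and
// the 'this' parameter is passed explicitly.
method.Invoke(null, new object[] { thing, thisDestinationPropertyCollection });
```

This is more verbose than the dynamic call, but it fails with a clearer exception if no matching closed method can be constructed.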
For example, consider a utility class SerializableList:
public class SerializableList : List<ISerializable>
{
    public T Add<T>(T item) where T : ISerializable
    {
        base.Add(item);
        return item;
    }

    public T Add<T>(Func<T> factory) where T : ISerializable
    {
        var item = factory();
        base.Add(item);
        return item;
    }
}
Usually I'd use it like this:
var serializableList = new SerializableList();
var item1 = serializableList.Add(new Class1());
var item2 = serializableList.Add(new Class2());
I could also have used it via a factory, like this:
var serializableList = new SerializableList();
var item1 = serializableList.Add(() => new Class1());
var item2 = serializableList.Add(() => new Class2());
The second approach appears to be a preferred usage pattern, as I've been lately noticing on SO. Is it really so (and why, if yes) or is it just a matter of taste?
Given your example, the factory method is silly. Unless the callee requires the ability to control the point of instantiation, instantiate multiple instances, or lazy evaluation, it's just useless overhead.
The compiler will not be able to optimize out delegate creation.
To reference the examples of using the factory syntax that you gave in comments on the question. Both examples are trying (albeit poorly) to provide guaranteed cleanup of the instances.
If you consider a using statement:
using (var x = new Something()) { }
The naive implementation would be:
var x = new Something();
try
{
}
finally
{
    if ((x != null) && (x is IDisposable))
        ((IDisposable)x).Dispose();
}
The problem with this code is that it is possible for an exception to occur after the assignment of x, but before the try block is entered. If this happens, x will not be properly disposed, because the finally block will not execute. To deal with this, the code for a using statement will actually be something more like:
Something x = null;
try
{
    x = new Something();
}
finally
{
    if ((x != null) && (x is IDisposable))
        ((IDisposable)x).Dispose();
}
Both of the examples that you reference using factory parameters are attempting to deal with this same issue. Passing a factory allows for the instance to be instantiated within the guarded block. Passing the instance directly allows for the possibility of something to go wrong along the way and not have Dispose() called.
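As an illustration, a helper that accepts a factory so that construction happens inside the guarded block might look like this (a sketch, not a library API; the names are made up):

```csharp
using System;

public static class DisposalHelper
{
    // The factory runs inside the try block, so even if construction or the
    // action throws, any successfully created instance is disposed.
    public static void UseAndDispose<T>(Func<T> factory, Action<T> action)
        where T : IDisposable
    {
        T item = default(T);
        try
        {
            item = factory();
            action(item);
        }
        finally
        {
            if (item != null)
                item.Dispose();
        }
    }
}
```

Compare that with a signature taking T directly: there, the instance already exists before the method is entered, and nothing guards the gap between construction and the try block.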
In those cases, passing the factory parameter makes sense.
Caching
In the example you have provided, it does not make sense, as others have pointed out. Instead, I will give you another example:
public class MyClass
{
    public MyClass(string file)
    {
        // load a huge file
        // do lots of computing...
        // then store results...
    }
}

private ConcurrentDictionary<string, MyClass> Cache = new ....

public MyClass GetCachedItem(string key)
{
    return Cache.GetOrAdd(key, k => new MyClass(key));
}
In the above example, let's say we are loading a big file, calculating something, and we are interested in the end result of that calculation. To speed up access, when I try to load files through the Cache, the Cache returns the cached entry if it has one; only when it does not find the item does it call the factory method and create a new instance of MyClass.
So you may be reading files many times, but you are only creating the instance of the class that holds the data once. This pattern is only useful for caching purposes.
But if you are not caching, and every iteration requires a call to the new operator, then it makes no sense to use the factory pattern at all.
Alternate Error Object or Error Logging
For some reason, if creation fails, the list can substitute an error object. For example:
T defaultObject = ....

public T Add<T>(Func<T> factory) where T : ISerializable
{
    T item;
    try
    {
        item = factory();
    }
    catch (Exception ex)
    {
        Log(ex);
        item = defaultObject;
    }
    base.Add(item);
    return item;
}
In this example, you can monitor the factory: if it throws an exception while creating the new object, you log the error and keep some default value in the list instead. I don't know what the practical use of this would be, but error logging sounds like the better candidate here.
No, there's no general preference for passing the factory instead of the value. However, in very particular situations, you will prefer to pass the factory method instead of the value.
Think about it:
What's the difference between passing the parameter as a value, or
passing it as a factory method (e.g. using Func<T>)?
Answer is simple: order of execution.
In the first case, you need to pass the value, so you must obtain it before calling the target method.
In the second case, you can postpone the value creation/calculation/obtaining till it's needed by the target method.
Why would you want to postpone the value creation/calculation/obtaining? Two obvious cases come to mind:
Processor-intensive or memory-intensive creation of the value, that you want to happen only in case the value is really needed (on-demand). This is Lazy loading then.
If the value creation depends on parameters that are accessible by the target method but not from outside of it. So, you would pass Func<T, T> instead of Func<T>.
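A minimal sketch of that difference in execution order (all names here are illustrative):

```csharp
using System;

static class Demo
{
    static string ExpensiveValue()
    {
        Console.WriteLine("computing...");   // side effect shows when the work runs
        return "result";
    }

    // Value parameter: the work has already happened at the call site.
    static void ConsumeValue(string value) => Console.WriteLine(value);

    // Factory parameter: the work happens only if and when the factory is invoked,
    // and the method may decide to skip it entirely.
    static void ConsumeFactory(Func<string> factory) => Console.WriteLine(factory());

    static void Main()
    {
        ConsumeValue(ExpensiveValue());   // computed before the call
        ConsumeFactory(ExpensiveValue);   // computed on demand, inside the method
    }
}
```

With the value parameter the "computing..." line is printed before ConsumeValue even starts; with the factory it is deferred until the method chooses to call it.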
The question compares methods with different purposes. The second one would be better named CreateAndAdd<T>(Func<T> factory).
So, depending on what functionality is required, one method or the other should be used.