I installed the Faker Port for C# (https://github.com/oriches/faker-cs) in my project but the project site doesn't give examples of usage.
Can someone post some examples of basic mock data generation?
In this example, I'm using a project that uses MVC4, EF Code First and Automatic Migrations. So if you're using the same, your Migrations\Configuration.cs file should be like this:
internal sealed class Configuration : DbMigrationsConfiguration<MyProject.Models.MyProjectContext>
{
public Configuration()
{
AutomaticMigrationsEnabled = true;
}
...
For single elements, the example is trivial:
protected override void Seed(MyProject.Models.MyProjectContext context)
{
context.Customers.AddOrUpdate(
c => c.Name,
new Customer { Name = Faker.Name.FullName() }
);
context.SaveChanges();
...
For a defined number of elements, I liked the idea of using a lambda expression, like Factory Girl (for Ruby on Rails) does. In another question I asked (DbMigrations in .NET w/ Code First: How to Repeat Instructions with Expressions), the answer uses Enumerable.Range() to specify the number of elements:
protected override void Seed(MyProject.Models.MyProjectContext context)
{
context.Companies.AddOrUpdate(
p => p.Name,
Enumerable.Range(1, 10).
Select( x => new Company { Name = Faker.Company.Name() }).ToArray()
);
context.SaveChanges();
}
...
Resources are scarce, but this article seems to cover what you would expect.
Also try the Assembly/Object Browser to explore what the library offers (which classes, methods, and so on it contains). The library also ships with NUnit tests, so reading the tests' code can be helpful.
Since Faker.NET is a port of the Ruby library ffaker, you can infer that the code has similar intentions, so ffaker's unit tests can serve as a small reference as well.
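For quick reference, here are a few representative calls (the method names mirror the Ruby faker library, so treat this as a sketch and confirm the exact members in the Object Browser mentioned above):

var name = Faker.Name.FullName();          // e.g. "Jenny Cooper"
var email = Faker.Internet.Email();        // e.g. "jenny.cooper@example.com"
var company = Faker.Company.Name();
var street = Faker.Address.StreetAddress();
var phone = Faker.Phone.Number();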
Related
The BDD naming approach works perfectly when there's one method in the class that you're testing. Let's assume we have a Connector class which has a Connect method:
Should_change_status_to_Connected_if_Disconnected
Beautiful, right? But I feel confused about naming tests when there are several methods in a class (let's assume we added a Disconnect method to our class).
I see two possible solutions. The first one is to add a prefix with a method name like:
Should_change_status_to_Connected_if_Disconnected_when_Connect_was_called
Another approach is to introduce nested test classes for each method you're testing.
public class ConnectorTests
{
public class ConnectTests
{
public void Should_change_status_to_Connected_if_Disconnected()
{
...
}
}
public class DisconnectTests
{
public void Should_change_status_to_Disconnected_if_Connected()
{
...
}
}
}
Honestly both approaches feel a little bit off (may be just because I'm not used to it). What's the recommended way to go?
I've written dozens of tests using different naming styles. Essentially, such test methods are hard to read because of their long names: they exceed line-length limits, and underscored method names often go against naming conventions. Difficulties begin when you want to add "And" conditions or preconditions to your BDD scenarios, like "When Connector is initialized Should change status to Connected if Disconnected AND network is available AND argument1 is... AND argument2 is...". So you have to group your test cases into many classes, sub-folders, etc., which increases development and maintenance time.
An alternative in C# is to write tests the way JavaScript testing frameworks like Jasmine or Jest do. For unit tests of classes and methods I'd use the Arrange/Act/Assert style, and the BDD style for Feature/Story scenarios, but either style works. In C# I use my Heleonix.Testing.NUnit library and write tests in AAA or BDD (GWT) styles:
using NUnit.Framework;
using Heleonix.Testing.NUnit.Aaa;
using static Heleonix.Testing.NUnit.Aaa.AaaSpec;
[ComponentTest(Type = typeof(Connector))]
public static class ConnectorTests
{
[MemberTest(Name = nameof(Connector.Connect))]
public static void Connect()
{
Connector connector = null;
Arrange(() =>
{
connector = new Connector();
});
When("the Connect is called", () =>
{
Act(() =>
{
connector.Connect();
});
And("the Connector is disconnected", () =>
{
Arrange(() =>
{
connector.Disconnect();
});
});
Should("change the status to Disconnected", () =>
{
Assert.That(connector.Disconnected, Is.True);
});
});
}
}
What matters to me is that a few months later I can open such tests and quickly recall what they cover, instead of sitting for hours trying to understand what and how they test.
In my case, I first try to separate classes based on pre- and post-conditions, so I can group related behaviors and keep related things together. In your case, for example, one precondition could be "Disconnected", so you can prepare the "disconnected environment" using attributes like ClassInitialize, TestInitialize, TestCleanup, ClassCleanup, etc. (there are some examples on MSDN).
And please, as the other developers have recommended, don't forget the naming conventions.
Hope this helps.
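A minimal MSTest sketch of that idea, reusing the question's Connector class (the members Connect, Disconnect and Connected are assumed from the question):

[TestClass]
public class WhenDisconnected
{
    private Connector _connector;

    [TestInitialize]
    public void PrepareDisconnectedEnvironment()
    {
        // Shared precondition for every test in this fixture: start disconnected.
        _connector = new Connector();
        _connector.Disconnect();
    }

    [TestMethod]
    public void Connect_changes_status_to_Connected()
    {
        _connector.Connect();
        Assert.IsTrue(_connector.Connected);
    }

    [TestCleanup]
    public void Cleanup()
    {
        _connector.Disconnect();
    }
}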
Since test cases are totally independent of each other, you have to use a static class to initialize the values, connections, etc. that you will use in your tests later. If you want individual values and initializers, you have to declare them in your classes individually. I use the NUnit framework for this.
And by the way, you are in C#; use the naming conventions of .NET developers...
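A minimal NUnit sketch of that idea, with the shared values held by a [SetUpFixture] (the class name and connection string below are purely illustrative):

[SetUpFixture]
public class SharedTestEnvironment
{
    public static string ConnectionString { get; private set; }

    [OneTimeSetUp]
    public void InitializeSharedValues()
    {
        // Runs once for the whole namespace; shared connections and values go here.
        ConnectionString = "Server=localhost;Database=Test;";
    }
}

Individual values then stay in each test class, typically in a [SetUp] method.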
I am new to unit testing and mocking. The project I am working on has many methods that look like this:
public bool MyMethod(int param, int param2)
{
using (SomeEntity dBcontext = new SomeEntity())
{
FancyObj theobj = dBcontext.MyObjs.FirstOrDefault(l => l.ObjId == param2);
if (theobj != null && theobj.CurrentSeason != param) //if season has changed then update
{
theobj.CurrentSeason = param;
dBcontext.SaveChanges();
return true;
}
return false;
}
}
I am using Telerik JustMock and, unless I am missing something, there is no way for me to mock the entity call since it's being instantiated directly within the method under test.
Is my only solution to modify the method/class to hold a property of type SomeEntity?
Newing up an instance of a required dependency (instead of asking for it in the constructor) kills testability. Essentially, by using the new operator, you are mixing the concern of object instantiation with the concern of application logic.
Dependency injection to the rescue! Your class under test should ask for all the things required to get its job done in the constructor and rely on interfaces, not the concrete implementations. That way, you will be able to provide fake implementations to make your unit test fully isolated.
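A minimal sketch of that refactoring, assuming a hypothetical ISomeEntityContext abstraction over the EF context (the interface, class and factory names here are illustrative, not from the original code):

using System;
using System.Linq;

public interface ISomeEntityContext : IDisposable
{
    IQueryable<FancyObj> MyObjs { get; }
    int SaveChanges();
}

public class SeasonUpdater
{
    private readonly Func<ISomeEntityContext> _contextFactory;

    // The context factory is injected instead of new-ing up SomeEntity inside the method.
    public SeasonUpdater(Func<ISomeEntityContext> contextFactory)
    {
        _contextFactory = contextFactory;
    }

    public bool MyMethod(int param, int param2)
    {
        using (var dbContext = _contextFactory())
        {
            var theObj = dbContext.MyObjs.FirstOrDefault(l => l.ObjId == param2);
            if (theObj != null && theObj.CurrentSeason != param)
            {
                theObj.CurrentSeason = param;
                dbContext.SaveChanges();
                return true;
            }
            return false;
        }
    }
}

In a test you can then pass a lambda returning a fake ISomeEntityContext (for example one backed by an in-memory list), so no real database is touched.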
Without refactoring your code, I think you may be out of luck with conventional mocking frameworks, since they all depend on methods being either virtual or defined on an interface.
But if you own the Visual Studio Premium or Ultimate edition, you can use Microsoft Fakes, which lets you modify/replace/intercept calls to non-virtual methods and properties (it works by modifying/injecting CIL code into third-party assemblies when they are loaded).
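For illustration, here is the canonical Microsoft Fakes shim for DateTime.Now (the shim type names are generated per faked assembly, so this is a generic example rather than one for the asker's SomeEntity):

using System;
using Microsoft.QualityTools.Testing.Fakes;
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class ShimExample
{
    [TestMethod]
    public void Code_sees_the_shimmed_clock()
    {
        using (ShimsContext.Create())
        {
            // Replace the static DateTime.Now getter for the scope of this context.
            System.Fakes.ShimDateTime.NowGet = () => new DateTime(2000, 1, 1);

            Assert.AreEqual(2000, DateTime.Now.Year);
        }
    }
}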
Whilst dependency injection is probably the better way to go in the long term (it'll typically improve the structure of your code), there are commercial solutions like Typemock that allow you to test code that can't be tested in a conventional way. This is a bit of a mixed blessing, as you can become dependent on the mocking framework and don't necessarily reap the structural improvements encouraged by unit testing with more traditional mocking frameworks. However, it should allow you to test the situation you're describing.
An example from their website shows how they are able to get a handle to the object created within their Calculate function. As is illustrated in some of their other examples, this handle can then be used to setup expectations on the calls to that dependency:
public static int Calculate(int a, int b)
{
var dependency = new Dependency();
dependency.CheckSecurity("typemock", "rules");
return a + b;
}
[TestMethod,Isolated]
public void FakeConstructor()
{
// Fake the Dependency constructor
var fakeHandle = Isolate.Fake.NextInstance<Dependency>();
var result = ClassUnderTest.Calculate(1, 2);
Assert.AreEqual(3, result);
}
There are various different pricing options and I'm not sure what you get for each tier, however if you really don't want to change your code and you have fairly deep pockets it may be worth considering.
Yes, it is possible to arrange the return value of a new expression using the commercial version of JustMock.
var mock = Mock.Create<SomeEntity>();
Mock.Arrange(() => new SomeEntity()).Returns(mock);
From here on you can arrange the mock to have the behavior you need for the test. If you're using EF6, then you can try Telerik.JustMock.EntityFramework that plugs into JustMock to create in-memory mock DB contexts for your tests.
I have the following code. I have a somewhat legitimate reason for stubbing that property twice (See explanation below). It looks like it's only letting me stub it once.
private IStatus _status;
[SetUp()]
public void Setup() {
this._status = MockRepository.GenerateStub<IStatus>();
this._status.Stub(x => x.Connected).Return(true);
// This next line would usually be in the Setup for a subclass
this._status.Stub(x => x.Connected).Return(false);
}
[Test()]
public void TestTheTestFramework() {
Assert.IsFalse(this._status.Connected); // Fails...
}
public interface IStatus {
bool Connected { get; }
}
I tried downloading the most recent build (3.6 build 21), but still have the same issue. Any ideas on why I can't do this? I tried changing the Connected property on IStatus to be a function and the test still failed. I get the same behavior in VB.Net... Bug?
Explanation of the double-stubbing
I'm structuring my tests around inheritance. That way I can do common setup code just once, using injected mocked dependencies to simulate different conditions. I might provide a base/default stubbed value (e.g. yes, we're connected) which I'd want to override in the subclass that tests the behavior of the SUT when the connection is down. I usually end up with code like this.
[TestFixture()]
public class WhenPublishingAMessage {
// Common setup, inject SUT with mocked dependencies, etc...
[Test()]
public void ShouldAlwaysWriteLogMessage() {
//Example of test that would pass for any sub-condition
}
[TestFixture()]
public class AndNoConnection : WhenPublishingAMessage {
// Do any additional setup, stub dependencies to simulate no connection
// Run tests for this condition
}
[TestFixture()]
public class AndHaveConnection : WhenPublishingAMessage {
// Do any additional setup and run tests for this condition
}
}
Edit
This post on the Rhino Mocks google group might be helpful. It looks like I might need to call this._status.BackToRecord(); to reset the state, so to speak... also, tacking on .Repeat.Any() to the second stub statement seemed to help as well. I'll have to post more details later.
You can specify .Repeat.Once() on the first result so that it will be used once and then the next one takes over, as explained in this other Stack Overflow question.
To sum everything up, there are three different answers possible:
Tallmaris's answer:
Specify a specific number of times to return on the first stub using .Repeat.Times(n), .Repeat.Once(), .Repeat.Twice(), etc. For example:
this._status.Stub(x => x.Connected).Return(true).Repeat.Once();
this._status.Stub(x => x.Connected).Return(false);
This method works pretty well if I know the number of times the stub will get called before I change its behavior (e.g. it just gets called once in the constructor).
Reset the mocked object
I don't like this method since I'd like to avoid the (at least to me) more cumbersome Expect/Verify Record/Replay style syntax. It was recommended to me in response to a post I made to the Rhino Mocks Google group with the same title as this question.
this._status.Stub(x => x.Connected).Return(true);
this._status.GetMockRepository().BackToRecordAll();
this._status.GetMockRepository().ReplayAll();
this._status.Stub(x => x.Connected).Return(false);
Using the magical Repeat.Any
I found that using .Repeat.Any() on the second stub overrode the first one... I feel a bit bad adding some extra 'magic' code to make it work, but when you don't know how many times the first stub will be called, this option will work.
this._status.Stub(x => x.Connected).Return(true);
this._status.Stub(x => x.Connected).Return(false).Repeat.Any();
Note: you can't do .Repeat.Any() more than once.
I have to be able to connect to two different versions of an API (1.4 and 1.5); let's call it the Foo API. My code that connects to the API and processes the results is substantially duplicated - the only difference is the data types returned by the two APIs. How can I refactor this to remove the duplication?
In Foo14Connector.cs (my own class that calls the 1.4 API)
public class Foo14Connector
{
public void GetAllCustomers()
{
var _foo = new Foo14WebReference.FooService();
Foo14WebReference.customerEntity[] customers = _foo.getCustomerList;
foreach (Foo14WebReference.customerEntity customer in customers)
{
GetSingleCustomer(customer);
}
}
public void GetSingleCustomer(Foo14WebReference.customerEntity customer)
{
var id = customer.foo_id;
// etc
}
}
And in the almost exact duplicate class Foo15Connector.cs (my own class that calls the 1.5 API)
public class Foo15Connector
{
public void GetAllCustomers()
{
var _foo = new Foo15WebReference.FooService();
Foo15WebReference.customerEntity[] customers = _foo.getCustomerList;
foreach (Foo15WebReference.customerEntity customer in customers)
{
GetSingleCustomer(customer);
}
}
public void GetSingleCustomer(Foo15WebReference.customerEntity customer)
{
var id = customer.foo_id;
// etc
}
}
Note that I have to have two different connectors because one single method call (out of hundreds) on the API has a new parameter in 1.5.
Both classes Foo14WebReference.customerEntity and Foo15WebReference.customerEntity have identical properties.
If the connectors are in different projects, this is an easy situation to solve:
Add a new class file, call it ConnectorCommon and copy all of the common code, but with the namespaces removed. Make this class a partial class and rename the class (not the file) to something like Connector.
You will need to add this common file as a link to each project.
Next, remove all of the code from your current connector classes, rename the class (not necessarily the file) to the same as the partial class, and add a using statement that references the namespace.
This should get what you are looking for.
So, when you are done you will have:
File ConnectorCommon:
public partial class Connector
{
public void GetAllCustomers()
{
var _foo = new FooService();
customerEntity[] customers = _foo.getCustomerList;
foreach (customerEntity customer in customers)
{
GetSingleCustomer(customer);
}
}
public void GetSingleCustomer(customerEntity customer)
{
var id = customer.foo_id;
// etc
}
}
File Foo15Connector
using Foo15WebReference;
partial class Connector
{
}
File Foo14Connector
using Foo14WebReference;
partial class Connector
{
}
Update
This process can be a little confusing at first.
To clarify, you are sharing source code in a common file between two projects.
The actual classes are the specific classes with the namespaces in each project. You use the partial keyword to cause the common file to be combined with the actual project file (i.e. Foo14) in each project to create the full class within that project at compile time.
The trickiest part is adding the common file to both projects.
To do this, select Add Existing Item... in the second project, navigate to the common file, and click the drop-down arrow next to the Add button.
From the dropdown menu, select Add as link. This will add a reference to the file to the second project. The source code will be included in both projects and any changes to the common file will be automatically available in both projects.
Update 2
I sometimes forget how easy VB makes tasks like this, since that is my ordinary programming environment.
In order to make this work in C#, there is one more trick that has to be employed: Conditional compilation symbols. It makes the start of the common code a little more verbose than I would like, but it still ensures that you can work with a single set of common code.
To employ this trick, add a conditional compilation symbol to each project (ensure that it is set for All Configurations). For example, in the Foo14 project add Ver14, and in the Foo15 project add Ver15.
Then in the common file, replace the namespace with a structure similar to the following:
#if Ver14
using Foo14WebReference;
namespace Foo14Project
#elif Ver15
using Foo15WebReference;
namespace Foo15Project
#endif
This will ensure that the proper namespace and usings are included based on the project the common code is being compiled into.
Note that all common using statements should be retained in the common file (i.e., enough to get it to compile).
If the FooConnectors are not sealed and you control where new instances are created, then you can derive your own connectors and implement interfaces at the same time. In C#, interface members can be implemented simply by inheriting them from a base class!
public interface IFooConnector {
void GetAllCustomers();
}
public class MyFoo14Connector : Foo14Connector, IFooConnector
{
// No need to put any code in here!
}
and then
IFooConnector connector = new MyFoo14Connector();
connector.GetAllCustomers();
You should introduce an interface that is common to both implementations. If the projects are written in the same language and live in different projects, you can introduce a common project that both reference. You are then moving towards depending only on your interface, which should allow you to swap in different implementations behind the scenes using inversion of control (google dependency injection, service locator, or the factory pattern); a rough sketch follows below.
Difficulties for you could be:
1) Public static methods in the implementations cannot be exposed statically via an interface
2) There may be code in one implementation class (i.e. Foo14Connector or Foo15Connector) that doesn't make sense to put into a generic interface
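A rough sketch of that direction, assuming a shared project holds the interface and each API project contributes a thin adapter (all names below are illustrative):

// Shared project: consumers depend only on this interface.
public interface IFooConnector
{
    void GetAllCustomers();
}

// One thin adapter per API version, delegating to the existing connector.
public class Foo14ConnectorAdapter : IFooConnector
{
    public void GetAllCustomers()
    {
        new Foo14Connector().GetAllCustomers();
    }
}

public class Foo15ConnectorAdapter : IFooConnector
{
    public void GetAllCustomers()
    {
        new Foo15Connector().GetAllCustomers();
    }
}

// A trivial factory; an IoC container registration would serve the same purpose.
public static class FooConnectorFactory
{
    public static IFooConnector Create(string apiVersion)
    {
        if (apiVersion == "1.5")
        {
            return new Foo15ConnectorAdapter();
        }
        return new Foo14ConnectorAdapter();
    }
}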
Consider the following chunk of a service:
public class ProductService : IProductService {
private IProductRepository _productRepository;
// Some initlization stuff
public Product GetProduct(int id) {
try {
return _productRepository.GetProduct(id);
} catch (Exception e) {
// log, wrap then throw
throw;
}
}
}
Let's consider a simple unit test:
[Test]
public void GetProduct_return_the_same_product_as_getProduct_on_productRepository() {
var product = EntityGenerator.Product();
_productRepositoryMock.Setup(pr => pr.GetProduct(product.Id)).Returns(product);
Product returnedProduct = _productService.GetProduct(product.Id);
Assert.AreEqual(product, returnedProduct);
_productRepositoryMock.VerifyAll();
}
At first it seems that this test is ok. But let's change our service method a little bit:
public Product GetProduct(int id) {
try {
var product = _productRepository.GetProduct(id);
product.Owner = "totallyDifferentOwner";
return product;
} catch (Exception e) {
// log, wrap then throw
throw;
}
}
How to rewrite a given test that it'd pass with the first service method and fail with a second one?
How do you handle this kind of simple scenarios?
HINT 1: The given test is bad because product and returnedProduct are actually the same reference.
HINT 2: Implementing equality members (object.equals) is not the solution.
HINT 3: As for now, I create a clone of the Product instance (expectedProduct) with AutoMapper - but I don't like this solution.
HINT 4: I'm not testing that the SUT does NOT do something. I'm trying to test that the SUT DOES return the same object as is returned from the repository.
Personally, I wouldn't care about this. The test should make sure that the code is doing what you intend. It's very hard to test what code is not doing; I wouldn't bother in this case.
The test actually should just look like this:
[Test]
public void GetProduct_GetsProductFromRepository()
{
var product = EntityGenerator.Product();
_productRepositoryMock
.Setup(pr => pr.GetProduct(product.Id))
.Returns(product);
Product returnedProduct = _productService.GetProduct(product.Id);
Assert.AreSame(product, returnedProduct);
}
I mean, it's one line of code you are testing.
Why don't you mock the product as well as the productRepository?
If you mock the product using a strict mock, you will get a failure when the repository touches your product.
If this is a completely ridiculous idea, can you please explain why? Honestly, I'd like to learn.
One way of thinking of unit tests is as coded specifications. When you use the EntityGenerator to produce instances both for the Test and for the actual service, your test can be seen to express the requirement
The Service uses the EntityGenerator to produce Product instances.
This is what your test verifies. It's underspecified because it doesn't mention if modifications are allowed or not. If we say
The Service uses the EntityGenerator to produce Product instances, which cannot be modified.
Then we get a hint as to the test changes needed to capture the error:
var product = EntityGenerator.Product();
// [ Change ]
var originalOwner = product.Owner;
// assuming owner is an immutable value object, like String
// [...] - record other properties as well.
Product returnedProduct = _productService.GetProduct(product.Id);
Assert.AreEqual(product, returnedProduct);
// [ Change ] verify the product is equivalent to the original spec
Assert.AreEqual(originalOwner, returnedProduct.Owner);
// [...] - test other properties as well
(The change is that we retrieve the owner from the freshly created Product and check the owner from the Product returned from the service.)
This embodies the fact that the Owner and other product properties must equal the original value from the generator. This may seem like I'm stating the obvious, since the code is pretty trivial, but it runs quite deep if you think in terms of requirement specifications.
I often "test my tests" by stipulating "if I change this line of code, tweak a critical constant or two, or inject a few code burps (e.g. changing != to ==), which test will capture the error?" Doing it for real finds if there is a test that captures the problem. Sometimes not, in which case it's time to look at the requirements implicit in the tests, and see how we can tighten them up. In projects with no real requirements capture/analysis this can be a useful tool to toughen up tests so they fail when unexpected changes occur.
Of course, you have to be pragmatic. You can't reasonably expect to handle all changes - some will simply be absurd and the program will crash. But logical changes like the Owner change are good candidates for test strengthening.
By dragging talk of requirements into a simple coding fix, some may think I've gone off the deep end, but thorough requirements help produce thorough tests, and if you have no requirements, then you need to work doubly hard to make sure your tests are thorough, since you're implicitly doing requirements capture as you write the tests.
EDIT: I'm answering this from within the contraints set in the question. Given a free choice, I would suggest not using the EntityGenerator to create Product test instances, and instead create them "by hand" and use an equality comparison. Or more direct, compare the fields of the returned Product to specific (hard-coded) values in the test, again, without using the EntityGenerator in the test.
Q1: Don't make changes to code then write a test. Write a test first for the expected behavior. Then you can do whatever you want to the SUT.
Q2: You don't make the changes in your Product Gateway to change the owner of the product. You make the change in your model.
But if you insist, then listen to your tests. They are telling you that products with incorrect owners could be pulled from the gateway. Oops, looks like a business rule. It should be tested for in the model.
Also, you're using a mock. Why are you testing an implementation detail? The gateway only cares that _productRepository.GetProduct(id) returns a product, not what the product is.
If you test in this manner you will be creating fragile tests. What if Product changes further? Now you have failing tests all over the place.
Your consumers of product (MODEL) are the only ones that care about the implementation of Product.
So your gateway test should look like this:
[Test]
public void GetProduct_return_the_same_product_as_getProduct_on_productRepository() {
var product = EntityGenerator.Product();
_productRepositoryMock.Setup(pr => pr.GetProduct(product.Id)).Returns(product);
_productService.GetProduct(product.Id);
_productRepositoryMock.VerifyAll();
}
Don't put business logic where it doesn't belong! And its corollary: don't test for business logic where there should be none.
If you really want to guarantee that the service method doesn't change the attributes of your products, you have two options:
Define the expected product attributes in your test and assert that the resulting product matches these values. (This appears to be what you're doing now by cloning the object.)
Mock the product and specify expectations to verify that the service method does not change its attributes.
This is how I'd do the latter with NMock:
// If you're not a purist, go ahead and verify all the attributes in a single
// test - Get_Product_Does_Not_Modify_The_Product_Returned_By_The_Repository
[Test]
public void Get_Product_Does_Not_Modify_Owner() {
Product mockProduct = mockery.NewMock<Product>(MockStyle.Transparent);
Stub.On(_productRepositoryMock)
.Method("GetProduct")
.Will(Return.Value(mockProduct));
Expect.Never
.On(mockProduct)
.SetProperty("Owner");
_productService.GetProduct(0);
mockery.VerifyAllExpectationsHaveBeenMet();
}
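For the former option, a minimal sketch using the question's Moq-style setup (the captured properties are only examples; record whichever attributes the contract cares about):

[Test]
public void GetProduct_does_not_modify_the_repository_product()
{
    var product = EntityGenerator.Product();
    var expectedOwner = product.Owner;   // capture the original values before the call
    var expectedId = product.Id;

    _productRepositoryMock.Setup(pr => pr.GetProduct(product.Id)).Returns(product);

    Product returnedProduct = _productService.GetProduct(product.Id);

    Assert.AreEqual(expectedOwner, returnedProduct.Owner);
    Assert.AreEqual(expectedId, returnedProduct.Id);
    // ...and so on for the remaining properties.
}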
My previous answer stands, though it assumes the members of the Product class that you care about are public and virtual. This is not likely if the class is a POCO / DTO.
What you're looking for might be rephrased as a way to compare the values (not the instances) of the objects.
One way is to check whether they match when serialized. I did this recently for some code... I was replacing a long parameter list with a parameter object. The code is crufty; I don't want to refactor it though, as it's going away soon anyhow. So I just do this serialization comparison as a quick way to see if they have the same value.
I wrote some utility functions... Assert2.IsSameValue(expected,actual) which functions like NUnit's Assert.AreEqual(), except it serializes via JSON before comparing. Likewise, It2.IsSameSerialized() can be used to describe parameters passed to mocked calls in a manner similar to Moq.It.Is().
public class Assert2
{
public static void IsSameValue(object expectedValue, object actualValue) {
JavaScriptSerializer serializer = new JavaScriptSerializer();
var expectedJSON = serializer.Serialize(expectedValue);
var actualJSON = serializer.Serialize(actualValue);
Assert.AreEqual(expectedJSON, actualJSON);
}
}
public static class It2
{
public static T IsSameSerialized<T>(T expectedRecord) {
JavaScriptSerializer serializer = new JavaScriptSerializer();
string expectedJSON = serializer.Serialize(expectedRecord);
return Match<T>.Create(delegate(T actual) {
string actualJSON = serializer.Serialize(actual);
return expectedJSON == actualJSON;
});
}
}
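Usage then reads like a normal assertion. Assuming Product has settable Id and Owner, something like:

// A hand-built expectation holding the original values (a separate instance, so a
// mutation of the repository's product shows up as a serialized difference).
var expected = new Product { Id = 42, Owner = "originalOwner" };
var fromRepository = new Product { Id = 42, Owner = "originalOwner" };

_productRepositoryMock.Setup(pr => pr.GetProduct(42)).Returns(fromRepository);

Product returned = _productService.GetProduct(42);

Assert2.IsSameValue(expected, returned); // fails if the service changed Owner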
Well, one way is to pass around a mock of the product rather than the actual product. Verify that nothing affects the product by making it strict. (I assume you are using Moq; it looks like you are.)
[Test]
public void GetProduct_return_the_same_product_as_getProduct_on_productRepository() {
var product = new Mock<Product>(MockBehavior.Strict);
// Use a literal id: reading product.Object.Id on a strict mock without a setup would throw.
_productRepositoryMock.Setup(pr => pr.GetProduct(42)).Returns(product.Object);
Product returnedProduct = _productService.GetProduct(42);
Assert.AreEqual(product.Object, returnedProduct);
_productRepositoryMock.VerifyAll();
product.VerifyAll();
}
That said, I'm not sure you should be doing this. The test is doing too much, and might indicate there is another requirement somewhere. Find that requirement and create a second test. It might be that you just want to stop yourself from doing something stupid. I don't think that scales, because there are so many stupid things you can do; trying to test each would take too long.
I'm not sure the unit test should care about "what the given method does not do". There are a zillion possible steps. Strictly speaking, the test "GetProduct(id) returns the same product as getProduct(id) on productRepository" is correct with or without the line product.Owner = "totallyDifferentOwner".
However, you can create a test (if it is required) "GetProduct(id) returns a product with the same content as getProduct(id) on productRepository", where you create a (probably deep) clone of one product instance and then compare the contents of the two objects (so no object.Equals or object.ReferenceEquals).
Unit tests are no guarantee of 100% bug-free and correct behaviour.
You can return an interface to product instead of a concrete Product.
Such as
public IProduct GetProduct(int id)
{
return _productRepository.GetProduct(id);
}
And then verify the Owner property was not set:
Dep<IProduct>().AssertWasNotCalled(p => p.Owner = Arg.Is.Anything);
If you care about all the properties and/or methods, then there is probably a pre-existing way with Rhino. Otherwise, you can write an extension method, probably using reflection, such as:
Dep<IProduct>().AssertNoPropertyOrMethodWasCalled()
Our behaviour specifications are like so:
[Specification]
public class When_product_service_has_get_product_called_with_any_id
: ProductServiceSpecification
{
private int _productId;
private IProduct _actualProduct;
[It]
public void Should_return_the_expected_product()
{
this._actualProduct.Should().Be.EqualTo(Dep<IProduct>());
}
[It]
public void Should_not_have_the_product_modified()
{
Dep<IProduct>().AssertWasNotCalled(p => p.Owner = Arg<string>.Is.Anything);
// or write your own extension method:
// Dep<IProduct>().AssertNoPropertyOrMethodWasCalled();
}
public override void GivenThat()
{
var randomGenerator = new RandomGenerator();
this._productId = randomGenerator.Generate<int>();
Stub<IProductRepository, IProduct>(r => r.GetProduct(this._productId));
}
public override void WhenIRun()
{
this._actualProduct = Sut.GetProduct(this._productId);
}
}
Enjoy.
If all consumers of ProductService.GetProduct() expect the same result as if they had asked it from the ProductRepository, why don't they just call ProductRepository.GetProduct() itself ?
It seems you have an unwanted Middle Man here.
There's not much value added to ProductService.GetProduct(). Dump it and have the client objects call ProductRepository.GetProduct() directly. Put the error handling and logging into ProductRepository.GetProduct() or the consumer code (possibly via AOP).
No more Middle Man, no more discrepancy problem, no more need to test for that discrepancy.
Let me state the problem as I see it.
You have a method and a test method. The test method validates the original method.
You change the system under test by altering the data. What you want to see is that the same unit test fails.
So in effect you're creating a test that verifies that the data in the data source matches the data in your fetched object AFTER the service layer returns it. That probably falls under the class of "integration test."
You don't have many good options in this case. Ultimately, you want to know that every property is the same as some passed-in property value. So you're forced to test each property independently. You could do this with reflection, but that won't work well for nested collections.
I think the real question is: why test your service model for the correctness of your data layer, and why write code in your service model just to break the test? Are you concerned that you or other users might set objects to invalid states in your service layer? In that case you should change your contract so that the Product.Owner is readonly.
You'd be better off writing a test against your data layer to ensure that it fetches data correctly, then use unit tests to check the business logic in your service layer. If you're interested in more details about this approach reply in the comments.
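If you do go the reflection route mentioned above, here is a minimal sketch of a shallow, top-level property comparison (nested collections would still need special handling):

public static class AssertHelper
{
    public static void PublicPropertiesAreEqual(object expected, object actual)
    {
        foreach (var property in expected.GetType().GetProperties())
        {
            // Shallow comparison only: relies on each property value's own Equals.
            Assert.AreEqual(
                property.GetValue(expected, null),
                property.GetValue(actual, null),
                "Property '" + property.Name + "' differs.");
        }
    }
}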
Having looked at all 4 hints provided, it seems that you want to make an object immutable at runtime. The C# language does not support that; it is only possible by refactoring the Product class itself. For the refactoring you can take the IReadonlyProduct approach and protect all setters from being called. This, however, still allows modification of elements of containers like List<> returned by getters, and a ReadOnly collection won't help either. Only WPF lets you change immutability at runtime, with the Freezable class.
So I see comparing the objects as the only proper way to make sure they have the same contents. Probably the easiest way would be to add the [Serializable] attribute to all involved entities and do the serialization-with-comparison as suggested by Frank Schwieterman.