I am wondering how to get around this. I am using NHibernate and Fluent NHibernate.
I have a domain class like this:
public class User
{
    public virtual int UserId { get; private set; }
}
This seems to be the convention when working with NHibernate, as it stops people from setting an ID that is auto-generated.
Now the problem comes when I am unit testing.
All my NHibernate code is in a repository that I mock out, so I am only testing my service layer. The problem comes when this happens:
User user = repo.GetUser(email);
This should return a User object.
So I want to use Moq to do this:
repo.Setup(x => x.GetUser(It.IsAny<string>())).Returns(/* User object here */);
Now here is the problem: I need to make that User object and put it in the Returns part. So I would do something like this:
User user = new User()
{
    UserId = 10,
};
But this is where the problem lies: I need to set the ID, because I actually use it later on to do some LINQ on some collections (in the service layer, since it is not hitting my DB, it should not be in my repository). So I need to have it set, but I can't set it, because the setter is private.
So what should I do? Should I just remove the private setter, or is there some other way?
You can have the fake Repository object return a fake User object:
var stubUser = new Mock<User>();
stubUser.Setup(s => s.UserId).Returns(10);

var stubRepo = new Mock<IUserRepository>();
stubRepo.Setup(s => s.GetUser(It.IsAny<string>())).Returns(stubUser.Object);
There are a couple of things to observe here:
Moq can only fake members of concrete classes if they are marked as virtual. This may not be applicable in some scenarios, in which case the only way to fake an object through Moq is to have it implement an interface. In this case, however, the solution works nicely because NHibernate already imposes the same requirement on the properties of the User class in order to do lazy loading.
Having fake objects return other fakes may sometimes lead to over-specified unit tests. In these situations, the construction of rich object models made up of stubs and mocks grows to the point where it becomes difficult to determine what exactly is being tested, making the test itself unreadable and hard to maintain. It is a perfectly fine unit testing practice, to be clear, but it must be used consciously.
Related resources:
Over Specification in Tests
Enrico's answer is spot on for unit testing. I offer another solution because this problem crops up in other circumstances too, where you might not want to use Moq. I regularly use this technique in production code where the common usage pattern is for a class member to be read-only, but certain other classes need to modify it. One example might be a status field, which is normally read-only and should only be set by a state machine or business logic class.
Basically you provide access to the private member through a static nested class that contains a method to set the property. An example is worth a thousand words:
public class User {
    public int Id { get; private set; }

    public static class Reveal {
        public static void SetId(User user, int id) {
            user.Id = id;
        }
    }
}
You use it like this:
User user = new User();
User.Reveal.SetId(user, 43);
Of course, this then enables anyone to set the property value almost as easily as if you had provided a public setter. But there are some advantages to this technique:
no Intellisense prompting for the property setter or a SetId() method
programmers must explicitly use weird syntax to set the property with the Reveal class, thereby prompting them that they should probably not be doing this
you can easily perform static analysis on usages of the Reveal class to see what code is bypassing the standard access patterns
If you are only looking to modify a private property for unit testing purposes, and you are able to Moq the object, then I would still recommend Enrico's suggestion; but you might find this technique useful from time to time.
Another alternative, if you prefer not to mock your entity classes, is to set the private/protected ID using reflection.
Yes, I know that this is usually not looked upon very favourably, and often cited as a sign of poor design somewhere. But in this case, having a protected ID on your NHibernate entities is the standard paradigm, so it seems a quite reasonable solution.
We can at least try to implement it nicely. In my case, 95% of my entities use a single Guid as the unique identifier, with just a few using an integer. Therefore our entity classes usually implement a very simple IHasID interface:
public interface IHasID<T>
{
    T ID { get; }
}
In an actual entity class, we might implement it like this:
public class User : IHasID<Guid>
{
    public Guid ID { get; protected set; }
}
This ID is mapped in NHibernate as the primary key in the usual manner.
To set this in our unit tests, we can use the interface to provide a handy extension method:
public static T WithID<T, K>(this T o, K id) where T : class, IHasID<K>
{
    if (o == null) return o;
    o.GetType().InvokeMember(
        "ID",
        BindingFlags.SetProperty | BindingFlags.Public | BindingFlags.NonPublic | BindingFlags.Instance,
        null, o, new object[] { id });
    return o;
}
We don't have to have the IHasID interface to do this, but it means we can skip a bit of extra code - for example, we don't need to check whether the ID is actually supported.
The extension method also returns the original object, so in usage I usually just chain it onto the end of the constructor:
var testUser = new User("Test User").WithID(new Guid("DC1BA89C-9DB2-48ac-8CE2-E61360970DF7"));
Or actually, since for Guids I don't care what the ID actually is, I have another extension method:
public static T WithNewGuid<T>(this T o) where T : class, IHasID<Guid>
{
    if (o == null) return o;
    o.GetType().InvokeMember(
        "ID",
        BindingFlags.SetProperty | BindingFlags.Public | BindingFlags.NonPublic | BindingFlags.Instance,
        null, o, new object[] { Guid.NewGuid() });
    return o;
}
And in usage:
var testUser = new User("Test User").WithNewGuid();
Instead of trying to mock out your repository, I'd suggest you try to use an in-memory SQLite database for testing. It will give you the speed you're looking for and it will make things a lot easier too. If you want to see a working sample you can have a look at one of my GitHub projects: https://github.com/dlidstrom/GridBook.
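For illustration, a minimal Fluent NHibernate setup for such tests might look like the sketch below. Note that an in-memory SQLite database lives only as long as its connection, so the schema must be exported on the session's own connection; the fixture class and the entity name are assumptions, not part of the original answer.

using FluentNHibernate.Cfg;
using FluentNHibernate.Cfg.Db;
using NHibernate;
using NHibernate.Cfg;
using NHibernate.Tool.hbm2ddl;

public class InMemoryDatabaseFixture
{
    private Configuration _configuration;

    public ISessionFactory CreateSessionFactory()
    {
        return Fluently.Configure()
            .Database(SQLiteConfiguration.Standard.InMemory().ShowSql())
            .Mappings(m => m.FluentMappings.AddFromAssemblyOf<User>()) // any mapped entity
            .ExposeConfiguration(cfg => _configuration = cfg)
            .BuildSessionFactory();
    }

    public ISession OpenSession(ISessionFactory factory)
    {
        var session = factory.OpenSession();
        // Create the schema on this session's connection; the in-memory
        // database disappears when the connection closes.
        new SchemaExport(_configuration).Execute(false, true, false, session.Connection, null);
        return session;
    }
}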
I'm writing a permissions service for my app, and part of this service's responsibility is to check that a user has permission to access the particular object they are trying to change. There are around six different objects that can be mutated, and they all possess a particular property called tenant. This tenant property is what I need to check.
The issue is that I want to keep my code as DRY as possible, but I can't see any way of not repeating myself in this particular situation. I have six different objects to check, therefore six different IDs and six different calls to the database to retrieve the information I need.
I'm reluctant to write six different methods, each supporting one of the objects I need to check, but since the code is going to look something like the below (vastly simplified), I'm not sure if there's anything I can do differently.
public bool CheckUserHasPermissionForObject(string id)
{
    var obj = _dataRepository.GetObjById(id);
    var userHasPermission = UserHasPermission(obj);
    return userHasPermission;
}
I was hoping delegate types would lend a hand here, but I don't think they'll help either.
There are a few options here.
Option 1: Using interfaces
You can create an interface that has the Tenant property:
// TODO: Rename this interface
public interface IParentClass
{
    string Tenant { get; set; }
}
Then derive all six of your objects from that:
// TODO: Rename this class
public class ChildClass1 : IParentClass
{
    public string Tenant { get; set; }
}

// TODO: Rename this class
public class ChildClass2 : IParentClass
{
    public string Tenant { get; set; }
}

// ... TODO: Derive the others as well
And then modify your method to check that property like this:
public bool CheckUserHasPermissionForObject(string id)
{
    var obj = _dataRepository.GetObjById(id) as IParentClass;
    var userHasPermission = UserHasPermission(obj);
    return userHasPermission;
}
private bool UserHasPermission(IParentClass obj)
{
    // TODO: Implement your check here
    if (obj.Tenant == "Whatever")
    {
        // TODO: Implement your logic here
    }
    return false;
}
Option 2: Using reflection
You can get the value of the property called "Tenant" on different objects with reflection like this:
var tenantValue = obj.GetType().GetProperty("Tenant").GetValue(obj, null);
This will try to find a property called "Tenant" on any object and return its value (note that GetProperty is case-sensitive, so the name must match the property exactly).
P.S. Option 3 might be to use generics, but I'm not sure, as the question is not that clear at the moment.
The issue is that I want to keep my code as DRY as possible, but I can't see any way of not repeating myself in this particular situation. I have six different objects which I need to check, therefore I have six different IDs and six different calls to the database to retrieve the information I need.
If the logic for checking permissions is not the same, then by definition you aren't repeating yourself. Don't make your code arcane or unreadable all in the name of DRY.
Because you're making 6 distinct calls to the database, your options for reusing code are limited.
I'm reluctant to write six different methods each supporting the different objects I need to check.
If the objects have different ways to verify the permissions, there is no way around this. Either the objects are all the same (and can inherit some sort of shared logic), or they aren't. Objects that look similar but aren't actually the same should be kept separate.
My recommendation
In order to communicate similar functionality (but different implementation), I'd use an interface. Maybe something like
public interface IUserPermission
{
    string Tenant { get; set; }
    bool CheckUserHasPermissions(string id);
}
This interface makes the calling code more consistent and better communicates how the objects are meant to interact. Notably, this does not reduce the amount of code written. It just documents/explains the intention of the code.
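For example, one of the six objects might implement the interface like this (the class name and the repository call are illustrative, not from the original question):

public class DocumentPermission : IUserPermission
{
    private readonly IDataRepository _dataRepository;

    public DocumentPermission(IDataRepository dataRepository)
    {
        _dataRepository = dataRepository;
    }

    public string Tenant { get; set; }

    public bool CheckUserHasPermissions(string id)
    {
        // Each implementation still makes its own database call...
        var document = _dataRepository.GetDocumentById(id);
        // ...but the tenant check reads the same way everywhere.
        return document != null && document.Tenant == this.Tenant;
    }
}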
Alternative solution
Ultimately, the code will need to be able to distinguish your different types of objects. But technically you could write one giant function that switches on object type instead of splitting the logic across the six different objects. I personally find this organization hard to read and debug, but you could technically write some sort of utility (extension) method like this:
public static bool CheckUserHasPermissions(this object obj, string id)
{
    if (obj is Type1)
        return CallDatabase1(id);

    if (obj is Type2)
        return CallDatabase2(id);

    throw new ArgumentException("Unsupported object type.", nameof(obj));
}
In DDD it is customary to protect an entity's properties like this:
public class Customer
{
    private Customer() { }
    public Customer(int id, string name) { /* ...populate properties... */ }

    public int Id { get; private set; }
    public string Name { get; private set; }
    // and so on...
}
EF uses reflection so it can handle all those privates.
But what if you need to attach an entity without loading it (a very common thing to do):
var customer = new Customer { Id = getIdFromSomewhere() }; // can't do this!
myContext.Set<Customer>().Attach(customer);
This won't work because the Id setter is private.
What is a good way to deal with this mismatch between the language and DDD?
Ideas:
make Id public (and break DDD)
create a constructor/method to populate a dummy object (makes no sense)
use reflection ("cheat")
???
I think the best compromise is to use reflection and set that private Id property, just like EF does. Yes, it's reflection and slow, but much faster than loading from the database. And yes, it's cheating, but at least as far as the domain is concerned, there is officially no way to instantiate that entity without going through the constructor.
How do you handle this scenario?
PS: I did a simple benchmark; it takes about 10 seconds to create a million instances using reflection. So compared to hitting the database, or the reflection performed by EF, the extra overhead is tiny.
"customary" implicitly means it's not a hard set rule, so if you have specific reasons to break those rules in your application, go for it. Making the property setter public would be better than going into reflection for this: not only because of performance issues, but also because it makes it much easier to put unwanted side-effects in your application. Reflection just isn't the way to deal with this.
But I think the first question here is why you would want the ID of an object to be set from the outside in the first place. EF uses the ID primarily to identify objects and you should not use the ID for other logic in your application than just that.
Assuming you have a strong reason to want to change the ID, I actually think you gave the answer yourself in the source you just put in the comments:
So you would have methods to control what happens to your objects and
in doing so, constrain the properties so that they are not exposed to
be set or modified “willy nilly”.
You can keep the private setter and use a method to set the ID.
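A minimal sketch of that idea (the guard condition is just an example of what such a method could enforce):

public class Customer
{
    public int Id { get; private set; }

    // The setter stays private; this method is the single, explicit place
    // where the ID may be assigned, and it can enforce domain rules.
    public void SetId(int id)
    {
        if (Id != 0)
            throw new InvalidOperationException("Id has already been assigned.");
        Id = id;
    }
}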
EDIT:
After reading this, I did some more testing myself, and you could have the following:
public class Customer
{
    private Customer() { }
    public Customer(int id) { /* only sets id */ }
    public Customer(int id, string name) { /* ...populate properties... */ }

    public int Id { get; private set; }
    public string Name { get; private set; }
    // and so on...

    public void SetName(string name)
    {
        // set name, perhaps check a condition first
    }
}
public class MyController
{
    // ...
    public void AttachCustomerExample()
    {
        var customer = new Customer(getIdFromSomewhere());
        myContext.Set<Customer>().Attach(customer);
        order.SetCustomer(customer);

        // Sets the customer on the order and saves it, without actually
        // changing customer: it is still read as Unchanged.
        myContext.SaveChanges();
    }
    // ...
}
This code leaves the private setters as they were (you will need the methods for editing, of course), and only the required changes are pushed to the DB afterwards. As is also explained in the link above, only changes made after attaching are used, and you should make sure you don't manually set the state of the object to Modified; otherwise all properties are pushed (potentially emptying your object).
This is what I'm doing, using reflection. I think it's the best bad option.
var customer = CreateInstanceFromPrivateConstructor<Customer>();
SetPrivateProperty(p => p.ID, customer, 10);
myContext.Set<Customer>().Attach(customer);

// ...and all the above was just for this:
order.SetCustomer(customer);
myContext.SaveChanges();
The implementations of those two reflection methods aren't important. What is important:
EF uses reflection for lots of stuff
Database reads are much slower than these reflection calls (the benchmark I mentioned in the question shows how insignificant this perf hit is: about 10 seconds to create a million instances)
The domain stays fully DDD - you can't create an entity in a weird state, or create one without going through the constructor (I did that above, but I cheated for a specific case, just like EF does)
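For completeness, the two helpers might be sketched like this (simplified, without caching or error handling; the names mirror the usage above, but the bodies are illustrative):

using System;
using System.Linq.Expressions;
using System.Reflection;

public static class ReflectionHelpers
{
    // Invokes the non-public parameterless constructor.
    public static T CreateInstanceFromPrivateConstructor<T>() where T : class
    {
        return (T)Activator.CreateInstance(typeof(T), nonPublic: true);
    }

    // Sets a property that has a private/protected setter.
    public static void SetPrivateProperty<T, TValue>(
        Expression<Func<T, TValue>> propertySelector, T target, TValue value)
    {
        var propertyName = ((MemberExpression)propertySelector.Body).Member.Name;
        var property = typeof(T).GetProperty(propertyName,
            BindingFlags.Public | BindingFlags.NonPublic | BindingFlags.Instance);
        property.SetValue(target, value, null);
    }
}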
I am currently in the process of adding CodeContracts to my existing code base.
One thing that proves difficult is the usage of entities that are hydrated by NHibernate.
Assume this simple class:
public class Post
{
    private Blog _blog;

    [Obsolete("Required by NHibernate")]
    protected Post() { }

    public Post(Blog blog)
    {
        Contract.Requires(blog != null);
        _blog = blog;
    }

    public Blog Blog
    {
        get
        {
            Contract.Ensures(Contract.Result<Blog>() != null);
            return _blog;
        }
        set
        {
            Contract.Requires(value != null);
            _blog = value;
        }
    }

    [ContractInvariantMethod]
    private void Invariants()
    {
        Contract.Invariant(_blog != null);
    }
}
This class tries to protect the invariant _blog != null. However, it currently fails, because I could easily create an instance of Post by deriving from it and using the protected constructor. In that case, _blog would be null.
I am trying to change my code-base in a way that the invariants are indeed protected.
The protected constructor is at first sight needed by NHibernate to be able to create new instances, but there is a way around this requirement.
That approach basically uses FormatterServices.GetUninitializedObject. The important point is that this method doesn't run any constructors.
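To illustrate, a bare-bones example of what that call gives you (assuming the Post class from above):

using System.Runtime.Serialization;

// No constructor runs, public or protected; every field starts at its
// zeroed default, so _blog is null here.
var post = (Post)FormatterServices.GetUninitializedObject(typeof(Post));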
I could use this approach, and it would allow me to get rid of the protected constructor. The static checker of CodeContracts would then be happy and not report any more violations. But as soon as NHibernate tries to hydrate such entities, it will generate "invariant failed" exceptions, because it sets one property after the other, and every property setter executes code that verifies the invariants.
So, to make all this work, I will have to ensure that the entities are instantiated via their public constructor.
But how would I do this?
Daniel, if I'm not mistaken (it's been a while since I worked with NH), you can have a private constructor and NHibernate should still be fine creating your object.
Aside from that, why do you need to be 100% sure? Is it a requirement in some way, or are you just trying to cover all the bases?
I ask because, depending on the requirement, we could come up with another way of achieving it.
What you COULD do right now to provide that extra protection is wire up an IInterceptor class to make sure that after the load your class is still valid.
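A sketch of that suggestion, assuming you derive from NHibernate's EmptyInterceptor; the IValidatable interface and its ValidateInvariants method are placeholders for however your entities expose their invariant check:

using NHibernate;
using NHibernate.Type;

public interface IValidatable
{
    void ValidateInvariants();
}

public class InvariantCheckingInterceptor : EmptyInterceptor
{
    public override bool OnLoad(object entity, object id,
        object[] state, string[] propertyNames, IType[] types)
    {
        // Validate when NHibernate loads the entity; the state array also
        // carries the loaded property values if you need to inspect them.
        var validatable = entity as IValidatable;
        if (validatable != null)
            validatable.ValidateInvariants();
        return false; // we did not modify the state
    }
}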
I guess the bottom line is: if someone wants to mess with your domain and classes, they WILL do it no matter what you do. The effort to prevent all that stuff doesn't pay off in most cases.
Edit after clarification
If you use your objects to write to the database and your contracts are working, you can safely assume that the data will be written correctly, and therefore loaded correctly, as long as no one tampers with the database.
If you do change the database manually, you should either stop doing it and use your domain to do that (that's where the validation logic is), or test the database-changing process.
Still, if you really need that, you can hook up an IInterceptor that will validate your entity after the load, but I don't think you fix water flooding in from the street by making sure your house's pipes are fine.
Based on the discussion with tucaz, I came up with the following solution, which is rather simple at its core:
The heart of this solution is the class NHibernateActivator. It has two important purposes:
Create an instance of an object without invoking its constructors. It uses FormatterServices.GetUninitializedObject for this.
Prevent the triggering of "invariant failed" exceptions while NHibernate hydrates the instance. This is a two-step task: disable invariant checking before NHibernate starts hydrating, and re-enable it after NHibernate is done.
The first part can be performed directly after the instance has been created.
The second part uses the interface IPostLoadEventListener.
The class itself is pretty simple:
public class NHibernateActivator : INHibernateActivator, IPostLoadEventListener
{
    public bool CanInstantiate(Type type)
    {
        return !type.IsAbstract && !type.IsInterface &&
               !type.IsGenericTypeDefinition && !type.IsSealed;
    }

    public object Instantiate(Type type)
    {
        var instance = FormatterServices.GetUninitializedObject(type);
        instance.DisableInvariantEvaluation();
        return instance;
    }

    public void OnPostLoad(PostLoadEvent @event)
    {
        if (@event != null && @event.Entity != null)
            @event.Entity.EnableInvariantEvaluation(true);
    }
}
DisableInvariantEvaluation and EnableInvariantEvaluation are currently extension methods that use reflection to set a protected field; this field prevents invariants from being checked. Furthermore, EnableInvariantEvaluation will execute the method that checks the invariants if it is passed true:
public static class CodeContractsExtensions
{
    public static void DisableInvariantEvaluation(this object entity)
    {
        var evaluatingInvariantField = entity.GetType()
            .GetField("$evaluatingInvariant$",
                      BindingFlags.NonPublic | BindingFlags.Instance);
        if (evaluatingInvariantField == null)
            return;

        evaluatingInvariantField.SetValue(entity, true);
    }

    public static void EnableInvariantEvaluation(this object entity,
                                                 bool evaluateNow)
    {
        var evaluatingInvariantField = entity.GetType()
            .GetField("$evaluatingInvariant$",
                      BindingFlags.NonPublic | BindingFlags.Instance);
        if (evaluatingInvariantField == null)
            return;

        evaluatingInvariantField.SetValue(entity, false);

        if (!evaluateNow)
            return;

        var invariantMethod = entity.GetType()
            .GetMethod("$InvariantMethod$",
                       BindingFlags.NonPublic | BindingFlags.Instance);
        if (invariantMethod == null)
            return;

        invariantMethod.Invoke(entity, new object[0]);
    }
}
The rest is NHibernate plumbing:
We need to implement an interceptor that uses our activator.
We need to implement a reflection optimizer that returns our implementation of IInstantiationOptimizer. This implementation in turn uses our activator.
We need to implement a proxy factory that uses our activator.
We need to implement IProxyFactoryFactory to return our custom proxy factory.
We need to create a custom proxy validator that doesn't care whether the type has a default constructor.
We need to implement a bytecode provider that returns our reflection optimizer and proxy-factory factory.
NHibernateActivator needs to be registered as a listener using config.AppendListeners(ListenerType.PostLoad, ...); in ExposeConfiguration of Fluent NHibernate.
Our custom bytecode provider needs to be registered using Environment.BytecodeProvider.
Our custom interceptor needs to be registered using config.Interceptor = ...; (a rough wiring sketch of these registration steps follows below).
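A rough wiring sketch of those registration steps; ContractAwareBytecodeProvider and ContractAwareInterceptor are hypothetical names for the custom classes described above, and databaseConfig stands for whatever provider configuration you use:

var activator = new NHibernateActivator();

// Must be set before the configuration is built.
NHibernate.Cfg.Environment.BytecodeProvider = new ContractAwareBytecodeProvider(activator);

var sessionFactory = Fluently.Configure()
    .Database(databaseConfig)
    .Mappings(m => m.FluentMappings.AddFromAssemblyOf<Post>())
    .ExposeConfiguration(config =>
    {
        config.AppendListeners(NHibernate.Event.ListenerType.PostLoad,
            new NHibernate.Event.IPostLoadEventListener[] { activator });
        config.Interceptor = new ContractAwareInterceptor(activator);
    })
    .BuildSessionFactory();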
I will update this answer once I have had the chance to create a coherent package out of all this and put it on GitHub.
Furthermore, I want to get rid of the reflection and instead create a proxy type that can directly access the protected CodeContracts members.
For reference, the following blog posts were helpful in implementing the several NHibernate interfaces:
http://weblogs.asp.net/ricardoperes/archive/2012/06/19/implementing-an-interceptor-using-nhibernate-s-built-in-dynamic-proxy-generator.aspx
http://kozmic.net/2011/03/20/working-with-nhibernate-without-default-constructors/
Unfortunately, this currently fails for entities with composite keys, because the reflection optimizer is not used for them. This is actually a bug in NHibernate and I reported it here.
I have two tables in the database that are used for almost the same thing, but they don't have exactly the same structure.
Let's say I have one table for manual requests and another table for automatic requests. I have to load both tables into the same GridView, and I'm using custom business objects.
To illustrate the question, I'll call them TManualReqTable and TAutomaticReqTable.
TManualReqTable
- ID
- Field1
- Field2
- Field3
- Field4
and
TAutomaticReqTable
- ID
- Field1
- Field3
In the code, I'm using the same object for these two tables. I have an interface with all the properties of both tables, and I check whether each field exists when loading the data into the object.
But I'm thinking this should be created as two objects and one superclass with abstract methods.
What is your opinion about it?
I would create an interface IRequest that describes the fields & methods common to both, and then interfaces & classes for ManualRequest and AutomaticRequest that implement IRequest and also add the methods/fields unique to each of them.
You can use IRequest as the type for something that incorporates either one. When iterating through something that can include data from either, you can check whether each object implements the interfaces:
foreach (IRequest obj in RequestList) {
    // do stuff that uses the common interface
    if (obj is IManualRequest) {
        // do stuff specific to manual requests
    } else if (obj is IAutomaticRequest) {
        // likewise
    }
}
I follow a general rule to avoid creating base classes unless:
I've already designed or discovered sufficient commonality to give sufficient substance to the base class.
I have a use case for consuming the classes as the base class; if I don't have anything that can operate on the common functionality of the classes, there's little value in having a base class (the same functionality can be achieved through composition of a class implementing the common behaviors).
The requirements are sufficiently stable that I believe the base class abstraction will hold without significant modification in the future. Base classes become increasingly difficult to modify over time.
IMO, forget what the database looks like for a minute or two.
Think of how it should be structured as an object.
Think of how you would like to use that object. If you need to visualize, write some code for that yet non-existent object and tweak it until it looks elegant.
Think of how to make it happen.
Model-first development.
Hope it helps.
Well, there are a few assumptions I'm making here, so let me make them explicit...
Given:
this is primarily a difference in query/display logic
the display logic can already handle the nulls
the underlying object being represented is the same between the two items
there's a simple way of determining whether this was a 'manual' or an 'automatic' call
I would say that inheritance is not the way I would model it. Why? Because it's the same object, not two different kinds of object. You're basically just not displaying a couple of the fields, and therefore do not need to query them.
So I would probably try to accomplish something that makes the nature of the difference between the two clear. (Keep in mind that I intend this to show a way of organizing it so that it's clear; any particular implementation might have different needs. The main idea is to treat the differences as what they are: differences in what gets queried, based upon some sort of condition.)
public enum EQueryMode
{
    Manual,
    Automatic
}

public class FieldSpecification
{
    public string FieldName { get; set; }
    public bool[] QueryInMode { get; set; }

    public FieldSpecification(
        string parFieldName,
        bool parQueryInManual,
        bool parQueryInAutomatic)
    {
        FieldName = parFieldName;
        QueryInMode = new bool[] { parQueryInManual, parQueryInAutomatic };
    }
}

public class SomeKindOfRecord
{
    public List<FieldSpecification> FieldInfo =
        new List<FieldSpecification>()
        {
            new FieldSpecification("Field1", true, true),
            new FieldSpecification("Field2", true, false),
            new FieldSpecification("Field3", true, true),
            new FieldSpecification("Field4", true, false)
        };

    // ...

    public void PerformQuery(EQueryMode QueryMode)
    {
        List<string> FieldsToSelect =
            (from f in FieldInfo
             where f.QueryInMode[(int)QueryMode]
             select f.FieldName).ToList();

        Fetch(FieldsToSelect);
    }

    private void Fetch(List<string> Fields)
    {
        // SQL (or whatever) here
    }
}
Edit: wow, I can't seem to make a post today without having to correct my grammar! ;)
Consider the following chunk of a service:
public class ProductService : IProductService {
    private IProductRepository _productRepository;

    // Some initialization stuff

    public Product GetProduct(int id) {
        try {
            return _productRepository.GetProduct(id);
        } catch (Exception e) {
            // log, wrap, then throw
        }
    }
}
Let's consider a simple unit test:
[Test]
public void GetProduct_return_the_same_product_as_getProduct_on_productRepository() {
    var product = EntityGenerator.Product();
    _productRepositoryMock.Setup(pr => pr.GetProduct(product.Id)).Returns(product);

    Product returnedProduct = _productService.GetProduct(product.Id);

    Assert.AreEqual(product, returnedProduct);
    _productRepositoryMock.VerifyAll();
}
At first sight this test seems OK. But let's change our service method a little bit:
public Product GetProduct(int id) {
    try {
        var product = _productRepository.GetProduct(id);
        product.Owner = "totallyDifferentOwner";
        return product;
    } catch (Exception e) {
        // log, wrap, then throw
    }
}
How would you rewrite the given test so that it passes with the first service method and fails with the second one?
How do you handle this kind of simple scenario?
HINT 1: The given test is bad because product and returnedProduct are actually the same reference.
HINT 2: Implementing equality members (object.Equals) is not the solution.
HINT 3: For now, I create a clone of the Product instance (expectedProduct) with AutoMapper - but I don't like this solution.
HINT 4: I'm not testing that the SUT does NOT do something. I'm trying to test that the SUT DOES return the same object as is returned from the repository.
Personally, I wouldn't care about this. The test should make sure that the code is doing what you intend. It's very hard to test what code is not doing; I wouldn't bother in this case.
The test actually should just look like this:
[Test]
public void GetProduct_GetsProductFromRepository()
{
    var product = EntityGenerator.Product();
    _productRepositoryMock
        .Setup(pr => pr.GetProduct(product.Id))
        .Returns(product);

    Product returnedProduct = _productService.GetProduct(product.Id);

    Assert.AreSame(product, returnedProduct);
}
I mean, it's one line of code you are testing.
Why don't you mock the product as well as the productRepository?
If you mock the product using a strict mock, you will get a failure when the repository touches your product.
If this is a completely ridiculous idea, can you please explain why? Honestly, I'd like to learn.
One way of thinking of unit tests is as coded specifications. When you use the EntityGenerator to produce instances both for the Test and for the actual service, your test can be seen to express the requirement
The Service uses the EntityGenerator to produce Product instances.
This is what your test verifies. It's under-specified because it doesn't mention whether modifications are allowed. If we say
The Service uses the EntityGenerator to produce Product instances, which cannot be modified.
Then we get a hint as to the test changes needed to capture the error:
var product = EntityGenerator.Product();
// [ Change ]
var originalOwner = product.Owner;
// assuming owner is an immutable value object, like String
// [...] - record other properties as well.
Product returnedProduct = _productService.GetProduct(product.Id);
Assert.AreEqual(product, returnedProduct);
// [ Change ] verify the product is equivalent to the original spec
Assert.AreEqual(originalOwner, returnedProduct.Owner);
// [...] - test other properties as well
(The change is that we retrieve the owner from the freshly created Product and check the owner from the Product returned from the service.)
This embodies the fact that the Owner and other product properties must equal the original values from the generator. This may seem like I'm stating the obvious, since the code is pretty trivial, but it runs quite deep if you think in terms of requirement specifications.
I often "test my tests" by stipulating "if I change this line of code, tweak a critical constant or two, or inject a few code burps (e.g. changing != to ==), which test will capture the error?" Doing it for real finds if there is a test that captures the problem. Sometimes not, in which case it's time to look at the requirements implicit in the tests, and see how we can tighten them up. In projects with no real requirements capture/analysis this can be a useful tool to toughen up tests so they fail when unexpected changes occur.
Of course, you have to be pragmatic. You can't reasonably expect to handle all changes - some will simply be absurd and the program will crash. But logical changes like the Owner change are good candidates for test strengthening.
By dragging talk of requirements into a simple coding fix, some may think I've gone off the deep end, but thorough requirements help produce thorough tests, and if you have no requirements, then you need to work doubly hard to make sure your tests are thorough, since you're implicitly doing requirements capture as you write the tests.
EDIT: I'm answering this within the constraints set in the question. Given a free choice, I would suggest not using the EntityGenerator to create Product test instances, and instead creating them "by hand" and using an equality comparison. Or, more directly, compare the fields of the returned Product to specific (hard-coded) values in the test, again without using the EntityGenerator in the test.
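For instance, a version of the test pinned to hard-coded values might look like this (assuming Product's Id and Owner are settable here; adjust to however Product is constructed):

[Test]
public void GetProduct_returns_the_product_unmodified() {
    var product = new Product { Id = 42, Owner = "originalOwner" };
    _productRepositoryMock.Setup(pr => pr.GetProduct(42)).Returns(product);

    Product returnedProduct = _productService.GetProduct(42);

    // Hard-coded expectations: if the service mutates the product
    // (e.g. sets Owner to "totallyDifferentOwner"), this fails.
    Assert.AreEqual(42, returnedProduct.Id);
    Assert.AreEqual("originalOwner", returnedProduct.Owner);
}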
Q1: Don't make changes to code then write a test. Write a test first for the expected behavior. Then you can do whatever you want to the SUT.
Q2: You don't make the changes in your Product Gateway to change the owner of the product. You make the change in your model.
But if you insist, then listen to your tests. They are telling you that you have the possibility for products to be pulled from the gateway that have incorrect owners. Oops, looks like a business rule. It should be tested for in the model.
Also, you're using a mock. Why are you testing an implementation detail? The gateway only cares that _productRepository.GetProduct(id) returns a product, not what the product is.
If you test in this manner, you will be creating fragile tests. What if Product changes further? Now you have failing tests all over the place.
Your consumers of product (MODEL) are the only ones that care about the implementation of Product.
So your gateway test should look like this:
[Test]
public void GetProduct_return_the_same_product_as_getProduct_on_productRepository() {
    var product = EntityGenerator.Product();
    _productRepositoryMock.Setup(pr => pr.GetProduct(product.Id)).Returns(product);

    _productService.GetProduct(product.Id);

    _productRepositoryMock.VerifyAll();
}
Don't put business logic where it doesn't belong! And its corollary: don't test for business logic where there should be none.
If you really want to guarantee that the service method doesn't change the attributes of your products, you have two options:
Define the expected product attributes in your test and assert that the resulting product matches these values. (This appears to be what you're doing now by cloning the object.)
Mock the product and specify expectations to verify that the service method does not change its attributes.
This is how I'd do the latter with NMock:
// If you're not a purist, go ahead and verify all the attributes in a single
// test - Get_Product_Does_Not_Modify_The_Product_Returned_By_The_Repository
[Test]
public void Get_Product_Does_Not_Modify_Owner() {
    Product mockProduct = mockery.NewMock<Product>(MockStyle.Transparent);

    Stub.On(_productRepositoryMock)
        .Method("GetProduct")
        .Will(Return.Value(mockProduct));

    Expect.Never
        .On(mockProduct)
        .SetProperty("Owner");

    _productService.GetProduct(0);

    mockery.VerifyAllExpectationsHaveBeenMet();
}
My previous answer stands, though it assumes the members of the Product class that you care about are public and virtual. This is not likely if the class is a POCO / DTO.
What you're looking for might be rephrased as a way to compare the values (not the instances) of the objects.
One way is to compare whether they match when serialized. I did this recently for some code... I was replacing a long parameter list with a parameterized object. The code is crufty; I don't want to refactor it, though, as it's going away soon anyhow. So I just do this serialization comparison as a quick way to see if they have the same value.
I wrote some utility functions... Assert2.IsSameValue(expected,actual) which functions like NUnit's Assert.AreEqual(), except it serializes via JSON before comparing. Likewise, It2.IsSameSerialized() can be used to describe parameters passed to mocked calls in a manner similar to Moq.It.Is().
public class Assert2
{
    public static void IsSameValue(object expectedValue, object actualValue) {
        JavaScriptSerializer serializer = new JavaScriptSerializer();
        var expectedJSON = serializer.Serialize(expectedValue);
        var actualJSON = serializer.Serialize(actualValue);
        Assert.AreEqual(expectedJSON, actualJSON);
    }
}

public static class It2
{
    public static T IsSameSerialized<T>(T expectedRecord) {
        JavaScriptSerializer serializer = new JavaScriptSerializer();
        string expectedJSON = serializer.Serialize(expectedRecord);
        return Match<T>.Create(delegate(T actual) {
            string actualJSON = serializer.Serialize(actual);
            return expectedJSON == actualJSON;
        });
    }
}
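Usage might then look like this; the Save call is only an illustration of matching a parameter by value in a Moq verification:

// Value-based assertion instead of reference comparison:
Assert2.IsSameValue(expectedProduct, actualProduct);

// Value-based argument matching on a mocked call:
_repositoryMock.Verify(r => r.Save(It2.IsSameSerialized(expectedProduct)));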
Well, one way is to pass around a mock of Product rather than the actual product. Verify that nothing affects the product by making the mock strict. (I assume you are using Moq; it looks like you are.)
[Test]
public void GetProduct_return_the_same_product_as_getProduct_on_productRepository() {
    var product = new Mock<Product>(MockBehavior.Strict);
    product.SetupGet(p => p.Id).Returns(10);
    _productRepositoryMock.Setup(pr => pr.GetProduct(product.Object.Id)).Returns(product.Object);

    Product returnedProduct = _productService.GetProduct(product.Object.Id);

    Assert.AreEqual(product.Object, returnedProduct);
    _productRepositoryMock.VerifyAll();
    product.VerifyAll();
}
That said, I'm not sure you should be doing this. The test is doing too much, and might indicate there is another requirement somewhere. Find that requirement and create a second test. It might be that you just want to stop yourself from doing something stupid. I don't think that scales, because there are so many stupid things you can do. Trying to test for each would take too long.
I'm not sure the unit test should care about what a given method does not do. There are a zillion possible steps. Strictly speaking, the test "GetProduct(id) returns the same product as getProduct(id) on productRepository" is correct with or without the line product.Owner = "totallyDifferentOwner".
However, you can create a test (if it is required) "GetProduct(id) returns a product with the same content as getProduct(id) on productRepository", where you create a (probably deep) clone of one Product instance and then compare the contents of the two objects (so no object.Equals or object.ReferenceEquals).
Unit tests are not a guarantee of 100% bug-free and correct behaviour.
You can return an interface to the product instead of a concrete Product.
Such as
public IProduct GetProduct(int id)
{
    return _productRepository.GetProduct(id);
}
And then verify the Owner property was not set:
Dep<IProduct>().AssertWasNotCalled(p => p.Owner = Arg<string>.Is.Anything);
If you care about all the properties and/or methods, then there is probably a pre-existing way to do this with Rhino. Otherwise you can write an extension method that probably uses reflection, such as:
Dep<IProduct>().AssertNoPropertyOrMethodWasCalled()
Our behaviour specifications are like so:
[Specification]
public class When_product_service_has_get_product_called_with_any_id
    : ProductServiceSpecification
{
    private int _productId;
    private IProduct _actualProduct;

    [It]
    public void Should_return_the_expected_product()
    {
        this._actualProduct.Should().Be.EqualTo(Dep<IProduct>());
    }

    [It]
    public void Should_not_have_the_product_modified()
    {
        Dep<IProduct>().AssertWasNotCalled(p => p.Owner = Arg<string>.Is.Anything);
        // or write your own extension method:
        // Dep<IProduct>().AssertNoPropertyOrMethodWasCalled();
    }

    public override void GivenThat()
    {
        var randomGenerator = new RandomGenerator();
        this._productId = randomGenerator.Generate<int>();
        Stub<IProductRepository, IProduct>(r => r.GetProduct(this._productId));
    }

    public override void WhenIRun()
    {
        this._actualProduct = Sut.GetProduct(this._productId);
    }
}
Enjoy.
If all consumers of ProductService.GetProduct() expect the same result as if they had asked for it from the ProductRepository, why don't they just call ProductRepository.GetProduct() themselves?
It seems you have an unwanted Middle Man here.
There's not much value added by ProductService.GetProduct(). Dump it and have the client objects call ProductRepository.GetProduct() directly. Put the error handling and logging into ProductRepository.GetProduct() or the consumer code (possibly via AOP).
No more Middle Man, no more discrepancy problem, no more need to test for that discrepancy.
Let me state the problem as I see it.
You have a method and a test method. The test method validates the original method.
You change the system under test by altering the data. What you want to see is that the same unit test fails.
So in effect you're creating a test that verifies that the data in the data source matches the data in your fetched object AFTER the service layer returns it. That probably falls under the class of "integration test."
You don't have many good options in this case. Ultimately, you want to know that every property is the same as some passed-in property value. So you're forced to test each property independently. You could do this with reflection, but that won't work well for nested collections.
I think the real question is: why test your service model for the correctness of your data layer, and why write code in your service model just to break the test? Are you concerned that you or other users might set objects to invalid states in your service layer? In that case, you should change your contract so that Product.Owner is read-only.
You'd be better off writing a test against your data layer to ensure that it fetches data correctly, then use unit tests to check the business logic in your service layer. If you're interested in more details about this approach reply in the comments.
Having looked at all 4 hints provided, it seems that you want to make an object immutable at runtime. The C# language does not support that; it is possible only by refactoring the Product class itself. For the refactoring you can take an IReadonlyProduct approach and protect all setters from being called. This however still allows modification of elements of containers like List<> being returned by getters, and a ReadOnly collection won't help either. Only WPF lets you change immutability at runtime, with the Freezable class.
So I see the only proper way to make sure objects have the same contents is by comparing them. Probably the easiest way would be to add the [Serializable] attribute to all involved entities and do the serialization-with-comparison as suggested by Frank Schwieterman.