I want to test this controller action, but one branch is not covered by the Visual Studio code coverage tool.
public ActionResult Activate(int? id)
{
    if (id == null)
        return View("PageNotFound");

    var city = repository.GetCityById(id.Value);
    if (city == null)
        return View("PageNotFound");

    city.IsActive = !city.IsActive;
    if (TryUpdateModel(city))
    {
        repository.Save();
        return RedirectToAction("MyCities");
    }

    return View("PageNotFound");
}
In the code coverage report, the final return View("PageNotFound"); is not covered, because I cannot simulate the situation where TryUpdateModel returns false. TryUpdateModel returns false if the model cannot be updated. Can you help with this?
TryUpdateModel will return false if model validation fails (in which case you should not be showing a not-found page).
In situations like these there are all kinds of options. One of them is creating a stub that overrides the actual functionality of the method you want to control.
For example, you can declare TryUpdateModel as virtual. In your unit test, instead of working with the original class, you inherit from it and override TryUpdateModel to simply return false. All other functionality is retained as is.
Now you call the Activate method on the derived class, which has exactly the same functionality apart from the simulated TryUpdateModel, allowing you to test the desired use case without breaking your head over how to simulate a certain execution path.
This technique is not without drawbacks: it makes you declare the method as virtual purely for testing purposes, which prevents you from making it sealed or static. There are other, more advanced techniques (mock objects, isolation frameworks), but I think this is a good enough solution for this scenario.
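A minimal sketch of that idea: since the framework's TryUpdateModel overloads cannot easily be overridden directly, this routes the call through a small virtual wrapper that the test subclass overrides. The names CityController, ICityRepository, TryUpdateCity and FailingUpdateCityController are illustrative, not from the question.

// Production controller: route the framework call through a virtual seam
// and have Activate call TryUpdateCity(city) instead of TryUpdateModel(city).
public class CityController : Controller
{
    protected readonly ICityRepository repository;

    public CityController(ICityRepository repository)
    {
        this.repository = repository;
    }

    protected virtual bool TryUpdateCity(City city)
    {
        return TryUpdateModel(city); // real MVC helper in production
    }

    // ... Activate(int? id) as in the question, but using TryUpdateCity(city)
}

// Test double: identical controller, except the seam is forced to fail.
public class FailingUpdateCityController : CityController
{
    public FailingUpdateCityController(ICityRepository repository) : base(repository) { }

    protected override bool TryUpdateCity(City city)
    {
        return false; // simulate a failed model update
    }
}

The unit test then constructs FailingUpdateCityController, calls Activate with a valid id, and asserts that the returned ViewResult has ViewName "PageNotFound".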
You don't include code to show how the lifetime of the cargo dependency is handled, so I'm only guessing, but...
It should be possible to pass a mock or fake instance of this class to the controller as part of your test setup (if it isn't currently possible, refactor until it is).
With the mock in place you can isolate and test the behaviour as you want.
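For instance, a rough sketch with Moq, assuming the repository dependency is an interface such as ICityRepository that is injected into the controller (those names are assumptions, not from the question):

[Test]
public void Activate_Returns_PageNotFound_When_City_Does_Not_Exist()
{
    // Fake repository that reports "no such city".
    var repositoryMock = new Mock<ICityRepository>();
    repositoryMock.Setup(r => r.GetCityById(42)).Returns((City)null);

    var controller = new CityController(repositoryMock.Object);

    var result = controller.Activate(42) as ViewResult;

    Assert.IsNotNull(result);
    Assert.AreEqual("PageNotFound", result.ViewName);
}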
Aside: I disagree with the comment from #dreza... testing this sort of business logic is very important.
I would also caution against faking the implementation of TryUpdateModel as suggested by #Vitaliy, after all you want to test what happens if the cargo model is invalid (not just pretend that it is by providing a new version of core code).
OK,
I solved it. I removed the TryUpdateModel condition and call repository.Save() directly, because I do not pass a model as a parameter to the Activate method.
Related
I have an existing ASP.NET MVC app and wanted to create some unit tests, but I quickly ran into the problem below. Is there some way to use Moq and say "when the private method GetClientIP is run, return 'xxx'"?
Right now it uses parts of HttpContext that the unit test, of course, does not have.
public HttpResponseMessage Post([FromBody]TriageCase TriageCase)
{
    if (ModelState.IsValid)
    {
        // Get the IP address from the request
        TriageCase.ipAddress = this.GetClientIP(Request);
        _log.Info("IP Address = " + TriageCase.ipAddress);
    }
    // ... (rest of the action omitted)
}
public void Verify_Not_A_Suicide()
{
    TriageCaseRepository repository = new TriageCaseRepository();
    var controller = new TriageCasesController(repository);

    // This will not work because I must mock a private method in the controller?
    HttpResponseMessage result = controller.Post(new TriageCase());
}
There are several ways to do this, ranging from nasty, complicated ones to simple ones with trade-offs. Here are some:
You could make the GetClientIP method agnostic of HttpContext and then make it internal. Mark the controller assembly with InternalsVisibleTo, pointing it at the unit test assembly.
Making the method agnostic of HttpContext saves you from needing HttpContextBase (the abstract HTTP context class available from .NET 3.5 onwards to enable testing) and mocking it, etc. (by the way, you should think about that, especially for MVC). Pass the specific string as the parameter to the method, e.g. the specific server variable string.
Alternatively, you could make GetClientIP receive an HttpContextBase as a parameter and then make it internal, again marking the controller assembly with InternalsVisibleTo for the unit test assembly.
In your controller action you would call the method as this.GetClientIP(new HttpContextWrapper(HttpContext.Current)).
Your unit tests can then pass a mockable context, and what's more, you can set expectations on the context, such as whether the Request property was called or whether the IP-address-related server variable was read (not to mention straight verification of the IP address value).
You could use FakeItEasy or Microsoft Moles etc. to create private accessors for private methods; I normally refrain from that.
You could write an interface such as INetworkUtility which has a method that gives you the IP address. Your controller could use this interface, and it could be tested in isolation as well (see the sketch below).
You could have a public helper class to get the IP address, which can be unit tested.
As you can see, every solution involves a trade-off.
Getting the IP address from the Request object is an isolated piece of logic irrespective of MVC, Web API or ASP.NET Web Forms (though still web-specific), so it doesn't hurt to have it as a helper method or behind an interface.
Personally, I prefer the internal approach (since it is almost private) and it doesn't need much code change.
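A minimal sketch of the interface-based option mentioned above (INetworkUtility comes from the list; the NetworkUtility and FakeNetworkUtility implementations and the REMOTE_ADDR lookup are illustrative assumptions):

public interface INetworkUtility
{
    string GetClientIP();
}

// Production implementation: the only place that touches HttpContext.
public class NetworkUtility : INetworkUtility
{
    public string GetClientIP()
    {
        var context = new HttpContextWrapper(HttpContext.Current);
        return context.Request.ServerVariables["REMOTE_ADDR"];
    }
}

// Unit tests hand the controller a trivial fake instead.
public class FakeNetworkUtility : INetworkUtility
{
    public string GetClientIP()
    {
        return "127.0.0.1";
    }
}

The controller would take an INetworkUtility through its constructor (alongside the repository) and call it instead of the private GetClientIP helper.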
In proper TDD fashion you don't Unit Test "private" or "internal" methods. Only the public interface is used within Unit Tests. If you Unit Test a private/internal method then you are tying your unit tests too tightly to that specific implementation.
What should be done instead is use Dependency Injection to inject a class/interface that implements the "dependent" functionality that you are needing to unit test. This will help further modularize your code thus making it easier to maintain.
I generally don't try to test private methods; it just gets messy. Test the input and output of an action.
Either do the dependency injection thing or think about a "test double" or "accessor" class.
For a test double, make the private method protected, then inherit the controller and manually mock out inputs or outputs as required. I am not stating whether this is better or worse than IoC; I am saying this is another way to do it.
protected virtual string ExecuteIpnResponse(string url)
{
    var ipnClient = new WebClient();
    var ipnResponse = ipnClient.DownloadString(url);
    return ipnResponse;
}
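For example, the matching test double might be no more than this (PaymentController and the canned value are made-up names for illustration):

// Inherit the real controller and replace the seam with a canned value.
public class TestablePaymentController : PaymentController
{
    public string CannedIpnResponse { get; set; }

    protected override string ExecuteIpnResponse(string url)
    {
        // No network call in the unit test; return whatever the test set up.
        return CannedIpnResponse;
    }
}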
I did a post on this testing style recently for checking a paypal call.
http://viridissoftware.wordpress.com/2014/07/29/paypal-ipn-payment-notification-example-in-c/
In our MVC project we are attempting to make everything as generic as possible.
Because of this we want to have one authentication class/method which covers all our methods.
As an example, the following code is an MVC class which can be called from a client:
public class TestService
{
    public void Test()
    {
    }

    public int Test2(int i)
    {
        return i;
    }

    public void Test3(string i)
    {
    }
}
A customer of our webservice can use a service reference to get access to Test(), Test2() and Test3().
Now I'm searching for a class, model, interface or anything else which I can use to alter access to the methods (currently using the [PrincipalPermission] attribute) as well as alter the parameter values.
Example:
Customer A calls Test2(150)
The class/method checks whether Customer A has access to Test2. The class/method validates the user but notices that the user does not have access to 150; he only has access to 100. So the class/method sets the parameter to 100 and lets it continue on its journey.
Customer B calls Test()
The class/method checks whether Customer B has access to Test. After validation it shows that the user does not have access so it throws a SecurityException.
My question:
In what class, interface, attribute or whatever can I best do this?
(P.S. As an example I've only used authentication and parameter handling, but we plan to do a lot more at this stage.)
Edit
I notice most, if not all, answers assume I'm using ActionResults, so I'd like to state that this is used in a webservice where we provide our customers with information from our database. In no way will we come into contact with an ActionResult during requests to our webservice (at least, our customers won't).
Authentication can also be done through an aspect. The aspect-oriented paradigm is designed to honor these so-called cross-cutting concerns. Cross-cutting concerns implemented in the "old-fashioned" OO way make your business logic harder to read (like in Nick's example above) or, even worse, harder to understand, because they don't bring any "direct" benefit to your code:
public ActionResult YourAction(int id) {
    if (!CustomerCanAccess(id)) {
        return new HttpUnauthorizedResult();
    }
    /* the rest of your code */
}
The only thing you want here is /* the rest of your code */ and nothing more.
Stuff like logging, exception handling, caching and authorization for example could be implemented as an aspect and thus be maintained at one single point.
PostSharp is an example for an aspect-oriented C# framework. With PostSharp you could create a custom aspect and then annotate your method (like you did with the PrincipalPermissionAttribute). PostSharp will then weave your aspect code into your code during compilation. With the use of PostSharp aspects it would be possible to hook into the method invocation authenticating the calling user, changing method parameters or throw custom exceptions (See this blog post for a brief explanation how this is implemented).
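As a very rough sketch only (written from memory; the exact PostSharp types and namespaces differ between versions, and CustomerCanAccess is a placeholder for your own permission lookup):

[Serializable]
public class CheckAccessAspect : OnMethodBoundaryAspect
{
    public override void OnEntry(MethodExecutionArgs args)
    {
        // Placeholder check against the calling principal.
        if (!CustomerCanAccess(args.Method.Name))
        {
            throw new SecurityException("Access denied for " + args.Method.Name);
        }
    }

    private static bool CustomerCanAccess(string methodName)
    {
        return true; // stand-in for the real permission check
    }
}

The service method would then simply be annotated, e.g. [CheckAccessAspect] above Test2(int i).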
There isn't a built-in attribute that handles this scenario.
I find it's usually best to just do something like this:
public ActionResult YourAction(int id) {
    if (!CustomerCanAccess(id)) {
        return new HttpUnauthorizedResult();
    }
    /* the rest of your code */
}
This is as simple as it gets and easy to extend. I think you'll find that in many cases this is all you need. It also keeps your security assertions testable. You can write a unit test that simply calls the method (without any MVC plumbing), and checks whether the caller was authorized or not.
Note that if you are using ASP.Net Forms Authentication, you may also need to add:
Response.SuppressFormsAuthenticationRedirect = true;
if you don't want your users to be redirected to the login page when they attempt to access a resource for which they are not authorized.
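To illustrate the testability point above, a unit test for the check can stay free of MVC plumbing. This sketch assumes a YourController with the YourAction shown earlier and some way to make CustomerCanAccess(150) return false:

[Test]
public void YourAction_Returns_Unauthorized_When_Customer_Cannot_Access_Id()
{
    // Arrange the controller/user so that CustomerCanAccess(150) is false
    // (how depends on where that check gets its data, e.g. a fake permission service).
    var controller = new YourController();

    var result = controller.YourAction(150);

    Assert.IsInstanceOf<HttpUnauthorizedResult>(result);
}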
Here's how I've made my life simpler.
Never use simple values for action arguments. Always create a class that represents the action arguments. Even if there's only one value. I've found that I usually end up being able to re-use this class.
Make sure that all of the properties of this class are nullable (this keeps you from running into default values, such as 0 for integers, being automatically filled in) and that allowable ranges are defined (this makes sure you don't have to worry about negative numbers).
Once you have a class that represents your arguments, throwing a validator onto a property ends up being trivial.
The thing is that you're not passing a meaningless int. It has a purpose; it could be a product number, an account number, etc. Create a class that has that as a property (e.g. an AccountIdentifier class with a single property called Id). Then all you have to do is create a [CurrentUserCanAccessAccountId] attribute and place it on that property.
All your controller has to do is check whether or not ModelState.IsValid and you're done.
There are more elegant solutions out there, such as adding an action filter to the methods that would automatically redirect based on whether or not the user has access to a specific value for the parameter, but this will work rather well.
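A sketch of that shape, using the AccountIdentifier and CurrentUserCanAccessAccountId names from above (the validation body is a placeholder, not a real implementation):

public class AccountIdentifier
{
    [Required]
    [CurrentUserCanAccessAccountId]
    public int? Id { get; set; }
}

// A ValidationAttribute that checks the incoming value against the current user.
public class CurrentUserCanAccessAccountIdAttribute : ValidationAttribute
{
    public override bool IsValid(object value)
    {
        if (value == null)
            return false;

        // Placeholder: look up the current user's permitted account ids here.
        return CurrentUserCanAccess((int)value);
    }

    private static bool CurrentUserCanAccess(int accountId)
    {
        return true; // stand-in for the real permission check
    }
}

// The action then only has to check ModelState.
public ActionResult Test2(AccountIdentifier account)
{
    if (!ModelState.IsValid)
        return new HttpUnauthorizedResult();

    /* work with account.Id.Value */
    return View();
}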
First, just to say it, that your own methods are probably the most appropriate place to handle input values (adjust/discard) - and with the addition of Authorize and custom filter actions you can get most done, and the 'MVC way'. You could also go the 'OO way' and have your ITest interface, dispatcher etc. (you get more compiler support - but it's more coupled). However, let's just presume that you need something more complex...
I'm also assuming that your Test is a controller - and even if it isn't it can be made part of the 'pipeline' (or by mimicking what MVC does), And with MVC in mind...
One obvious solution would be to apply filters, or action filters, via the ActionFilterAttribute class (like Authorize etc.), by creating your own custom attribute and overriding OnActionExecuting etc.
And while that is fine, it's not going to help much with parameter manipulation, as you'd have to specify the code 'out of place' or somehow inject delegates or lambda expressions for each attribute.
It is basically an interceptor of some sort that you need - which allows you to attach your own processing. I've done something similar - but this guy did a great job explaining and implementing a solution - so instead of me repeating most of that I'd suggest just to read through that.
ASP.NET MVC controller action with Interceptor pattern (by Amar, I think)
What that does is to use existing MVC mechanisms for filters - but it exposes it via a different 'interface' - and I think it's much easier dealing with inputs. Basically, what you'd do is something like...
[ActionInterceptor(InterceptionOrder.Before, typeof(TestController), "Test1")]
public void OnTest1(InterceptorParasDictionary<string, object> paras, object result)
The parameters and changes are propagated, you have a context of a sort so you can terminate further execution - or let both methods do their work etc.
What's also interesting is the whole pattern, which is IoC of a sort: you define the intercepting code in another class/controller altogether, so instead of 'decorating' your own Test methods, the attributes and most of the work are placed outside.
And to change your parameters you'd do something like...
// I'd create/wrap my own User and make this w/ more support interfaces etc.
if (paras.Count > 0 && Context.User...)
{
    paras["id"] = 100;
}
And I'm guessing you could further change the implementation for your own case at hand.
That's just a rough design - I don't know if the code there is ready for production (it's for MVC3 but things are similar if not the same), but it's simplistic enough (when explained) and should work fine with some minor adjustments on your side.
I'm not sure if I understood your question, but it looks like a model binder can help.
Your model binder can have an interface injected that is responsible for determining if a user has permissions or not to a method, and in case it is needed it can change the value provided as a parameter.
ValueProviders, that implement the interface IValueProvider, may also be helpful in your case.
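A hedged sketch of the model-binder idea (IModelBinder and ModelBinderAttribute are the real MVC extension points; the clamping rule and MaxAccessibleIdFor are assumptions):

public class ClampToAccessibleIdBinder : IModelBinder
{
    public object BindModel(ControllerContext controllerContext, ModelBindingContext bindingContext)
    {
        var value = bindingContext.ValueProvider.GetValue(bindingContext.ModelName);
        if (value == null)
            return null;

        var requestedId = (int)value.ConvertTo(typeof(int));
        var user = controllerContext.HttpContext.User;

        // Clamp the requested value to what this user may access.
        return MaxAccessibleIdFor(user, requestedId);
    }

    private static int MaxAccessibleIdFor(System.Security.Principal.IPrincipal user, int requestedId)
    {
        return requestedId; // stand-in for the real permission lookup
    }
}

// Applied per parameter:
// public int Test2([ModelBinder(typeof(ClampToAccessibleIdBinder))] int i) { return i; }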
I believe the reason you haven't gotten a good enough answer is that there are a few ambiguities in your question.
First, you say you have an MVC class that is called from a client, and yet you say there are no ActionResults. So you would do well to clarify whether you are using the ASP.NET MVC framework, Web API, a WCF service or a SOAP (asmx) web service.
If my assumption is right and you are using the ASP.NET MVC framework, how are you defining web services without using action results, and how does your client 'call' this service?
I am not saying it is impossible or that what you may have done is wrong, but a bit more clarity (and code) would help.
My advice, if you are using ASP.NET MVC 3, would be to design it so that you use controllers and actions to create your web service. All you would need to do is return Json, XML or whatever else your client expects in an action result.
If you did this, then I would suggest you implement your business logic in a class much like the one you have posted in your question. This class should have no knowledge of you authentication or access level requirements and should concentrate solely on implementing the required business logic and producing correct results.
You could then write a custom action filter for your action methods which could inspect the action parameter and determine if the caller is authenticated and authorized to actually access the method. Please see here for how to write a custom action filter.
If you think this sounds like what you want and my assumptions are correct, let me know and I will be happy to post some code to capture what I have described above.
If I have gone off on a tangent, please clarify the questions and we might be one step closer to suggesting a solution.
p.s. An AOP 'way of thinking' is what you need. PostSharp as an AOP tool is great, but I doubt there is anything postsharp will do for you here that you cannot achieve with a slightly different architecture and proper use of the features of asp.net mvc.
First create an attribute by inheriting from ActionFilterAttribute (System.Web.Mvc), then override the OnActionExecuting method and check whether the user has permission.
This is the example:
public class CheckLoginAttribute : ActionFilterAttribute
{
    public override void OnActionExecuting(ActionExecutingContext filterContext)
    {
        if (!Membership.IsLoggedIn)
        {
            filterContext.Result = new RedirectToRouteResult(new RouteValueDictionary
            {
                { "area", "" },
                { "action", "login" },
                { "controller", "user" },
                { "redirecturl", filterContext.RequestContext.HttpContext.Request.RawUrl }
            });
        }
    }
}
Then use this attribute on every method where you need to check the user's permission:
public class TestService
{
    [CheckLogin]
    public void Test()
    {
    }

    [CheckLogin]
    public int Test2(int i)
    {
        return i;
    }

    [CheckLogin]
    public void Test3(string i)
    {
    }
}
Scenario
We're developing a new MVC web project and we're trying to adhere to the Skinny Controller pattern as described in this article http://codebetter.com/iancooper/2008/12/03/the-fat-controller/
As part of one of our actions we are retrieving some navigation data (Menu structure) from the Cache.
Problem
In order to maintain the skinny controller pattern we'd like to have the cache check call in the ViewModel, which we have tried and know works, using the following code.
var cachedCategories = (List<Category>)HttpContext.Current.Cache["Categories"];
if (cachedCategories == null) {
    cachedCategories = _service.GetCategories().ToList<Category>();
    HttpContext.Current.Cache["Categories"] = cachedCategories;
}
However when it comes to unit testing it we hit a problem. Since we are not directly passing the HttpContext into the ViewModel we have no idea how to go about mocking the HttpContext.
We're using Moq and while we have some options (one is to pass the context from the controller to the viewmodel at instantiation) those options require changing the code purely to make the tests work.
Does anyone have any suggestions?
Mocking the HttpContext is a huge amount of work, as it is one of the biggest objects you will see in all your life, so it is probably better not to mock it (http://volaresystems.com/Blog/post/Dont-mock-HttpContext.aspx).
Anyway, you could use the one in MvcContrib (http://www.codeplex.com/mvcContrib); the file MvcMockHelps shows how it is done.
Ultimately we choose to modify our code to allow for easier testing.
We accomplished this by passing the HttpContext to the ViewModel at instantiation as I mentioned in my original question.
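A sketch of that arrangement, with HttpContextBase injected so a test can substitute a fake (CategoriesViewModel, ICategoryService and the test note are illustrative assumptions):

public class CategoriesViewModel
{
    private readonly HttpContextBase _httpContext;
    private readonly ICategoryService _service;

    public CategoriesViewModel(HttpContextBase httpContext, ICategoryService service)
    {
        _httpContext = httpContext;
        _service = service;
    }

    public List<Category> Categories
    {
        get
        {
            var cachedCategories = (List<Category>)_httpContext.Cache["Categories"];
            if (cachedCategories == null)
            {
                cachedCategories = _service.GetCategories().ToList<Category>();
                _httpContext.Cache["Categories"] = cachedCategories;
            }
            return cachedCategories;
        }
    }
}

// The controller passes the real context:
// var viewModel = new CategoriesViewModel(new HttpContextWrapper(System.Web.HttpContext.Current), service);
// In a test, a Moq mock of HttpContextBase can have its Cache property return HttpRuntime.Cache,
// which is typically usable in a plain test process.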
I have a method that takes 5 parameters. This method is used to take a bunch of gathered information and send it to my server.
I am writing a unit test for this method, but I am hitting a bit of a snag. Several of the parameters are Lists<> of classes that take some doing to set up correctly. I have methods that set them up correctly in other units (production code units), but if I call those then I am kind of breaking the whole idea of a unit test (to only hit one "unit").
So.... what do I do? Do I duplicate the code that sets up these objects in my Test Project (in a helper method) or do I start calling production code to setup these objects?
Here is hypothetical example to try and make this clearer:
File: UserDemographics.cs
class UserDemographics
{
    // A bunch of user demographics here
    // and values that get set as a user gets added to a group.
}
File: UserGroups.cs
class UserGroups
{
    // A bunch of variables that change based on
    // the demographics of the users put into them.
    public void AddUserDemographicsToGroup(UserDemographics userDemographics)
    {
    }
}
File: UserSetupEvent.cs
class UserSetupEvent
{
    // An event to record the registering of a user.
    // Is highly dependent on UserDemographics and semi-dependent on UserGroups.
    public void SetupUserEvent(List<UserDemographics> userDemographics,
                               List<UserGroup> userGroups)
    {
    }
}
file: Communications.cs
class Communications
{
    public void SendUserInfoToServer(SendingEvent sendingEvent,
                                     List<UserDemographics> userDemographics,
                                     List<UserGroup> userGroups,
                                     List<UserSetupEvent> userSetupEvents)
    {
    }
}
So the question is: To unit test SendUserInfoToServer should I duplicate SetupUserEvent and AddUserDemographicsToGroup in my test project, or should I just call them to help me setup some "real" parameters?
You need test duplicates.
You're correct that unit tests should not call out to other methods, so you need to "fake" the dependencies. This can be done one of two ways:
Manually written test duplicates
Mocking
Test duplicates allow you to isolate your method under test from its dependencies.
I use Moq for mocking. Your unit test should send in "dummy" parameter values, or statically defined values you can use to test control flow:
public class MyTestObject
{
    public IEnumerable<Thingie> GetTestThingies()
    {
        yield return new Thingie() { id = 1 };
        yield return new Thingie() { id = 2 };
        yield return new Thingie() { id = 3 };
    }
}
If the method calls out to any other classes/methods, use mocks (aka "fakes"). Mocks are dynamically-generated objects based on virtual methods or interfaces:
Mock<IRepository> repMock = new Mock<IRepository>();
MyPage obj = new MyPage(); // let's pretend this is ASP.NET
obj.IRepository = repMock.Object;
repMock.Setup(r => r.FindById(1)).Returns(new MyTestObject().GetTestThingies().First());
var thingie = obj.GetThingie(1);
The Mock object above uses the Setup method to return the same result for the call defined in the r => r.FindById(1) lambda. This is called an expectation. It allows you to test only the code in your method, without actually calling out to any dependent classes.
Once you've set up your test this way, you can use Moq's features to confirm that everything happened the way it was supposed to:
//did we get the instance we expected?
Assert.AreEqual(thingie.id, new MyTestObject().GetTestThingies().First().id);
//was a method called?
repMock.Verify(r => r.FindById(1));
The Verify method allows you to test whether a method was called. Together, these facilities allow you focus your unit tests on a single method at a time.
Sounds like your units are too tightly coupled (at least from a quick view of your problem). What makes me curious, for instance, is the fact that your UserGroups takes a UserDemographics and your UserSetupEvent takes a list of UserGroup including a list of UserDemographics (again). Shouldn't the List<UserGroup> already include the UserDemographics passed in its constructor, or am I misunderstanding it?
Somehow it seems like a design problem of your class model which in turn makes it difficult to unit test. Difficult setup procedures are a code smell indicating high coupling :)
Bringing in interfaces is what I would prefer. Then you can mock the used classes and you don't have to duplicate code (which violates the Don't Repeat Yourself principle) and you don't have to use the original implementations in the unit tests for the Communications class.
You should use mock objects, basically your unit test should probably just generate some fake data that looks like real data instead of calling into the real code, this way you can isolate the test and have predictable test results.
You can make use of a tool called NBuilder to generate test data. It has a very good fluent interface and is very easy to use. If your tests need to build lists this works even better. You can read more about it here.
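For example, a quick NBuilder sketch for the lists in the question (exact fluent syntax can vary slightly between NBuilder versions):

// Generate populated test lists without calling the production setup code.
var demographics = Builder<UserDemographics>.CreateListOfSize(10).Build().ToList();
var groups = Builder<UserGroup>.CreateListOfSize(3).Build().ToList();
var setupEvents = Builder<UserSetupEvent>.CreateListOfSize(5).Build().ToList();

// These can be passed straight into SendUserInfoToServer.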
Consider a following chunk of service:
public class ProductService : IProductService {
    private IProductRepository _productRepository;
    // Some initialization stuff
    public Product GetProduct(int id) {
        try {
            return _productRepository.GetProduct(id);
        } catch (Exception e) {
            // log, wrap then throw
            throw;
        }
    }
}
Let's consider a simple unit test:
[Test]
public void GetProduct_return_the_same_product_as_getProduct_on_productRepository() {
    var product = EntityGenerator.Product();
    _productRepositoryMock.Setup(pr => pr.GetProduct(product.Id)).Returns(product);

    Product returnedProduct = _productService.GetProduct(product.Id);

    Assert.AreEqual(product, returnedProduct);
    _productRepositoryMock.VerifyAll();
}
At first it seems that this test is ok. But let's change our service method a little bit:
public Product GetProduct(int id) {
    try {
        var product = _productRepository.GetProduct(id);
        product.Owner = "totallyDifferentOwner";
        return product;
    } catch (Exception e) {
        // log, wrap then throw
        throw;
    }
}
How to rewrite a given test that it'd pass with the first service method and fail with a second one?
How do you handle this kind of simple scenarios?
HINT 1: The given test is bad because product and returnedProduct are actually the same reference.
HINT 2: Implementing equality members (object.equals) is not the solution.
HINT 3: As for now, I create a clone of the Product instance (expectedProduct) with AutoMapper - but I don't like this solution.
HINT 4: I'm not testing that the SUT does NOT do something. I'm trying to test that the SUT DOES return the same object as is returned from the repository.
Personally, I wouldn't care about this. The test should make sure that the code is doing what you intend. It's very hard to test what code is not doing; I wouldn't bother in this case.
The test actually should just look like this:
[Test]
public void GetProduct_GetsProductFromRepository()
{
    var product = EntityGenerator.Product();
    _productRepositoryMock
        .Setup(pr => pr.GetProduct(product.Id))
        .Returns(product);

    Product returnedProduct = _productService.GetProduct(product.Id);

    Assert.AreSame(product, returnedProduct);
}
I mean, it's one line of code you are testing.
Why don't you mock the product as well as the productRepository?
If you mock the product using a strict mock, you will get a failure when the repository touches your product.
If this is a completely ridiculous idea, can you please explain why? Honestly, I'd like to learn.
One way of thinking of unit tests is as coded specifications. When you use the EntityGenerator to produce instances both for the Test and for the actual service, your test can be seen to express the requirement
The Service uses the EntityGenerator to produce Product instances.
This is what your test verifies. It's underspecified because it doesn't mention if modifications are allowed or not. If we say
The Service uses the EntityGenerator to produce Product instances, which cannot be modified.
Then we get a hint as to the test changes needed to capture the error:
var product = EntityGenerator.Product();
// [ Change ]
var originalOwner = product.Owner;
// assuming owner is an immutable value object, like String
// [...] - record other properties as well.
Product returnedProduct = _productService.GetProduct(product.Id);
Assert.AreEqual(product, returnedProduct);
// [ Change ] verify the product is equivalent to the original spec
Assert.AreEqual(originalOwner, returnedProduct.Owner);
// [...] - test other properties as well
(The change is that we retrieve the owner from the freshly created Product and check the owner from the Product returned from the service.)
This embodies the fact that the Owner and other product properties must equal the original values from the generator. This may seem like I'm stating the obvious, since the code is pretty trivial, but it runs quite deep if you think in terms of requirement specifications.
I often "test my tests" by stipulating "if I change this line of code, tweak a critical constant or two, or inject a few code burps (e.g. changing != to ==), which test will capture the error?" Doing it for real finds if there is a test that captures the problem. Sometimes not, in which case it's time to look at the requirements implicit in the tests, and see how we can tighten them up. In projects with no real requirements capture/analysis this can be a useful tool to toughen up tests so they fail when unexpected changes occur.
Of course, you have to be pragmatic. You can't reasonably expect to handle all changes - some will simply be absurd and the program will crash. But logical changes like the Owner change are good candidates for test strengthening.
By dragging talk of requirements into a simple coding fix, some may think I've gone off the deep end, but thorough requirements help produce thorough tests, and if you have no requirements, then you need to work doubly hard to make sure your tests are thorough, since you're implicitly doing requirements capture as you write the tests.
EDIT: I'm answering this from within the contraints set in the question. Given a free choice, I would suggest not using the EntityGenerator to create Product test instances, and instead create them "by hand" and use an equality comparison. Or more direct, compare the fields of the returned Product to specific (hard-coded) values in the test, again, without using the EntityGenerator in the test.
Q1: Don't make changes to code then write a test. Write a test first for the expected behavior. Then you can do whatever you want to the SUT.
Q2: You don't make the changes in your Product Gateway to change the owner of the product. You make the change in your model.
But if you insist, then listen to your tests. They are telling you that products could be pulled from the gateway with incorrect owners. Oops, looks like a business rule; it should be tested for in the model.
Also, you're using a mock. Why are you testing an implementation detail? The gateway only cares that _productRepository.GetProduct(id) returns a product, not what the product is.
If you test in this manner you will be creating fragile tests. What if Product changes further? Then you have failing tests all over the place.
Your consumers of product (MODEL) are the only ones that care about the implementation of Product.
So your gateway test should look like this:
[Test]
public void GetProduct_return_the_same_product_as_getProduct_on_productRepository() {
    var product = EntityGenerator.Product();
    _productRepositoryMock.Setup(pr => pr.GetProduct(product.Id)).Returns(product);

    _productService.GetProduct(product.Id);

    _productRepositoryMock.VerifyAll();
}
Don't put business logic where it doesn't belong! And its corollary: don't test for business logic where there should be none.
If you really want to guarantee that the service method doesn't change the attributes of your products, you have two options:
Define the expected product attributes in your test and assert that the resulting product matches these values. (This appears to be what you're doing now by cloning the object.)
Mock the product and specify expectations to verify that the service method does not change its attributes.
This is how I'd do the latter with NMock:
// If you're not a purist, go ahead and verify all the attributes in a single
// test - Get_Product_Does_Not_Modify_The_Product_Returned_By_The_Repository
[Test]
public void Get_Product_Does_Not_Modify_Owner() {
    Product mockProduct = mockery.NewMock<Product>(MockStyle.Transparent);

    Stub.On(_productRepositoryMock)
        .Method("GetProduct")
        .Will(Return.Value(mockProduct));

    Expect.Never
        .On(mockProduct)
        .SetProperty("Owner");

    _productService.GetProduct(0);

    mockery.VerifyAllExpectationsHaveBeenMet();
}
My previous answer stands, though it assumes the members of the Product class that you care about are public and virtual. This is not likely if the class is a POCO / DTO.
What you're looking for might be rephrased as a way to compare the values (not the instances) of the objects.
One way to compare is to see if they match when serialized. I did this recently for some code... I was replacing a long parameter list with a parameterized object. The code is crufty; I don't want to refactor it, though, as it's going away soon anyhow. So I just do this serialization comparison as a quick way to see if they have the same value.
I wrote some utility functions... Assert2.IsSameValue(expected,actual) which functions like NUnit's Assert.AreEqual(), except it serializes via JSON before comparing. Likewise, It2.IsSameSerialized() can be used to describe parameters passed to mocked calls in a manner similar to Moq.It.Is().
public class Assert2
{
    public static void IsSameValue(object expectedValue, object actualValue) {
        // JavaScriptSerializer lives in System.Web.Script.Serialization (System.Web.Extensions).
        JavaScriptSerializer serializer = new JavaScriptSerializer();
        var expectedJSON = serializer.Serialize(expectedValue);
        var actualJSON = serializer.Serialize(actualValue);
        Assert.AreEqual(expectedJSON, actualJSON);
    }
}

public static class It2
{
    public static T IsSameSerialized<T>(T expectedRecord) {
        JavaScriptSerializer serializer = new JavaScriptSerializer();
        string expectedJSON = serializer.Serialize(expectedRecord);
        return Match<T>.Create(delegate(T actual) {
            string actualJSON = serializer.Serialize(actual);
            return expectedJSON == actualJSON;
        });
    }
}
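Usage then looks roughly like this (the _dependencyMock and Send names are placeholders for whatever call takes the product):

// Compare by value rather than by reference:
Assert2.IsSameValue(expectedProduct, returnedProduct);

// Or match an argument by value when verifying a Moq call:
_dependencyMock.Verify(d => d.Send(It2.IsSameSerialized(expectedProduct)));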
Well, one way is to pass around a mock of the product rather than the actual product, and verify that nothing touches the product by making it strict. (I assume you are using Moq; it looks like you are.)
[Test]
public void GetProduct_return_the_same_product_as_getProduct_on_productRepository() {
    var product = new Mock<Product>(MockBehavior.Strict);
    _productRepositoryMock.Setup(pr => pr.GetProduct(1)).Returns(product.Object);

    Product returnedProduct = _productService.GetProduct(1);

    Assert.AreEqual(product.Object, returnedProduct);
    _productRepositoryMock.VerifyAll();
    product.VerifyAll();
}
That said, I'm not sure you should be doing this. The test is doing too much, and that might indicate there is another requirement somewhere. Find that requirement and create a second test. It might be that you just want to stop yourself from doing something stupid, but I don't think that scales, because there are so many stupid things you can do. Trying to test for each would take too long.
I'm not sure the unit test should care about what a given method does not do; there are a zillion possibilities. Strictly speaking, the test "GetProduct(id) returns the same product as getProduct(id) on productRepository" is correct with or without the line product.Owner = "totallyDifferentOwner".
However, you can create a test (if it is required) "GetProduct(id) returns a product with the same content as getProduct(id) on productRepository", where you create a (probably deep) clone of one product instance and then compare the contents of the two objects (so no object.Equals or object.ReferenceEquals).
Unit tests are not a guarantee of 100% bug-free and correct behaviour.
You can return an interface to product instead of a concrete Product.
Such as
public IProduct GetProduct(int id)
{
    return _productRepository.GetProduct(id);
}
And then verify the Owner property was not set:
Dep<IProduct>().AssertWasNotCalled(p => p.Owner = Arg.Is.Anything);
If you care about all the properties and/or methods, then there is probably a pre-existing way to do this with Rhino. Otherwise you can write an extension method that probably uses reflection, such as:
Dep<IProduct>().AssertNoPropertyOrMethodWasCalled()
Our behaviour specifications are like so:
[Specification]
public class When_product_service_has_get_product_called_with_any_id
    : ProductServiceSpecification
{
    private int _productId;
    private IProduct _actualProduct;

    [It]
    public void Should_return_the_expected_product()
    {
        this._actualProduct.Should().Be.EqualTo(Dep<IProduct>());
    }

    [It]
    public void Should_not_have_the_product_modified()
    {
        Dep<IProduct>().AssertWasNotCalled(p => p.Owner = Arg<string>.Is.Anything);
        // or write your own extension method:
        // Dep<IProduct>().AssertNoPropertyOrMethodWasCalled();
    }

    public override void GivenThat()
    {
        var randomGenerator = new RandomGenerator();
        this._productId = randomGenerator.Generate<int>();
        Stub<IProductRepository, IProduct>(r => r.GetProduct(this._productId));
    }

    public override void WhenIRun()
    {
        this._actualProduct = Sut.GetProduct(this._productId);
    }
}
Enjoy.
If all consumers of ProductService.GetProduct() expect the same result as if they had asked it from the ProductRepository, why don't they just call ProductRepository.GetProduct() itself ?
It seems you have an unwanted Middle Man here.
There's not much value added to ProductService.GetProduct(). Dump it and have the client objects call ProductRepository.GetProduct() directly. Put the error handling and logging into ProductRepository.GetProduct() or the consumer code (possibly via AOP).
No more Middle Man, no more discrepancy problem, no more need to test for that discrepancy.
Let me state the problem as I see it.
You have a method and a test method. The test method validates the original method.
You change the system under test by altering the data. What you want to see is that the same unit test fails.
So in effect you're creating a test that verifies that the data in the data source matches the data in your fetched object AFTER the service layer returns it. That probably falls under the class of "integration test."
You don't have many good options in this case. Ultimately, you want to know that every property is the same as some passed-in property value. So you're forced to test each property independently. You could do this with reflection, but that won't work well for nested collections.
I think the real question is: why test your service model for the correctness of your data layer, and why write code in your service model just to break the test? Are you concerned that you or other users might set objects to invalid states in your service layer? In that case you should change your contract so that the Product.Owner is readonly.
You'd be better off writing a test against your data layer to ensure that it fetches data correctly, then use unit tests to check the business logic in your service layer. If you're interested in more details about this approach reply in the comments.
Having looked at all 4 hints provided, it seems that you want to make an object immutable at runtime. The C# language does not support that; it is possible only by refactoring the Product class itself. For the refactoring you can take an IReadonlyProduct approach and protect all setters from being called. This however still allows modification of elements of containers like List<> being returned by getters; a ReadOnly collection won't help either. Only WPF lets you change immutability at runtime, with the Freezable class.
So I see the only proper way to make sure objects have the same contents is by comparing them. Probably the easiest way would be to add the [Serializable] attribute to all involved entities and do the serialization-with-comparison as suggested by Frank Schwieterman.