I have a website built in ASP.NET MVC that uses an XML file as the backbone of its menu system. The nodes in that XML file list the menu items, their display text, and the controller name and action method that is called. I came up with an idea: if I could call a controller method given the controller name and action method name, I could programmatically test all of the methods to see if they throw errors. I don't want to display the page in the browser; I just want to run the controller method, and any errors will be logged in my database. I would like to do it with something like the code below from a Test model class, but I can't get this to compile yet. RenderAction is a method that works in my View pages, but in the model class the compiler says that method doesn't exist. Can somebody guide me on how to get this to work?
HtmlHelper oHtmlHelper = new HtmlHelper(oViewContext, oViewDataContainer);
oHtmlHelper.RenderAction(sController, sAction);
The whole point of testing is to drive you to better, more maintainable code. Part of that is unit testing, which by nature tests discrete units of functionality. What you have here is a situation that is absolutely begging to be refactored. When things are difficult to test, that's a sure sign that your design is flawed.
Instead of having all this logic in an action, and then attempting to test the whole action rendering logic to determine whether it's working or not, break the logic out into a helper class. Then you should be able to easily test whether a method in that class returns a good result.
I'm not sure what you mean by "complex unit testing structure", as unit tests are inherently simple and pretty much everything you need is baked into .NET (although better options often exist as third-party libraries). Regardless, all you need to get started unit testing is to create a unit test project, add a reference to the project you want to test, and then add a simple class like:
[TestClass]
public class MyAwesomeTests
{
    [TestMethod]
    public void TestSomething()
    {
        ...
    }
}
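For instance, once you've broken the logic out into a helper class as suggested above, a first test might look something like this (MenuHelper and FormatMenuText are hypothetical names standing in for whatever class and method you extract):

using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class MenuHelperTests
{
    [TestMethod]
    public void FormatMenuText_TrimsWhitespace()
    {
        // MenuHelper/FormatMenuText are placeholders for the helper class
        // you extract your controller logic into.
        var helper = new MenuHelper();

        string result = helper.FormatMenuText("  Home  ");

        Assert.AreEqual("Home", result);
    }
}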
I have an existing ASP.NET MVC app and wanted to create some unit tests, but I quickly ran into the problem below. Is there some way to use Moq and say "when the private method GetClientIP is run, return 'xxx'"?
Right now it uses parts of HttpContext, which the unit test of course does not have.
public HttpResponseMessage Post([FromBody]TriageCase TriageCase)
{
    if (ModelState.IsValid)
    {
        // Get the IP address from the request
        TriageCase.ipAddress = this.GetClientIP(Request);
        _log.Info("IP Address = " + TriageCase.ipAddress);
    }

    // (rest of the action omitted; it ultimately returns a response)
    return Request.CreateResponse(HttpStatusCode.OK, TriageCase);
}
public void Verify_Not_A_Suicide()
{
    TriageCaseRepository repository = new TriageCaseRepository();
    var controller = new TriageCasesController(repository);

    // This will not work because I must mock a private method in the controller?
    HttpResponseMessage result = controller.Post(new TriageCase());
}
There are several ways to do this, ranging from nastily complicated ones to simple ones with trade-offs. Here are some:
You could make the GetClientIP method agnostic of HttpContext and then make it internal. Mark the controller assembly with InternalsVisibleTo, giving it the unit test assembly name.
Making the method agnostic of HttpContext saves you from having to use HttpContextBase (the abstract HTTP context class available from .NET 3.5 onwards to enable testing) and provide mocks for it (by the way, you should think about HttpContextBase anyway, especially for MVC). Instead, pass the specific string as the parameter to the method, e.g. the specific server variable value.
Alternatively, you could make GetClientIP receive HttpContextBase as a parameter and then make it internal, again marking the controller assembly with InternalsVisibleTo for the unit test assembly.
In your controller action, you would then call this method as this.GetClientIP(new HttpContextWrapper(HttpContext.Current)).
Your unit tests can pass in a mockable context. What's more, you can set expectations on the context, such as whether the Request property was accessed or whether the IP-address-related server variable was read (not to mention verifying the IP address value itself).
You could use FakeItEasy, Microsoft Moles, etc. to create private accessors for private methods. I normally refrain from that.
You could write an interface, say INetworkUtility, with a method that gives you the IP address. Your controller could depend on this interface, and it could be tested in isolation as well.
You could have a public helper class to get the IP address, which can be unit tested.
As you can see, every solution involves some trade-off.
Getting the IP address from the Request object is an isolated piece of logic irrespective of MVC, Web API, or ASP.NET Web Forms (still web specific, though), so it doesn't hurt to have it as a helper method or an interface-based helper.
Personally, I prefer the internal approach, since it is almost as encapsulated as private and doesn't need much code change. A rough sketch follows below.
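As a minimal sketch of that approach, under the assumption that the IP lookup is reduced to plain string inputs (the assembly name, header handling, and parsing rule are all illustrative):

using System.Web.Http;

// In the web project's AssemblyInfo.cs (assembly name is illustrative):
// [assembly: System.Runtime.CompilerServices.InternalsVisibleTo("MyProject.Tests")]

public class TriageCasesController : ApiController
{
    // Internal and HttpContext-agnostic: the action passes in the raw values,
    // so a unit test can call this directly with plain strings.
    internal string GetClientIP(string forwardedFor, string remoteAddress)
    {
        if (!string.IsNullOrEmpty(forwardedFor))
        {
            // Take the first entry of a comma-separated X-Forwarded-For list.
            return forwardedFor.Split(',')[0].Trim();
        }

        return remoteAddress;
    }
}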
In proper TDD fashion you don't Unit Test "private" or "internal" methods. Only the public interface is used within Unit Tests. If you Unit Test a private/internal method then you are tying your unit tests too tightly to that specific implementation.
What should be done instead is use Dependency Injection to inject a class/interface that implements the "dependent" functionality that you are needing to unit test. This will help further modularize your code thus making it easier to maintain.
In general I don't try to test private methods. It just gets messy; test the input and output of an action...
Either do the Dependency Injection thing or think about a "test double" or "accessor" class...
For a test double, make the private method protected virtual, then inherit the controller and manually mock out inputs or outputs as required. I am not stating whether this is better or worse than IoC; I am just saying this is another way to do it.
protected virtual string ExecuteIpnResponse(string url)
{
    var ipnClient = new WebClient();
    var ipnResponse = ipnClient.DownloadString(url);
    return ipnResponse;
}
I did a post on this testing style recently for checking a paypal call.
http://viridissoftware.wordpress.com/2014/07/29/paypal-ipn-payment-notification-example-in-c/
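To make the test-double idea concrete, here is a rough sketch (PaymentController and the overriding test class are hypothetical names; adapt them to your own controller):

using System.Net;
using System.Web.Mvc;

// Hypothetical controller that uses the protected virtual method above.
public class PaymentController : Controller
{
    public ActionResult ConfirmPayment(string url)
    {
        string response = ExecuteIpnResponse(url);
        return Content(response);
    }

    protected virtual string ExecuteIpnResponse(string url)
    {
        var ipnClient = new WebClient();
        return ipnClient.DownloadString(url);
    }
}

// Test double: inherits the controller and replaces the network call.
public class PaymentControllerDouble : PaymentController
{
    public string CannedResponse { get; set; }

    protected override string ExecuteIpnResponse(string url)
    {
        return CannedResponse; // no real HTTP request during the test
    }
}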
In our MVC project we are attempting to make everything as generic as possible.
Because of this we want to have one authentication class/method which covers all our methods.
As an example, the following code is an MVC class which can be called from a client:
// Note: in real code the class would need a name different from its methods
// (a member cannot share its enclosing type's name); kept as-is for the example.
public class Test
{
    public void Test()
    {
    }

    public int Test2(int i)
    {
        return i;
    }

    public void Test3(string i)
    {
    }
}
A customer of our webservice can use a service reference to get access to Test(), Test2() and Test3().
Now I'm searching for a class, model, interface or anything else which I can use to alter the access to the method (currently using the [PrincipalPermission] attribute) as well as alter the parameter value.
Example:
Customer A calls Test2(150)
The class/method checks whether Customer A has access to Test2. It validates the user but notices that the user does not have access to 150; he only has access to 100. So the class/method sets the parameter to 100 and lets the call continue on its journey.
Customer B calls Test()
The class/method checks whether Customer B has access to Test. Validation shows that the user does not have access, so it throws a SecurityException.
My question:
In what class, interface, attribute or whatever can I best do this?
(P.S. As an example I've only used authentication and parameter handling, but we plan to do a lot more at this stage.)
Edit
I notice most, if not all, answers assume I'm using ActionResults. So I'd like to state that this is used in a web service where we provide our customers with information from our database. In no way will we come in contact with an ActionResult during requests to our web service (at least, our customers won't).
Authentication can also be done through an aspect. The aspect-oriented paradigm is designed to handle these so-called cross-cutting concerns. Cross-cutting concerns implemented in the "old-fashioned" OO way make your business logic harder to read (as in Nick's example below) or, even worse, harder to understand, because they don't bring any "direct" benefit to your code:
public ActionResult YourAction(int id) {
    if (!CustomerCanAccess(id)) {
        return new HttpUnauthorizedResult();
    }

    /* the rest of your code */
}
The only thing you want here is /* the rest of your code */ and nothing more.
Stuff like logging, exception handling, caching and authorization for example could be implemented as an aspect and thus be maintained at one single point.
PostSharp is an example of an aspect-oriented C# framework. With PostSharp you could create a custom aspect and then annotate your method with it (like you did with the PrincipalPermissionAttribute). PostSharp will then weave your aspect code into your code during compilation. With PostSharp aspects it is possible to hook into the method invocation, authenticate the calling user, change method parameters, or throw custom exceptions (see this blog post for a brief explanation of how this is implemented).
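As a rough illustration only: the following assumes PostSharp's MethodInterceptionAspect (exact base classes and serialization attributes vary between PostSharp versions), and the permission check and clamping rule are entirely hypothetical.

using System;
using PostSharp.Aspects;

// Hypothetical aspect: authorizes the caller and clamps the first argument.
[Serializable]
public class LimitAccessAspect : MethodInterceptionAspect
{
    public override void OnInvoke(MethodInterceptionArgs args)
    {
        // Replace with your real authorization check.
        if (!CurrentUserMayCall(args.Method.Name))
        {
            throw new System.Security.SecurityException("Access denied.");
        }

        // Example of altering a parameter before the call proceeds.
        if (args.Arguments.Count > 0 && args.Arguments[0] is int value && value > 100)
        {
            args.Arguments.SetArgument(0, 100);
        }

        args.Proceed(); // invoke the original method with the (possibly modified) arguments
    }

    private static bool CurrentUserMayCall(string methodName)
    {
        // Placeholder for your own permission lookup.
        return true;
    }
}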
There isn't a built-in attribute that handles this scenario.
I find it's usually best to just do something like this:
public ActionResult YourAction(int id) {
    if (!CustomerCanAccess(id)) {
        return new HttpUnauthorizedResult();
    }

    /* the rest of your code */
}
This is as simple as it gets and easy to extend. I think you'll find that in many cases this is all you need. It also keeps your security assertions testable. You can write a unit test that simply calls the method (without any MVC plumbing), and checks whether the caller was authorized or not.
Note that if you are using ASP.Net Forms Authentication, you may also need to add:
Response.SuppressFormsAuthenticationRedirect = true;
if you don't want your users to be redirected to the login page when they attempt to access a resource for which they are not authorized.
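As for CustomerCanAccess itself, it is whatever check fits your application; a minimal sketch of one possible shape (the _permissionRepository lookup is hypothetical):

// Hypothetical permission check used by the action above.
private bool CustomerCanAccess(int id)
{
    // However you store permissions; shown here as a simple repository lookup.
    string userName = User.Identity.Name;
    return _permissionRepository.UserCanAccess(userName, id);
}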
Here's how I've made my life simpler.
Never use simple values for action arguments. Always create a class that represents the action arguments, even if there's only one value. I've found that I usually end up being able to re-use this class.
Make sure that all of the properties of this class are nullable (this keeps you from running into default values, such as 0 for integers, being filled in automatically) and that allowable ranges are defined (so you don't have to worry about negative numbers).
Once you have a class that represents your arguments, putting a validator onto a property ends up being trivial.
The thing is that you're not passing a meaningless int. It has a purpose; it could be a product number, an account number, etc. Create a class that has that as a property (e.g. an AccountIdentifier class with a single property called Id). Then all you have to do is create a [CurrentUserCanAccessAccountId] attribute and place it on that property.
All your controller has to do is check whether ModelState.IsValid and you're done.
There are more elegant solutions out there, such as adding an action filter to the methods that would automatically redirect based on whether or not the user has access to a specific value of the parameter, but this will work rather well.
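A minimal sketch of that idea, assuming standard DataAnnotations (the AccountIdentifier class and the PermissionService lookup are illustrative placeholders):

using System.ComponentModel.DataAnnotations;

public class AccountIdentifier
{
    [Required]
    [CurrentUserCanAccessAccountId]
    public int? Id { get; set; }   // nullable, per the advice above
}

// Hypothetical validation attribute; plug in your own permission lookup.
public class CurrentUserCanAccessAccountIdAttribute : ValidationAttribute
{
    protected override ValidationResult IsValid(object value, ValidationContext validationContext)
    {
        if (value == null)
        {
            return ValidationResult.Success; // [Required] handles missing values
        }

        int id = (int)value;
        bool allowed = PermissionService.CurrentUserCanAccess(id); // hypothetical service

        return allowed
            ? ValidationResult.Success
            : new ValidationResult("You do not have access to this account.");
    }
}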
First, just to say it: your own methods are probably the most appropriate place to handle input values (adjust/discard), and with the addition of Authorize and custom filter attributes you can get most of it done the 'MVC way'. You could also go the 'OO way' and have your ITest interface, dispatcher, etc. (you get more compiler support, but it's more coupled). However, let's presume that you need something more complex...
I'm also assuming that your Test is a controller, and even if it isn't, it can be made part of the 'pipeline' (or you can mimic what MVC does). So, with MVC in mind...
One obvious solution would be to apply filters, or action filters, via the ActionFilterAttribute class (like Authorize etc.), by creating your own custom attribute and overriding OnActionExecuting and so on.
And while that is fine, it's not going to help much with parameter manipulation, as you'd have to specify the code 'out of place' or somehow inject delegates or lambda expressions for each attribute.
What you basically need is an interceptor of some sort, which allows you to attach your own processing. I've done something similar, but this guy did a great job explaining and implementing a solution, so instead of repeating most of that I'd suggest just reading through it:
ASP.NET MVC controller action with Interceptor pattern (by Amar, I think)
What that does is use the existing MVC mechanisms for filters, but it exposes them via a different 'interface', and I think it's much easier to deal with inputs that way. Basically, you'd do something like...
[ActionInterceptor(InterceptionOrder.Before, typeof(TestController), "Test1")]
public void OnTest1(InterceptorParasDictionary<string, object> paras, object result)
The parameters and changes are propagated, and you have a context of a sort, so you can terminate further execution or let both methods do their work, etc.
What's also interesting is the whole pattern, which is IoC of a sort: you define the intercepting code in another class/controller altogether, so instead of 'decorating' your own Test methods, the attributes and most of the work are placed outside.
And to change your parameters you'd do something like...
// I'd create/wrap my own User and back this with more support interfaces etc.
if (paras.Count > 0 && Context.User /* ...your permission check... */)
{
    paras["id"] = 100; // clamp the value the intercepted action will receive
}
And I'm guessing you could further adapt the implementation to your case at hand.
That's just a rough design. I don't know if the code there is ready for production (it's for MVC 3, but things are similar if not the same), but it's simple enough once explained and should work fine with some minor adjustments on your side.
I'm not sure if I understood your question, but it looks like a model binder could help.
Your model binder can have an interface injected that is responsible for determining whether a user has permission to call a method, and if needed it can change the value provided as a parameter.
Value providers, which implement the IValueProvider interface, may also be helpful in your case.
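For instance, a minimal sketch of such a binder (IPermissionChecker and the clamping rule are hypothetical; adapt them to your own permission model):

using System.Web.Mvc;

// Hypothetical binder that clamps an int parameter to what the user may access.
public class LimitedIntModelBinder : DefaultModelBinder
{
    private readonly IPermissionChecker _permissions; // hypothetical interface

    public LimitedIntModelBinder(IPermissionChecker permissions)
    {
        _permissions = permissions;
    }

    public override object BindModel(ControllerContext controllerContext, ModelBindingContext bindingContext)
    {
        object value = base.BindModel(controllerContext, bindingContext);

        if (value is int requested)
        {
            // Ask the injected checker for the highest value this user may use.
            return _permissions.MaxAllowedValue(controllerContext.HttpContext.User, requested);
        }

        return value;
    }
}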
I believe the reason you haven't gotten a good enough answer is that there are a few ambiguities in your question.
First, you say you have an MVC class that is called from a client, and yet you say there are no ActionResults. So it would help to clarify whether you are using the ASP.NET MVC framework, Web API, a WCF service, or a SOAP (asmx) web service.
If my assumption is right and you are using the ASP.NET MVC framework, how are you defining web services without using action results, and how does your client 'call' this service?
I am not saying it is impossible or that what you may have done is wrong, but a bit more clarity (and code) would help.
My advice, if you are using ASP.NET MVC 3, would be to design it so that you use controllers and actions to create your web service. All you would need to do is return JSON, XML, or whatever else your client expects from an action result.
If you did this, then I would suggest you implement your business logic in a class much like the one you have posted in your question. This class should have no knowledge of your authentication or access-level requirements and should concentrate solely on implementing the required business logic and producing correct results.
You could then write a custom action filter for your action methods which could inspect the action parameter and determine if the caller is authenticated and authorized to actually access the method. Please see here for how to write a custom action filter.
If you think this sounds like what you want and my assumptions are correct, let me know and I will be happy to post some code to capture what I have described above.
If I have gone off on a tangent, please clarify the questions and we might be one step closer to suggesting a solution.
p.s. An AOP 'way of thinking' is what you need. PostSharp as an AOP tool is great, but I doubt there is anything postsharp will do for you here that you cannot achieve with a slightly different architecture and proper use of the features of asp.net mvc.
First, create an attribute by inheriting from ActionFilterAttribute (System.Web.Mvc).
Then override the OnActionExecuting method and check whether the user has permission.
Here is an example:
public class CheckLoginAttribute : ActionFilterAttribute
{
    public override void OnActionExecuting(ActionExecutingContext filterContext)
    {
        // Membership.IsLoggedIn stands in for whatever authentication check you use.
        if (!Membership.IsLoggedIn)
        {
            filterContext.Result = new RedirectToRouteResult(new RouteValueDictionary
            {
                { "area", "" },
                { "action", "login" },
                { "controller", "user" },
                { "redirecturl", filterContext.RequestContext.HttpContext.Request.RawUrl }
            });
        }
    }
}
Then use this attribute on every method where you need to check the user's permission:
public class Test
{
    [CheckLogin]
    public void Test()
    {
    }

    [CheckLogin]
    public int Test2(int i)
    {
        return i;
    }

    [CheckLogin]
    public void Test3(string i)
    {
    }
}
I want to test the controller action below, but one branch is not covered according to the Visual Studio code coverage tool.
public ActionResult Activate(int? id)
{
    if (id == null)
        return View("PageNotFound");

    var city = repository.GetCityById(id.Value);
    if (city == null)
        return View("PageNotFound");

    city.IsActive = !city.IsActive;

    if (TryUpdateModel(city))
    {
        repository.Save();
        return RedirectToAction("MyCities");
    }

    return View("PageNotFound"); // <-- this line is never covered
}
In the code coverage report, the final return View("PageNotFound"); is not covered, because I cannot simulate TryUpdateModel returning false. TryUpdateModel returns false if the model cannot be updated. Can you help with this?
TryUpdateModel will return false if model validation fails (in which case you should not be showing a not-found page).
In situations like these there are all kinds of options available to you. One of them is creating a stub that overrides the actual functionality of the method you want to control.
For example, you can wrap the call to TryUpdateModel in a protected virtual method of your own. In your unit test, instead of working with the original class, you inherit from it and override that method to simply return false. All other functionality is retained as is.
Now you call the Activate method on the derived class, which has exactly the same functionality apart from the stubbed TryUpdateModel call, allowing you to test the desired path without breaking your head over how to force a certain execution branch.
This technique is not without drawbacks: it makes you declare the method as virtual only for testing purposes, thus preventing you from making it sealed or static. There are other, more advanced techniques (mock objects, isolation frameworks), but I think this is a good enough solution for this scenario.
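A minimal sketch of that stub, assuming the action lives in a hypothetical CitiesController and the framework call is wrapped as described (constructor and repository plumbing omitted):

// Hypothetical controller containing the Activate action above.
public class CitiesController : Controller
{
    // The action calls this wrapper instead of TryUpdateModel directly.
    protected virtual bool TryUpdateCity(City city)
    {
        return TryUpdateModel(city);
    }

    // ... Activate(int? id) as shown in the question, using TryUpdateCity(city) ...
}

// Test-only subclass that forces the "update failed" branch.
public class CitiesControllerStub : CitiesController
{
    protected override bool TryUpdateCity(City city)
    {
        return false;
    }
}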
You don't include code showing how the lifetime of the repository dependency is handled, so I'm only guessing, but...
It should be possible to pass a mock or fake instance of this class to the controller as part of your test setup (if it isn't currently possible, refactor until it is).
With the mock in place you can isolate and test the behaviour as you want.
Aside: I disagree with the comment from @dreza... testing this sort of business logic is very important.
I would also caution against faking the implementation of TryUpdateModel as suggested by @Vitaliy; after all, you want to test what happens when the model is invalid (not just pretend that it is by providing a new version of core code).
OK, I solved it. I deleted the TryUpdateModel condition and now call repository.Save() directly, because I was not passing a model as a parameter to the Activate method.
I'm attempting to learn TDD but having a hard time getting my head around what / how to test with a little app I need to write.
The (simplified somewhat) spec for the app is as follows:
It needs to take from the user the location of a csv file, the location of a word document mailmerge template and an output location.
The app will then read the csv file and for each row, merge the data with the word template and output to the folder specified.
Just to be clear, I'm not asking how I would go about coding such an app as I'm confident that I know how to do it if I just went ahead and started. But if I wanted to do it using TDD, some guidance on the tests to write would be appreciated as I'm guessing I don't want to be testing reading a real csv file, or testing the 3rd party component that does the merge or converts to pdf.
I think just some general TDD guidance would be a great help!
I'd start out by thinking of scenarios for each step of your program, starting with failure cases and their expected behavior:
User provides a null csv file location (throws an ArgumentNullException).
User provides an empty csv file location (throws an ArgumentException).
The csv file specified by the user doesn't exist (whatever you think is appropriate).
Next, write a test for each of those scenarios and make sure it fails. Next, write just enough code to make the test pass. That's pretty easy for some of these conditions, because the code that makes your test pass is often the final code:
public class Merger {
    public void Merge(string csvPath, string templatePath, string outputPath) {
        if (csvPath == null) { throw new ArgumentNullException("csvPath"); }
    }
}
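For reference, the matching first test for the null-path scenario might look something like this (written in the same NUnit style as the stubbed example further below; names are illustrative):

[Test]
[ExpectedException(typeof(ArgumentNullException))]
public void Fails_When_Csv_Path_Is_Null() {
    // Only the csv path matters for this scenario; the other arguments are dummies.
    Merger merger = new Merger();
    merger.Merge(null, "templatePath", "outputPath");
}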
After that, move into standard scenarios:
The specified csv file has one line (merge should be called once, output written to the expected location).
The specified csv file has two lines (merge should be called twice, output written to the expected location).
The output file's name conforms to your expectations (whatever those are).
And so on. Once you get to this second phase, you'll start to identify behavior you want to stub and mock. For example, checking whether a file exists or not - .NET doesn't make it easy to stub this, so you'll probably need to create an adapter interface and class that will let you isolate your program from the actual file system (to say nothing of actual CSV files and mail-merge templates). There are other techniques available, but this method is fairly standard:
public interface IFileFinder { bool FileExists(string path); }

// Concrete implementation to use in production
public class FileFinder : IFileFinder {
    public bool FileExists(string path) { return File.Exists(path); }
}

public class Merger {
    IFileFinder finder;

    public Merger(IFileFinder finder) { this.finder = finder; }
}
In tests, you'll pass in a stub implementation:
[Test]
[ExpectedException(typeof(FileNotFoundException))]
public void Fails_When_Csv_File_Does_Not_Exist() {
    IFileFinder finder = mockery.NewMock<IFileFinder>();
    Merger merger = new Merger(finder);

    Stub.On(finder).Method("FileExists").Will(Return.Value(false));

    merger.Merge("csvPath", "templatePath", "outputPath");
}
Simple general guidance:
Write the unit tests first. At the beginning they all fail.
Then go into the class under test and write code until the tests related to each method pass.
Do this for every public method of your types.
By writing unit tests you actually specify the requirements, just in another form: easy-to-read code.
Looking at it from another angle: when you receive a new black-boxed class and unit tests for it, you should be able to read the unit tests to see what the class does and how it behaves.
To read more about unit testing I recommend a very good book: The Art of Unit Testing.
Here are a couple links to articles on StackOverflow regarding TDD for more details and examples:
Link1
Link2
To be able to unit test you need to decouple the class from any dependencies so you can effectively just test the class itself.
To do this you'll need to inject any dependencies into the class. You would typically do this by passing in an object that implements the dependency interface, into your class in the constructor.
Mocking frameworks are used to create a mock instance of your dependency that your class can call during the test. You define the mock to behave in the same way as your dependency would and then verify its state at the end of the test.
I would recommend having a play with Rhino Mocks and going through the examples in the documentation to get a feel for how this works.
http://ayende.com/projects/rhino-mocks.aspx
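As a small illustration of the idea, here is a sketch using Rhino Mocks' AAA syntax with a hypothetical IEmailSender/Notifier pair (none of these types come from the question):

using Rhino.Mocks;

public interface IEmailSender
{
    void Send(string to, string body);
}

public class Notifier
{
    private readonly IEmailSender _sender;

    public Notifier(IEmailSender sender) { _sender = sender; }

    public void NotifyUser(string email)
    {
        _sender.Send(email, "Hello!");
    }
}

[Test]
public void NotifyUser_Sends_An_Email()
{
    // Create a mock of the dependency and inject it through the constructor.
    var sender = MockRepository.GenerateMock<IEmailSender>();
    var notifier = new Notifier(sender);

    notifier.NotifyUser("someone@example.com");

    // Verify the collaboration happened as expected.
    sender.AssertWasCalled(s => s.Send("someone@example.com", "Hello!"));
}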
I have a method that takes 5 parameters. This method is used to take a bunch of gathered information and send it to my server.
I am writing a unit test for this method, but I am hitting a bit of a snag. Several of the parameters are List<>s of classes that take some doing to set up correctly. I have methods that set them up correctly in other units (production code units), but if I call those then I am kind of breaking the whole idea of a unit test (to only hit one "unit").
So... what do I do? Do I duplicate the code that sets up these objects in my test project (in a helper method), or do I start calling production code to set up these objects?
Here is a hypothetical example to try and make this clearer:
File: UserDemographics.cs
class UserDemographics
{
    // A bunch of user demographics here,
    // and values that get set as a user gets added to a group.
}
File: UserGroups.cs
class UserGroups
{
    // A bunch of variables that change based on
    // the demographics of the users put into them.
    public void AddUserDemographicsToGroup(UserDemographics userDemographics)
    {
    }
}
File: UserSetupEvent.cs
class UserSetupEvent
{
    // An event to record the registering of a user.
    // Highly dependent on UserDemographics and somewhat dependent on UserGroups.
    public void SetupUserEvent(List<UserDemographics> userDemographics,
                               List<UserGroup> userGroups)
    {
    }
}
File: Communications.cs
class Communications
{
    public void SendUserInfoToServer(SendingEvent sendingEvent,
                                     List<UserDemographics> userDemographics,
                                     List<UserGroup> userGroups,
                                     List<UserSetupEvent> userSetupEvents)
    {
    }
}
So the question is: to unit test SendUserInfoToServer, should I duplicate SetupUserEvent and AddUserDemographicsToGroup in my test project, or should I just call them to help me set up some "real" parameters?
You need test doubles.
You're correct that unit tests should not call out to other methods, so you need to "fake" the dependencies. This can be done in one of two ways:
Manually written test doubles
Mocking
Test doubles allow you to isolate your method under test from its dependencies.
I use Moq for mocking. Your unit test should send in "dummy" parameter values, or statically defined values you can use to test control flow:
public class MyTestObject
{
    public IEnumerable<Thingie> GetTestThingies()
    {
        yield return new Thingie() { id = 1 };
        yield return new Thingie() { id = 2 };
        yield return new Thingie() { id = 3 };
    }
}
If the method calls out to any other classes/methods, use mocks (aka "fakes"). Mocks are dynamically-generated objects based on virtual methods or interfaces:
Mock<IRepository> repMock = new Mock<IRepository>();
MyPage obj = new MyPage(); // let's pretend this is ASP.NET
obj.IRepository = repMock.Object;
repMock.Setup(r => r.FindById(1)).Returns(new MyTestObject().GetTestThingies().First());

var thingie = obj.GetThingie(1);
The Mock object above uses the Setup method to return the same result for the call defined by the r => r.FindById(1) lambda. This is called an expectation. It allows you to test only the code in your method, without actually calling out to any dependent classes.
Once you've set up your test this way, you can use Moq's features to confirm that everything happened the way it was supposed to:
// did we get the instance we expected?
Assert.AreEqual(thingie.id, new MyTestObject().GetTestThingies().First().id);

// was a method called?
repMock.Verify(r => r.FindById(1));
The Verify method allows you to test whether a method was called. Together, these facilities allow you focus your unit tests on a single method at a time.
Sounds like your units are too tightly coupled (at least from a quick look at your problem). What makes me curious, for instance, is the fact that your UserGroups takes a UserDemographics and your UserSetupEvent takes a list of UserGroup plus a list of UserDemographics (again). Shouldn't the List<UserGroup> already include the UserDemographics passed into it, or am I misunderstanding it?
Somehow it seems like a design problem in your class model, which in turn makes it difficult to unit test. Difficult setup procedures are a code smell indicating high coupling :)
Bringing in interfaces is what I would prefer. Then you can mock the used classes and you don't have to duplicate code (which violates the Don't Repeat Yourself principle) and you don't have to use the original implementations in the unit tests for the Communications class.
You should use mock objects, basically your unit test should probably just generate some fake data that looks like real data instead of calling into the real code, this way you can isolate the test and have predictable test results.
You can make use of a tool called NBuilder to generate test data. It has a very good fluent interface and is very easy to use. If your tests need to build lists, this works even better. You can read more about it here.
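As a quick illustration of what that looks like (this assumes NBuilder's Builder<T> fluent API; the Thingie type is reused from the earlier example):

using FizzWare.NBuilder;

// Build a list of 10 Thingie objects with sequentially generated property values.
IList<Thingie> thingies = Builder<Thingie>
    .CreateListOfSize(10)
    .All()
        .With(t => t.id = 42)   // override a property for every item if needed
    .Build();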