How should I assert in a WebAPI integration test? - C#

I'm writing integration tests for controller classes in my .NET Core WebAPI, and honestly, I'm not sure if Assert.Equal(HttpStatusCode.OK, response.StatusCode); is enough. I'm using my production Startup class, with only the database swapped out (an InMemory database).
This is one of my tests, named ShouldAddUpdateAndDeleteUser. It's basically:
Send POST request with specific input.
Assert that the POST worked. Because the POST responds with the created object, I can assert on every property and check that the Id is greater than 0.
Change the returned object a little bit and send an UPDATE request.
Send a GET request and assert that the update worked.
Send a DELETE request.
Send a GET request and assert that the result is null.
Basically, I test ADD, UPDATE, DELETE, GET (when the item exists), and GET (when the item doesn't exist).
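To make the flow concrete, here's a trimmed-down sketch of roughly what that test looks like. The UserDto type, the /api/users route, the returned status codes, and the WebApplicationFactory setup are illustrative placeholders, not the exact code from my project:

[Fact]
public async Task ShouldAddUpdateAndDeleteUser()
{
    var client = _factory.CreateClient(); // WebApplicationFactory<Startup> with the InMemory DB registered

    // POST - create, assert on the returned object
    var createResponse = await client.PostAsJsonAsync("/api/users", new UserDto { Name = "Alice" });
    Assert.Equal(HttpStatusCode.Created, createResponse.StatusCode);
    var created = await createResponse.Content.ReadFromJsonAsync<UserDto>();
    Assert.True(created.Id > 0);
    Assert.Equal("Alice", created.Name);

    // PUT - update
    created.Name = "Alice Updated";
    var updateResponse = await client.PutAsJsonAsync($"/api/users/{created.Id}", created);
    Assert.Equal(HttpStatusCode.NoContent, updateResponse.StatusCode);

    // GET - verify the update actually happened
    var fetched = await client.GetFromJsonAsync<UserDto>($"/api/users/{created.Id}");
    Assert.Equal("Alice Updated", fetched.Name);

    // DELETE, then GET again
    var deleteResponse = await client.DeleteAsync($"/api/users/{created.Id}");
    Assert.Equal(HttpStatusCode.NoContent, deleteResponse.StatusCode);
    var getAfterDelete = await client.GetAsync($"/api/users/{created.Id}");
    Assert.Equal(HttpStatusCode.NotFound, getAfterDelete.StatusCode); // or assert a null body, depending on what the GET action returns
}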
I have a few questions:
Is it good practice to have such tests? It does do a lot, but it's not a unit test after all. If it fails, I can be pretty specific about which part didn't work. Worst case, I can debug it pretty quickly.
Is it an integration test, a functional test, or neither?
If this is wrong, how can I test DELETE or UPDATE? I'm kind of forced to call a GET request after them (they return NoContent).
(Side note: It's not the only test I have for that controller, obviously. I also have tests for GET all as well as BadRequest requests.)

Is it good practice to have such tests? It does do a lot, but it's not a unit test after all. If it fails, I can be pretty specific about which part didn't work. Worst case, I can debug it pretty quickly.
Testing your endpoints full stack does have value, but make sure to keep the testing pyramid in mind. It's valuable to have a small number of full-stack integration tests to validate the most important pathways/use cases in your application. Having too many, however, can add a lot of extra time to your test runs, which could impact release times if those tests are tied to releasing (which they should be).
Is it an integration test, a functional test, or neither?
Without seeing more of the code I would guess it's probably a bit of both. Functionally, you want to test the output of the endpoints and it seems as though you're testing some level of component integration if you're mocking out data in memory.
If this is wrong, how can I test DELETE or UPDATE? I'm kind of forced to call a GET request after them (they return NoContent).
Like I said earlier, a few full-stack tests are fine in my opinion, as long as they cover major workflows in your application. I would suggest one or two integration tests that hit the database to make sure your application can stand up full stack, but don't bundle that requirement into your functional testing. In my experience you get far more value from modular component tests plus a couple of end-to-end functional tests.
I think the test flow you mentioned makes sense. I've done similar patterns throughout my services at work. Sometimes you can't realistically test one component without testing another like your example about getting after a delete.
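One alternative worth mentioning, if asserting via a second HTTP call feels indirect: because the test host uses an InMemory database, you can also resolve the DbContext from the test server's service provider and assert against it directly after the DELETE. A rough sketch, assuming EF Core's InMemory provider; AppDbContext, the Users set, and userId are placeholders for whatever the app registers and the id created earlier in the test:

var deleteResponse = await client.DeleteAsync($"/api/users/{userId}");
Assert.Equal(HttpStatusCode.NoContent, deleteResponse.StatusCode);

using (var scope = factory.Services.CreateScope())
{
    var db = scope.ServiceProvider.GetRequiredService<AppDbContext>();
    Assert.Null(await db.Users.FindAsync(userId)); // the row really is gone
}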


Should invalid cases be in one test? [duplicate]

What Makes a Good Unit Test? says that a test should test only one thing. What is the benefit from that?
Wouldn't it be better to write somewhat bigger tests that exercise bigger blocks of code? Investigating a test failure is hard anyway, and I don't see how smaller tests help with that.
Edit: The word unit is not that important. Let's say I consider the unit a bit bigger. That is not the issue here. The real question is: why write one or more tests for every method, when a few tests that cover many methods would be simpler?
An example: a list class. Why should I make separate tests for addition and removal? One test that first adds and then removes sounds simpler.
Testing only one thing will isolate that one thing and prove whether or not it works. That is the idea with unit testing. Nothing wrong with tests that test more than one thing, but that is generally referred to as integration testing. They both have merits, based on context.
To use an example: if your bedside lamp doesn't turn on, and you replace the bulb and swap the extension cord, you don't know which change fixed the issue. You should have done unit testing and separated your concerns to isolate the problem.
Update: I read this article and the linked articles and I gotta say, I'm shook: https://techbeacon.com/app-dev-testing/no-1-unit-testing-best-practice-stop-doing-it
There is substance here and it gets the mental juices flowing. But I reckon that it jibes with the original sentiment that we should be doing the tests that context demands. I suppose I'd just append that to say that we need to get closer to knowing for sure the benefits of different kinds of testing on a system, and rely less on a cross-your-fingers approach. Measurements/quantifications and all that good stuff.
I'm going to go out on a limb here, and say that the "only test one thing" advice isn't actually as helpful as it's sometimes made out to be.
Sometimes tests take a certain amount of setting up. Sometimes they may even take a certain amount of time to set up (in the real world). Often you can test two actions in one go.
Pro: all that setup only occurs once. Your tests after the first action prove that the world is how you expect it to be before the second action. Less code, faster test run.
Con: if either action fails, you'll get the same result: the same test will fail. You'll have less information about where the problem is than if you only had a single action in each of two tests.
In reality, I find that the "con" here isn't much of a problem. The stack trace often narrows things down very quickly, and I'm going to make sure I fix the code anyway.
A slightly different "con" here is that it breaks the "write a new test, make it pass, refactor" cycle. I view that as an ideal cycle, but one which doesn't always mirror reality. Sometimes it's simply more pragmatic to add an extra action and check (or possibly just another check to an existing action) in a current test than to create a new one.
Tests that check for more than one thing aren't usually recommended because they are more tightly coupled and brittle. If you change something in the code, it'll take longer to change the test, since there are more things to account for.
[Edit:]
Ok, say this is a sample test method:
[TestMethod]
public void TestSomething() {
    // Test condition A
    // Test condition B
    // Test condition C
    // Test condition D
}
If your test for condition A fails, then B, C, and D will appear to fail as well and won't provide you with any useful information. What if your code change would have caused C to fail as well? If you had split them out into 4 separate tests, you would know.
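Split out, the same schematic example becomes four focused methods (condition names are placeholders); each failure now points at exactly one condition:

[TestMethod]
public void TestConditionA() { /* arrange, act, assert only condition A */ }

[TestMethod]
public void TestConditionB() { /* assert only condition B */ }

[TestMethod]
public void TestConditionC() { /* assert only condition C */ }

[TestMethod]
public void TestConditionD() { /* assert only condition D */ }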
Haaa... unit tests.
Push any "directives" too far and it rapidly becomes unusable.
"A single unit test tests a single thing" is just as good a practice as "a single method does a single task". But IMHO that does not mean a single test can only contain a single assert statement.
Is
@Test
public void checkNullInputFirstArgument(){...}
@Test
public void checkNullInputSecondArgument(){...}
@Test
public void checkOverInputFirstArgument(){...}
...
better than
@Test
public void testLimitConditions(){...}
is a question of taste in my opinion rather than good practice. I personally much prefer the latter.
But
@Test
public void doesWork(){...}
is actually what the "directive" wants you to avoid at all costs, and what drains my sanity the fastest.
As a final conclusion, group together things that are semantically related and easily testable together, so that a failed test message, by itself, is meaningful enough for you to go directly to the code.
Rule of thumb for a failed test report: if you have to read the test's code first, then your tests are not structured well enough and need to be split into smaller tests.
My 2 cents.
Think of building a car. If you were to apply your theory of just testing big things, then why not make a single test that drives the car through a desert? It breaks down. OK, so tell me what caused the problem. You can't. That's a scenario test.
A functional test may be to turn on the engine. It fails. But that could be for any number of reasons. You still couldn't tell me exactly what caused the problem. We're getting closer though.
A unit test is more specific, and will firstly identify where the code is broken, but it will also (if doing proper TDD) help architect your code into clear, modular chunks.
Someone mentioned using the stack trace. Forget it. That's a second resort. Going through the stack trace or using the debugger is a pain and can be time consuming, especially on larger systems and for complex bugs.
Good characteristics of a unit test:
Fast (milliseconds)
Independent. It's not affected by or dependent on other tests
Clear. It shouldn't be bloated, or contain a huge amount of setup.
Using test-driven development, you would write your tests first, then write the code to pass the test. If your tests are focused, this makes writing the code to pass the test easier.
For example, I might have a method that takes a parameter. One of the things I might think of first is: what should happen if the parameter is null? It should throw an ArgumentNullException (I think). So I write a test that checks whether that exception is thrown when I pass a null argument. Run the test. Okay, it throws NotImplementedException. I go and fix that by changing the code to throw an ArgumentNullException. Run my test; it passes. Then I think, what happens if it's too small or too big? Ah, that's two tests. I write the too-small case first.
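That first test might look something like this (a sketch, assuming an NUnit-style test and a hypothetical OrderProcessor class under test):

[Test]
public void Process_NullOrder_ThrowsArgumentNullException()
{
    var processor = new OrderProcessor(); // hypothetical class under test

    // The "red" step of red-green-refactor: this fails until the guard clause exists.
    Assert.Throws<ArgumentNullException>(() => processor.Process(null));
}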
The point is I don't think of the behavior of the method all at once. I build it incrementally (and logically) by thinking about what it should do, then implementing code and refactoring as I go to make it look pretty (elegant). This is why tests should be small and focused: when you are thinking about behavior, you should develop in small, understandable increments.
Having tests that verify only one thing makes troubleshooting easier. It's not to say you shouldn't also have tests that do test multiple things, or multiple tests that share the same setup/teardown.
Here's an illustrative example. Let's say that you have a stack class with queries:
getSize
isEmpty
getTop
and methods to mutate the stack
push(anObject)
pop()
Now, consider the following test case for it (I'm using Python-like pseudo-code for this example.)
class TestCase:
    def setup(self):
        self.stack = Stack()

    def test(self):
        self.stack.push(1)
        self.stack.push(2)
        self.stack.pop()
        assert self.stack.top() == 1, "top() isn't showing correct object"
        assert self.stack.getSize() == 1, "getSize() call failed"
From this test case, you can determine if something is wrong, but not whether it is isolated to the push() or pop() implementations, or the queries that return values: top() and getSize().
If we add individual test cases for each method and its behavior, things become much easier to diagnose. Also, by doing fresh setup for each test case, we can guarantee that the problem is completely within the methods that the failing test method called.
def test_size(self):
    assert self.stack.getSize() == 0
    assert self.stack.isEmpty()

def test_push(self):
    self.stack.push(1)
    assert self.stack.top() == 1, "top returns wrong object after push"
    assert self.stack.getSize() == 1, "getSize wrong after push"

def test_pop(self):
    self.stack.push(1)
    self.stack.pop()
    assert self.stack.getSize() == 0, "getSize wrong after pop"
As far as test-driven development is concerned, I personally write larger "functional tests" that end up testing multiple methods at first, and then create unit tests as I start to implement individual pieces.
Another way to look at it is that unit tests verify the contract of each individual method, while larger tests verify the contract that the objects and the system as a whole must follow.
I'm still using three method calls in test_push; however, both top() and getSize() are queries that are tested by separate test methods.
You could get similar functionality by adding more asserts to the single test, but then failures in earlier assertions would hide the results of the later ones.
If you are testing more than one thing then it is called an Integration test...not a unit test. You would still run these integration tests in the same testing framework as your unit tests.
Integration tests are generally slower, unit tests are fast because all dependencies are mocked/faked, so no database/web service/slow service calls.
We run our unit tests on commit to source control, and our integration tests only get run in the nightly build.
If you test more than one thing and the first thing you test fails, you will not know if the subsequent things you are testing pass or fail. It is easier to fix when you know everything that will fail.
Smaller unit tests make it clearer where the issue is when they fail.
The glib, but hopefully still useful, answer is that unit = one. If you test more than one thing, then you aren't unit testing.
Regarding your example: If you are testing add and remove in the same unit test, how do you verify that the item was ever added to your list? That is why you need to add and verify that it was added in one test.
Or to use the lamp example: If you want to test your lamp and all you do is turn the switch on and then off, how do you know the lamp ever turned on? You must take the step in between to look at the lamp and verify that it is on. Then you can turn it off and verify that it turned off.
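In code, that means the add test verifies the addition on its own, and the removal test treats the add as part of its setup rather than as the thing being proven. The list here is just the generic .NET List<T>, purely for illustration:

[Test]
public void Add_PutsItemInList()
{
    var list = new List<int>();

    list.Add(42);

    Assert.AreEqual(1, list.Count);   // proves the item actually went in
    Assert.AreEqual(42, list[0]);
}

[Test]
public void Remove_TakesItemOutOfList()
{
    var list = new List<int> { 42 };  // arrange: adding is setup here, not the assertion

    list.Remove(42);

    Assert.AreEqual(0, list.Count);
}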
I support the idea that unit tests should only test one thing. I also stray from it quite a bit. Today I had a test where expensive setup seemed to be forcing me to make more than one assertion per test.
namespace Tests.Integration
{
    [TestFixture]
    public class FeeMessageTest
    {
        [Test]
        public void ShouldHaveCorrectValues()
        {
            var fees = CallSlowRunningFeeService();

            Assert.AreEqual(6.50m, fees.ConvenienceFee);
            Assert.AreEqual(2.95m, fees.CreditCardFee);
            Assert.AreEqual(59.95m, fees.ChangeFee);
        }
    }
}
At the same time, I really wanted to see all my assertions that failed, not just the first one. I was expecting them all to fail, and I needed to know what amounts I was really getting back. But, a standard [SetUp] with each test divided would cause 3 calls to the slow service. Suddenly I remembered an article suggesting that using "unconventional" test constructs is where half the benefit of unit testing is hidden. (I think it was a Jeremy Miller post, but can't find it now.) Suddenly [TestFixtureSetUp] popped to mind, and I realized I could make a single service call but still have separate, expressive test methods.
namespace Tests.Integration
{
    [TestFixture]
    public class FeeMessageTest
    {
        Fees fees;

        [TestFixtureSetUp]
        public void FetchFeesMessageFromService()
        {
            fees = CallSlowRunningFeeService();
        }

        [Test]
        public void ShouldHaveCorrectConvenienceFee()
        {
            Assert.AreEqual(6.50m, fees.ConvenienceFee);
        }

        [Test]
        public void ShouldHaveCorrectCreditCardFee()
        {
            Assert.AreEqual(2.95m, fees.CreditCardFee);
        }

        [Test]
        public void ShouldHaveCorrectChangeFee()
        {
            Assert.AreEqual(59.95m, fees.ChangeFee);
        }
    }
}
There is more code in this test, but it provides much more value by showing me all the values that don't match expectations at once.
A colleague also pointed out that this is a bit like Scott Bellware's specunit.net: http://code.google.com/p/specunit-net/
Another practical disadvantage of very granular unit testing is that it breaks the DRY principle. I have worked on projects where the rule was that each public method of a class had to have a unit test (a [TestMethod]). Obviously this added some overhead every time you created a public method but the real problem was that it added some "friction" to refactoring.
It's similar to method level documentation, it's nice to have but it's another thing that has to be maintained and it makes changing a method signature or name a little more cumbersome and slows down "floss refactoring" (as described in "Refactoring Tools: Fitness for Purpose" by Emerson Murphy-Hill and Andrew P. Black. PDF, 1.3 MB).
Like most things in design, there is a trade-off that the phrase "a test should test only one thing" doesn't capture.
When a test fails, there are three options:
The implementation is broken and should be fixed.
The test is broken and should be fixed.
The test is no longer needed and should be removed.
Fine-grained tests with descriptive names help the reader to know why the test was written, which in turn makes it easier to know which of the above options to choose. The name of the test should describe the behaviour which is being specified by the test - and only one behaviour per test - so that just by reading the names of the tests the reader will know what the system does. See this article for more information.
On the other hand, if one test is doing lots of different things and it has a non-descriptive name (such as tests named after methods in the implementation), then it will be very hard to find out the motivation behind the test, and it will be hard to know when and how to change the test.
Here is what it can look like (with GoSpec), when each test tests only one thing:
func StackSpec(c gospec.Context) {
    stack := NewStack()

    c.Specify("An empty stack", func() {
        c.Specify("is empty", func() {
            c.Then(stack).Should.Be(stack.Empty())
        })
        c.Specify("After a push, the stack is no longer empty", func() {
            stack.Push("foo")
            c.Then(stack).ShouldNot.Be(stack.Empty())
        })
    })

    c.Specify("When objects have been pushed onto a stack", func() {
        stack.Push("one")
        stack.Push("two")

        c.Specify("the object pushed last is popped first", func() {
            x := stack.Pop()
            c.Then(x).Should.Equal("two")
        })
        c.Specify("the object pushed first is popped last", func() {
            stack.Pop()
            x := stack.Pop()
            c.Then(x).Should.Equal("one")
        })
        c.Specify("After popping all objects, the stack is empty", func() {
            stack.Pop()
            stack.Pop()
            c.Then(stack).Should.Be(stack.Empty())
        })
    })
}
The real question is: why write one or more tests for every method, when a few tests that cover many methods would be simpler?
Well, so that when a test fails you know which method is failing.
When you have to repair a non-functioning car, it is easier when you know which part of the engine is failing.
An example: a list class. Why should I make separate tests for addition and removal? One test that first adds and then removes sounds simpler.
Let's suppose that the addition method is broken and does not add, and that the removal method is broken and does not remove. Your test would check that the list, after addition and removal, has the same size as it did initially. Your test would pass, even though both of your methods are broken.
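A concrete sketch of that failure mode, using a deliberately broken list (the class and test are invented for illustration):

// Deliberately broken implementation: neither method does anything.
public class BrokenList
{
    private readonly List<int> items = new List<int>();
    public int Count { get { return items.Count; } }
    public void Add(int item) { /* bug: forgot to add */ }
    public void Remove(int item) { /* bug: forgot to remove */ }
}

[Test]
public void AddThenRemove_LeavesSizeUnchanged() // the combined test
{
    var list = new BrokenList();
    var sizeBefore = list.Count;

    list.Add(1);
    list.Remove(1);

    // Passes even though both methods are broken, because the two bugs cancel out.
    Assert.AreEqual(sizeBefore, list.Count);
}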
Disclaimer: This is an answer highly influenced by the book "xUnit Test Patterns".
Testing only one thing in each test is one of the most basic principles, and it provides the following benefits:
Defect Localization: If a test fails, you immediately know why it failed (ideally without further troubleshooting, if you've done a good job with the assertions used).
Test as a specification: the tests are not only there as a safety net, but can easily be used as specification/documentation. For instance, a developer should be able to read the unit tests of a single component and understand the API/contract of it, without needing to read the implementation (leveraging the benefit of encapsulation).
Feasibility of TDD: TDD is based on having small-sized chunks of functionality and completing progressive iterations of (write failing test, write code, verify test succeeds). This process gets highly disrupted if a test has to verify multiple things.
Lack of side-effects: Somewhat related to the first one, but when a test verifies multiple things, it's more likely that it will be tied to other tests as well. These tests might then need a shared test fixture, which means that one will be affected by the other. Eventually you might have a test failing when, in reality, another test is the one that caused the failure, e.g. by changing the fixture data.
I can only see a single reason why you might benefit from having a test that verifies multiple things, but this should actually be seen as a code smell:
Performance optimisation: There are some cases where your tests are not running only in memory, but also depend on persistent storage (e.g. databases). In some of these cases, having a test verify multiple things might help by decreasing the number of disk accesses, and thus the execution time. However, unit tests should ideally be executable in memory only, so if you stumble upon such a case, you should reconsider whether you are going down the wrong path. All persistent dependencies should be replaced with mock objects in unit tests. End-to-end functionality should be covered by a different suite of integration tests. That way, you do not need to care about execution time anymore, since integration tests are usually executed by build pipelines and not by developers, so a slightly higher execution time has almost no impact on the efficiency of the software development lifecycle.

Best way to Unit test non-existing Database

We have a build server which does not have a database running on its machine.
Now we want to write unit tests which also cover SQL-related tasks, for example writing mock data and reading it back to verify the output (it gets manipulated by C# code in the process).
So the common approach would be to provide a connection-string to the remote server which actually runs a database.
This works, but it is undesirable because the database server could be inaccessible when the build is triggered, and then the automated tests would fail.
What is the best way to create a "pseudo-database" while running Unit tests?
Is this even possible without an actual database? Does it even make sense?
Use a mocking framework like Moq. That is the exact purpose of mocking frameworks: only the given component is unit tested, and any integration point like a database can be mocked to return valid pre-defined results at run time, which lets you verify the behaviour of the given code/unit.
The commonly accepted answer to "how do I unit test something with external dependencies" is to use a mocking layer.
However, that very quickly becomes unwieldy - you end up writing and maintaining a lot of code just to mimic your database, and it's not clear this code will pay for itself. For instance, if your SQL statement has a typo, your unit tests won't catch it.
Instead, it is much better to factor the code which manipulates the data out into a layer which can be unit-tested without database access.
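For example, instead of code that mixes SQL access with the manipulation logic, you can hide the database behind a small interface and unit test the logic with a mocked implementation. A sketch using Moq; the IOrderRepository, OrderService, and Order names are hypothetical:

public class Order { public decimal Amount { get; set; } }

public interface IOrderRepository
{
    IEnumerable<Order> GetOrdersFor(int customerId);
}

public class OrderService
{
    private readonly IOrderRepository repository;

    public OrderService(IOrderRepository repository) { this.repository = repository; }

    // The logic worth unit testing, with no database in sight.
    public decimal TotalSpentBy(int customerId)
    {
        return repository.GetOrdersFor(customerId).Sum(o => o.Amount);
    }
}

[Test]
public void TotalSpentBy_SumsAllOrders()
{
    var repo = new Mock<IOrderRepository>();
    repo.Setup(r => r.GetOrdersFor(7))
        .Returns(new[] { new Order { Amount = 10m }, new Order { Amount = 5m } });

    var service = new OrderService(repo.Object);

    Assert.AreEqual(15m, service.TotalSpentBy(7));
}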

using a test runner for web performance tests

We are writing an app in MVC/C# on VS2013.
We use Selenium WebDriver for UI tests and have written a framework around it to make the process more declarative and less procedural. Here is an example:
[Test, Category("UITest")]
public void CanCreateMeasureSucceedWithOnlyRequiredFields()
{
ManageMeasureDto dto = new ManageMeasureDto()
{
Name = GetRandomString(10),
MeasureDataTypeId = MeasureDataTypeId.FreeText,
DefaultTarget = GetRandomString(10),
Description = GetRandomString(100)
};
new ManageScreenUITest<MeasureConfigurationController, ManageMeasureDto>(c => c.ManageMeasure(dto), WebDrivers).Execute(ManageScreenExpectedResult.SavedSuccessfullyMessage);
}
The Execute method takes the values in the DTO and calls selenium.sendKeys and a bunch of other methods to actually perform the test, submit the form, assert the response contains what we want.
We are happy with the results of this framework, but it occurs to me that something similar could also be used to create load testing scenarios.
I.e., I could have another implementation of the UI test framework that uses an HttpWebRequest to issue the requests to the web server.
From there, I could essentially "gang up" my tests to create a scenario, eg:
public void MeasureTestScenario()
{
    CanListMeasures();
    CanSearchMeasures();
    var measureId = CanCreateMeasure();
    CanEditMeasure(measureId);
}
The missing piece of the puzzle is: how do I run these "scenarios" under load? I.e., I'm after a test runner that can execute 1..N of these tests in parallel and ideally do things like ramp-up time, random waits between each scenario, etc. Ideally the count of concurrent tests would be configurable at run time too, e.g. spawn 5 more test threads. Reporting isn't super important because I can log execution times as part of the web request.
Maybe there is another approach too, eg, using C# to control a more traditional load testing tool?
The real advantage to this method is handling more complicated screens, i.e. a screen that contains collections of objects. I can get a random parent record from the DB, use my existing AutoMapper code to create the nested DTO, have code walk that DTO changing random values, then use my test framework to submit that DTO's values as a web request. Much easier than hand-coding JMeter scripts.
cheers,
dave
While using NUnit as a runner for different tasks/tests is OK, I don't think it's the right tool for performance testing.
It uses reflection, it has overhead, it eventually uses the predefined settings for the web client, etc. It's much better to use a dedicated performance testing tool. You can use NUnit to configure/start it :)
I disagree that NUnit isn't usable; with appropriate logging, any functional test runner can be used for performance testing. The overhead does not matter when the test code itself is providing data such as when requests are sent and responses are received, or when a page is requested and when it finishes loading. Parsing log files and generating some basic metrics is a pretty trivial task IMO. If it becomes a huge project, that changes things, but more likely I could get by with a command-line app that takes an hour or two to write.
I think the real question is how much you can leverage by doing it with your normal test runner. If you already have a lot of logging in your test code and can easily invoke it, then using NUnit becomes pretty appealing. If you don't have sufficient logging or a framework that would support some decent load scenarios, then using dedicated load testing tools/frameworks would probably be easier.
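If you do drive it from your own code rather than a dedicated tool, the "run N scenarios in parallel with ramp-up and random waits" part can be a fairly small harness. A rough sketch, not a real load-testing framework, just Tasks wrapped around the existing scenario method; the user count, ramp-up and think-time values are all assumptions:

public static async Task RunScenarioUnderLoad(Action scenario, int concurrentUsers, TimeSpan rampUp)
{
    var tasks = new List<Task>();

    for (int i = 0; i < concurrentUsers; i++)
    {
        int user = i; // avoid capturing the loop variable in the lambda
        var startDelay = TimeSpan.FromMilliseconds(rampUp.TotalMilliseconds * user / concurrentUsers);

        tasks.Add(Task.Run(async () =>
        {
            await Task.Delay(startDelay);                // stagger start times (ramp-up)
            var random = new Random(user);
            var sw = Stopwatch.StartNew();
            scenario();                                  // e.g. MeasureTestScenario()
            sw.Stop();
            Console.WriteLine("user " + user + ": " + sw.ElapsedMilliseconds + " ms");
            await Task.Delay(random.Next(500, 2000));    // random think time after the scenario
        }));
    }

    await Task.WhenAll(tasks);
}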

Applying CQRS - Is unit testing the thin read layer necessary?

Given that some of the advice for implementing CQRS advocates fairly close-to-the-metal query implementation, such as ADO.NET queries directly against the database (or perhaps a LINQ-based ORM), is it a mistake to try and unit test them?
I wonder if it's really even necessary?
My thoughts on the matter:
The additional architectural complexity of providing a mockable "Thin Read Layer" seems to run counter to the advice to keep architectural ceremony to a minimum.
The number of unit tests to effectively cover every angle of query that a user might compose is horrendous.
Specifically I'm trying CQRS out in an ASP.NET MVC application and am wondering whether to bother unit testing my controller action methods, or just test the Domain Model instead.
Many thanks in advance.
In my experience, if you are creating a nice de-normalized read model, 90%-99% of the reads you will be doing DO NOT warrant having unit tests around them.
I have found that the most effective and efficient way to TDD a CQRS application is to write integration tests that push commands into your domain, then use the Queries to get the data back out of the DB for your assertions.
I would tend to agree with you that unit testing this kind of code is not so beneficial. But there is still some scope for some useful tests.
You are presumably performing some validation of the user's read query parameters; if so, test that invalid request parameters throw a suitable exception and that valid parameters are allowed.
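A couple of tests along those lines are cheap to write (a sketch; the OrdersQuery and OrdersQueryValidator names are made up for illustration):

[Test]
public void Validate_PageSizeOfZero_Throws()
{
    var query = new OrdersQuery { PageSize = 0 }; // hypothetical read-side query object
    Assert.Throws<ArgumentOutOfRangeException>(() => OrdersQueryValidator.Validate(query));
}

[Test]
public void Validate_ReasonableParameters_DoesNotThrow()
{
    var query = new OrdersQuery { PageSize = 25 };
    Assert.DoesNotThrow(() => OrdersQueryValidator.Validate(query));
}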
If you're using an ORM, I find the cost/benefit ratio too great for testing the mapping code. Assume your ORM is already tested; there could be errors in the mapping, but you'll soon find and fix them.
I also find it useful to write some Integration tests (using the same Testing framework), just to be sure I can make a connection to the database, and the ORM configuration doesn't throw any mapping exceptions. You certainly don't want to be writing unit tests that query the actual db.
As you probably already know, unit testing is less about code coverage and preventing bugs than it is about good design. While I often skip testing the read-model event handlers when I'm in a hurry, there can be no doubt that it probably should be done, for all the reasons code should be TDD'd.
I also have not been unit testing my HTTP actions (I don't have controllers per se, since I'm using Nancy, not ASP.NET MVC).
These are integration points and don't tend to contain much logic since most of it is encapsulated in the command handlers and domain model.
The reason I think it is fairly easy not to test these is that they are very simple and very repetitive; there is almost no deep thinking in the denormalization of events to the read model. The same is true for my HTTP handlers, which pretty much just process the request and issue a command to the domain, with some basic logic for returning a response to the client.
When developing, I often make mistakes in this code and I probably would make far fewer of these mistakes if I was using TDD, but it would also take much longer and these mistakes tend to be very easy to spot and fix.
My gut tells me I should still apply TDD here though, it is still all very loosely coupled and it shouldn't be hard to write the tests. If you are finding it hard to write the tests that could indicate a code smell in your controllers.
The way that I have seen something like this unit tested is to have the test create a set of things in the database, run the tests, and then clean out the created things.
In one past job I saw this set up very nicely using a data structure to describe the objects and their relationships. This was run through the ORM to create those objects with those relationships, data from that was used for the queries, and then the ORM was used to delete the objects. To make unit tests easier to set up, every class specified default values to use in tests that didn't override them. The data structure in each unit test then only needed to specify the non-default values, which made test setup much more compact.
These unit tests were very useful, and caught a number of bugs in the database interaction.
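The "defaults plus overrides" idea can be as simple as a builder per entity (a sketch; the Customer type and its default values are invented for illustration):

public class Customer { public string Name { get; set; } public string Country { get; set; } }

// Hypothetical example of the "defaults unless overridden" test-data pattern.
public class CustomerBuilder
{
    private string name = "Default Name";   // sensible defaults for tests
    private string country = "US";

    public CustomerBuilder WithName(string value) { name = value; return this; }
    public CustomerBuilder WithCountry(string value) { country = value; return this; }

    public Customer Build() { return new Customer { Name = name, Country = country }; }
}

// A test only states what it cares about; everything else stays at the defaults.
var germanCustomer = new CustomerBuilder().WithCountry("DE").Build();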
In one of my ASP.NET MVC applications I've also applied CQRS. But instead of SQL and ADO.NET queries or LINQ, we use a document database (MongoDB), and each command or event handler updates the database directly.
I've created one test per command/event handler, and with 100% unit test coverage I know that my domain works about 95% correctly.
But for actions/controllers/UI I've applied UI tests (using Selenium).
So it seems that both the unit tests for the domain (command/event handlers and direct updates to the database) and the UI tests serve as your 'integration tests'.
I think that you should test the domain part at least, because all your logic is encapsulated in the command/event handlers.
FYI: I started developing the domain part with Entity Framework first, then with direct updates to the database through stored procedures, but I was really happy with a document database in the end. I tried a few different document databases, but MongoDB looks like the best fit for me.

To achieve basic code coverage what tests should I run against an ASP.NET MVC Controller?

My controllers are utilizing StructureMap and AutoMapper. Presently, there is nothing exceptional about my routes. For the basic CRUD controller actions, what tests should I be writing to ensure good code coverage?
It is difficult to say what you have to test in your controllers without seeing them. You say they are basic CRUD, but then you talk about AutoMapper, so it's probably not as simple as that. Here's an example of a unit test I wrote and the controller being tested.
I have been doing the same thing recently. From my research it seems that best practice is to create at least one test for each action (that way you cover CRUD, by virtue of the fact that your actions are generally based on CRUD) and limit the test to the internal code. What that means is: don't cross the method boundary; rather, mock out everything your action needs and assert the desired results. Of course this means that you need to do the same thing for your services, repositories, etc.
But it means that you have a unit test that won't break when you change some code between the action and the DB unless there is actually a change to make. It is time consuming, but so far I have found the effort well worth it: a failed unit test in unrelated code means either that I have a change to make which I forgot about, or that the code is too tightly coupled.
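Concretely, a test for a CRUD action usually mocks the service or repository behind it and asserts on the returned ActionResult. A sketch using Moq and NUnit against classic ASP.NET MVC; ProductsController, IProductService, and Product are hypothetical names:

[Test]
public void Details_ExistingId_ReturnsViewWithProduct()
{
    var service = new Mock<IProductService>();
    service.Setup(s => s.GetById(5)).Returns(new Product { Id = 5, Name = "Widget" });

    var controller = new ProductsController(service.Object);

    var result = controller.Details(5) as ViewResult;

    Assert.IsNotNull(result);
    var model = (Product)result.ViewData.Model;
    Assert.AreEqual("Widget", model.Name);
    service.Verify(s => s.GetById(5), Times.Once()); // the action asked the service exactly once
}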
