I have two xUnit tests that I want ignored when triggering Run All Tests from Visual Studio, and that should only run if the user runs them specifically.
I've tried using [Fact(Skip = "Long test, only run if needed, and run independently")] (or whatever message), but then it shows warnings and the overall run's result is yellow like so, even though the rest passed:
I've found solutions on here that potentially allow this to be done via ReSharper, however we do not have ReSharper available to us (I know... it sucks). I've also looked into SkippableFacts, but I believe these will lead me to the same result as in the above picture. Not to mention that when you try to run such a test on its own it always skips as well, and you need to change it back to a regular [Fact].
Does anyone know of any possible way to ignore a test unless intentionally, specifically, and individually triggered? Any other paths to try would be really helpful, I'm stumped. Thanks!
In the xUnit library you can use Fact or Theory (as the case may be) with the Skip parameter.
[Fact(Skip = "Reason")] or
[Theory(Skip = "Reason")]
This will skip the test and the overall result should be green. It works for me in my ASP.NET Core application.
Create playlists in VS (or a 'Session' in Rider): one with the tests you always run, and a second one for the tests you only intend to run sporadically; see the sketch below.
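A .playlist file is just an XML list of fully qualified test names, so the "sporadic" playlist could look something like this (the test names are placeholders, not from the question):

<Playlist Version="1.0">
  <Add Test="MyProject.Tests.LongRunningTests.FirstLongTest" />
  <Add Test="MyProject.Tests.LongRunningTests.SecondLongTest" />
</Playlist>

Day to day you then run the "everything else" playlist instead of Run All Tests, and open this one only when you actually want the long tests.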
I'm writing integration tests for controller classes in my .NET Core Web API, and honestly, I'm not sure if Assert.Equal(HttpStatusCode.OK, response.StatusCode); is enough. I'm using my production Startup class with only the database swapped out (using an InMemory database).
This is one of my tests, named ShouldAddUpdateAndDeleteUser. It's basically:
Send a POST request with specific input.
Assert that the POST worked. Because POST responds with the created object, I can assert on every property and that the Id is greater than 0.
Change the output a little bit and send an UPDATE request.
Send a GET request and assert that the update worked.
Send a DELETE request.
Send a GET request and assert that the result is null.
Basically, I test ADD, UPDATE, DELETE, GET (when the item exists), and GET (when the item doesn't exist).
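To make the flow concrete, here is a rough sketch of such a test using xUnit and WebApplicationFactory; the /api/users route, the User type, and the System.Net.Http.Json extension methods are my assumptions, not details from the question:

// Requires: System.Net, System.Net.Http.Json, Microsoft.AspNetCore.Mvc.Testing, Xunit
public class UsersControllerTests : IClassFixture<WebApplicationFactory<Startup>>
{
    private readonly WebApplicationFactory<Startup> _factory;

    public UsersControllerTests(WebApplicationFactory<Startup> factory) => _factory = factory;

    [Fact]
    public async Task ShouldAddUpdateAndDeleteUser()
    {
        var client = _factory.CreateClient();

        // POST and assert on the created object (including Id > 0)
        var createResponse = await client.PostAsJsonAsync("/api/users", new User { Name = "Alice" });
        Assert.Equal(HttpStatusCode.Created, createResponse.StatusCode);
        var created = await createResponse.Content.ReadFromJsonAsync<User>();
        Assert.True(created.Id > 0);

        // UPDATE (returns NoContent), then GET to confirm the change
        created.Name = "Alice Updated";
        Assert.Equal(HttpStatusCode.NoContent, (await client.PutAsJsonAsync($"/api/users/{created.Id}", created)).StatusCode);
        var afterUpdate = await client.GetFromJsonAsync<User>($"/api/users/{created.Id}");
        Assert.Equal("Alice Updated", afterUpdate.Name);

        // DELETE (returns NoContent), then GET to confirm it is gone
        Assert.Equal(HttpStatusCode.NoContent, (await client.DeleteAsync($"/api/users/{created.Id}")).StatusCode);
        Assert.Equal(HttpStatusCode.NotFound, (await client.GetAsync($"/api/users/{created.Id}")).StatusCode);
    }
}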
I have a few questions:
Is it a good practice to have such tests? It does do a lot, but it's not a unit test after all. If it fails, I can be pretty specific and specify which part didn't work. Worst case scenario I can debug it pretty quickly.
Is it an integration test, a functional test, or neither?
If this is wrong, how can I test DELETE or UPDATE? I'm kind of forced to call a GET request after them (they return NoContent).
(Side note: it's obviously not the only test I have for that controller. I also have tests for GET all as well as BadRequest requests.)
Is it a good practice to have such tests? It does do a lot, but it's not a unit test after all. If it fails, I can be pretty specific and specify which part didn't work. Worst case scenario I can debug it pretty quickly.
Testing your endpoints full stack does have value, but make sure to keep the testing pyramid in mind. It's valuable to have a small number of full-stack integration tests to validate the most important pathways/use cases in your application. Having too many, however, can add a lot of extra time to your test runs, which could impact release times if those tests are tied to releasing (which they should be).
Is it an integration test, a functional test, or neither?
Without seeing more of the code I would guess it's probably a bit of both. Functionally, you want to test the output of the endpoints and it seems as though you're testing some level of component integration if you're mocking out data in memory.
If this is wrong, how can I test DELETE or UPDATE? I'm kind of forced to call a GET request after them (they return NoContent).
Like I said earlier, a few full-stack tests are fine in my opinion as long as they provide value for major workflows in your application. I would suggest one or two integration tests that make database calls to confirm your application can stand up full stack, but I wouldn't bundle that requirement into your functional testing. In my experience you get far more value from modular component testing plus a couple of end-to-end functional tests.
I think the test flow you mentioned makes sense. I've done similar patterns throughout my services at work. Sometimes you can't realistically test one component without testing another like your example about getting after a delete.
How can I order my scenarios to run in a given order?
I have a series of scenarios that depend on the previous scenario having run.
I'm running my SpecFlow tests using nunit3-console.
I haven't found anything online that seems to work.
And yes my tests do need to run in a particular order otherwise it's pointless.
Scenario: I perform first scenario
Scenario: I perform second scenario
NUnit by default runs tests in alphabetical order. As of NUnit 3 you can try running them in order using the OrderAttribute; reference here.
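For plain NUnit fixtures the ordering attribute looks like the sketch below; with SpecFlow the test methods are generated for you, so this is only to illustrate the attribute, not something you can paste into a feature file:

using NUnit.Framework;

[TestFixture]
public class OrderedScenarios
{
    [Test, Order(1)]
    public void IPerformFirstScenario() { /* first scenario steps */ }

    [Test, Order(2)]
    public void IPerformSecondScenario() { /* depends on the first scenario */ }
}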
It is not considered good practice to require scenarios to run in a particular order though, as each and every one of them should be able to run independently of the others.
Other posts with similar information, can be found here and here.
Right now my Coded UI Tests use their app.config to determine the domain they execute in, which has a 1-1 relationship with environment. To simplify it:
www.test.com
www.UAT.com
www.prod.com
and in App.config I have something like:
<configuration>
  <appSettings>
    <add key="EnvironmentURLMod" value="test"/>
  </appSettings>
</configuration>
and to run the test in a different environment, I manually change the value between runs. For instance, I open the browser like this:
browserWindow.NavigateToUrl(new Uri("http://www."
+ ConfigurationManager.AppSettings.Get("EnvironmentURLMod")
+ ".com"));
Clearly this is inelegant. I suppose I had a vision where we'd drop in a new app.config for each run, but as a spoiler this test will be run in ~10 environments, not 3, and which environments it may run may change.
I know I could decouple these environment URL modifications to yet another XML file, and make the tests access them sequentially in a data-driven scenario. But even this seems like it's not quite what I need, since if one environment fails then the whole test collapses. I've seen Environment Variables as a suggestion, but this would require creating a test agent for each environment, modifying their registries, and running the tests on each of them. If that's what it takes then sure, but it seems like an enormous amount of VM bandwidth to be used for what's a collection of strings.
In an ideal world, I would like to tie these URL mods to something like Test Settings, MTM environments, or builds. I want to execute the suite of tests for each domain and report separately.
In short, what's the best way to parameterize these tests? Is there a way that doesn't involve queuing new builds, or dropping config files? Is Data Driven Testing the answer? Have I structured my solution incorrectly? This seems like it should be such a common scenario, yet my googling doesn't quite get me there.
Any and all help appreciated.
The answer here is data driven testing, and unfortunately there's no total silver bullet even if there's a "Better than most" option.
Using any data source lets you iterate through a test in multiple environments (or any other variable you can think of) and essentially return 3 different test results - one for each permutation or data row. However you'll have to update your assertions to show which environment you're currently executing in, as the test results only show "Data Row 0" or something similar by default. If the test passes, you'll get no clue as to what's actually in the data row for the successful run, unless you embed this information in the action log! I'm lucky that my use case does this automatically since I'm just using a URL mod, but other people may need to do that on their own.
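For illustration, a data-driven Coded UI test fed by a CSV source might look like the sketch below; the Environments.csv file and its EnvironmentURLMod column are made-up names, and the TFS test case source described next uses a different connection string:

[DeploymentItem("Environments.csv")]
[DataSource("Microsoft.VisualStudio.TestTools.DataSource.CSV",
            "|DataDirectory|\\Environments.csv",
            "Environments#csv",
            DataAccessMethod.Sequential)]
[TestMethod]
public void SmokeTestEachEnvironment()
{
    // Each row supplies one URL mod, so every environment gets its own result row.
    var urlMod = TestContext.DataRow["EnvironmentURLMod"].ToString();
    browserWindow.NavigateToUrl(new Uri("http://www." + urlMod + ".com"));
}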
To allow on-the-fly changing of which environments we're testing in, we chose to use a TestCase data source. This has a lot of flexibility - potentially more than using a database or XML, for instance - but it comes with its own downsides. Like all data-driven scenarios, you essentially have to hard-code the test case ID into the attribute above your test method (because it's treated as a property). I was hoping we could at least drop an app.config into the build drop location when we wanted to change which test case we used, but it looks like we're going to have to do a find-and-replace across the solution instead.
If anyone knows of a better way to decouple the test ID or any other part of the connection string from the code, I'll give you an answer here. For anyone else, you can find more information on MSDN.
We are using Moq to perform some unit tests, and there's some strange behaviour; perhaps it's some configuration issue that I'm missing.
Basically, I have 2 tests (which call a Windows Workflow, which calls custom activities that call a method using Invoke; I don't know if this helps, but I want to give as much info as I can). The tests run OK when executed alone, but if I execute them in the same run, the first one passes and the second fails (it doesn't matter if I change their order, the second one always fails).
The mock is recreated every time and loaded using Unity, e.g.:
MockProcessConfigurator = MockFactory.Create<IProcessConfigurator>();
MockProcessConfigurator.Setup(x => x.MyMethod(It.IsAny<Order>()));
[...]
InversionOfControl.Instance.Register<IProcessConfigurator>(MockProcessConfigurator.Object);
The invoked call (WF custom activity) is
var invoker = new WorkflowInvoker(new MyWorkflow());
invoker.Invoke(inputParameter);
The call (Invoked call) is
MyModuleService.ProcessConfigurator.MyMethod(inputOrder);
When debugging, I see that the ProcessConfigurator is always mocked.
The call that fails in the test is something as simple as this:
MockEnvironment.MockProcessConfigurator.Verify(x => x.MyMethod(It.IsAny<Order>()), Times.Exactly(1));
When debugging, the method is actually called every time, so I suspect that there's something wrong with the mock instance. I'm a bit lost here, because things seem to be implemented correctly, but for some reason when they're run one after the other, there's a problem.
This type of error commonly occurs when the two tests share something.
For example, you set up your mock with an expectation that a method will be called 1 time in your test setup, and then two tests each call that method 1 time - your expectation will fail because it has now been called 2 times.
This suggests that you should move the setup of expectations into each test, as sketched below.
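As a rough sketch using the names from the question (the InputOrder dictionary key is a guess at whatever the workflow's in-argument is actually called):

[TestMethod]
public void MyMethod_IsCalledExactlyOnce_WhenWorkflowRuns()
{
    // Create and register a fresh mock inside the test so no setup or call counts leak between tests.
    var processConfigurator = new Mock<IProcessConfigurator>();
    processConfigurator.Setup(x => x.MyMethod(It.IsAny<Order>()));
    InversionOfControl.Instance.Register<IProcessConfigurator>(processConfigurator.Object);

    var inputParameter = new Dictionary<string, object> { { "InputOrder", new Order() } };
    var invoker = new WorkflowInvoker(new MyWorkflow());
    invoker.Invoke(inputParameter);

    processConfigurator.Verify(x => x.MyMethod(It.IsAny<Order>()), Times.Exactly(1));
}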
A generic troubleshooting approach for this type of problem is to try to isolate the dependency between the two tests.
Move any setup-code to inside the tests.
Move any tear-down code to inside the tests.
Move any field initializers to inside the tests. (Those are only run once per fixture!)
This should make both your tests pass when run together. Once you get green lights, you can start moving duplicated stuff back out to initializers/setup one piece at a time, running the tests after each change you make.
You should be able to learn what is causing this coupling between the tests. Good luck!
Thought I'd add an additional situation and solution I just came across:
If you are running tests from two separate projects at the same time and each project is using a different version of Moq, this same problem can happen.
For me, I had TestProjectA using Moq 4.2.14 and TestProjectB using Moq 4.2.15 by accident (courtesy of NuGet). When running a test from A and a test from B simultaneously, the first test succeeded and the second silently failed.
Adjusting both projects to use the same version solved the issue.
To expand on the answer Sohnee gave, I would check my setup/teardown methods to make sure you're tidying everything up properly. I've had similar issues when running tests in bulk and not having tidied up my mock objects.
I don't know if this is relevant, but MockFactory.Create seems odd. I normally create mocks as follows:
var mockProcessConfigurator = new Mock<IProcessConfigurator>();
When using a MockFactory (which I never have needed), you would normally create an instance of it. From the moq QuickStart:
var factory = new MockFactory(MockBehavior.Strict) { DefaultValue = DefaultValue.Mock };
Calling a static method on MockFactory seems to defeat the purpose. If you have a nonstandard naming convention where MockFactory is actually a variable of type MockFactory, that's probably not your issue (but it will be a constant source of confusion). If MockFactory is a property of your test class, ensure that it is recreated on SetUp.
If possible I would eliminate the factory, as it is a potential source of shared state.
EDIT: As an aside, WorkflowInvoker.Invoke takes an Activity as a parameter. Rather than creating an entire workflow to test a custom activity, you can just pass an instance of the custom activity. If that's what you want to test, it keeps your unit test more focused.
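For example (MyCustomActivity and the InputOrder argument name are placeholders for whatever the activity under test defines):

// Invoke the custom activity directly; no wrapping workflow is needed.
var outputs = new WorkflowInvoker(new MyCustomActivity())
    .Invoke(new Dictionary<string, object> { { "InputOrder", new Order() } });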
I thought this would be a common question, so I searched for a while but couldn't find it.
I am about to start a new project (C#, .NET 3.5) and I was thinking about where I should write the unit test code. I can create a unit test project and write all the code there, or I can write the unit test code alongside the "class under test" itself.
What do you recommend and why? Things to consider before choosing an approach (caveats?)?
EDIT: About writing unit-test code with the "code under test": removing the test code from the production assembly isn't difficult, I guess. That's what conditional compilation is for, right?
I'm just throwing this point out because answers are rejecting the second option solely on the grounds that production assemblies would be bloated.
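For context, the conditional-compilation approach being debated would look roughly like this (UNIT_TESTS is a made-up symbol you would define only in test builds):

#if UNIT_TESTS
// Test code compiled only when the UNIT_TESTS symbol is defined for the build.
public class MyClassTests
{
    [Fact]
    public void DoesSomething() { /* ... */ }
}
#endif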
Separate project, same solution. Use InternalsVisibleTo if you want to access internals from the test code (see the one-liner after the lists below).
Separating out test from production code:
makes it more obvious what's what
means you don't need dependencies on test frameworks in your production project
keeps your deployed code leaner
avoids including test data files in your deployment assembly
Keeping the code in the same solution:
keeps the test cycle quick
makes it easy to hop between production and test code
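The InternalsVisibleTo attribute mentioned above is a single assembly-level line in the production project (the test assembly name here is a placeholder):

// In AssemblyInfo.cs (or any file) of the production assembly:
[assembly: System.Runtime.CompilerServices.InternalsVisibleTo("MyProject.Tests")]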
I always create a separate project in which I write my TestFixtures.
I do not want to litter my domain model (or whatever) with Test classes.
I do not want to distribute my tests to customers or users of my application, so therefore I put them in a separate project (which also leads to a separate assembly).
If you have the rare case that you want to test internal methods, you can use InternalsVisibleTo.
If you have the very rare case that you want to test private methods, you can use this technique, as explained by Davy Brion.
I prefer the first approach - separating the unit tests into their own project.
Placing the unit tests within the test subject makes it dirty. Furthermore, you don't necessarily want to distribute your project with the unit tests, which would make your DLLs bigger and possibly expose things that you don't want to expose to the end user.
Most of the open source projects that I've seen had a separate project for unit tests.
You should place the unit tests in a separate project.
You should also write them in such a way that the SUT (System Under Test) is not modified just to make unit tests possible. I mean you should have no helper classes in your main project that exist "only" to support your tests.
Mixing test and production code is always a bad plan, since you don't want to deliver all that extra code to your clients. Keep the clear separation that a separate project offers.
I don't think the "keep the tests quick" argument is a really strong one. Make a clear cut... testing code does not belong in a production environment, IMHO.
Edit:
Comment on Edit above:
EDIT: About writing unit-test code with the "code under test": removing the test code from the production assembly isn't difficult, I guess. That's what conditional compilation is for, right?
Yes, it is "easy" to remove the code with a conditional compilation flag, but you wont have tested the final assembly you created, you only tested the assembly you created with the code inside it, then you recompile, creating a new,untested assembly and ship that one. Are you sure all your conditional flags are set 100% correct? I guess not, since you cant run the tests ;)