I'm implementing a project using .NET Core and microservices, based on eShopOnContainers. In my functional tests I previously put all the tests in one class called IntegrationEventsScenarios.cs, and now I need to separate the tests by their related services. But when I run the Basket, Catalog, and Ordering test scenarios together, the tests do not pass.
I have tried, for example, putting the Basket ScenarioBase and TestStartup classes in the Ordering folder to remove any shared resource, but it didn't work.
Related
I have a Selenium project written in C# (.NET 6) with NUnit. I have a folder called 'Tests' containing multiple subfolders, and each folder has a lot of classes. Each class has only one test method. The reason for this is to structure the project: each class represents one process in the software I'm testing. However, some processes need to run only after certain other processes have already run.
My question is: is there any way to run the classes in a specific order I want? I have tried using dotnet test --filter, but this did not work. I also tried NUnit's Order attribute, but that works only when a class has multiple test methods.
The Order attribute may be placed on a class or a method. From the NUnit docs:
The OrderAttribute may be placed on a test method or fixture to specify the order in which tests are run within the fixture or other suite in which they are contained.
The key phrase is "or other suite": in your case, the "other suite" containing the fixture class is the namespace in which it is defined.
There is no "global" ordering facility, but if all your tests are in the same namespace, using the OrderAttribute on the fixtures will cause them to run in the order you specify. If it doesn't interfere with any other use of the namespaces, you might consider putting them all in one namespace.
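For illustration, a minimal sketch of that arrangement, with hypothetical fixture names (and assuming parallel execution is not enabled, so the fixtures also finish in this order):

using NUnit.Framework;

namespace MyTests.Processes // all ordered fixtures share one namespace
{
    [TestFixture, Order(1)]
    public class CreateAccountProcess
    {
        [Test]
        public void Run() { /* Selenium steps for the first process */ }
    }

    [TestFixture, Order(2)] // starts only after CreateAccountProcess has started
    public class PlaceOrderProcess
    {
        [Test]
        public void Run() { /* relies on the account created above */ }
    }
}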
A couple of notes:
The OrderAttribute specifies the order in which the tests start. If you run tests in parallel, multiple tests may run at the same time.
It's not advisable to have the tests depend on one another in most cases.
There are lots of reasons not to control the order of tests, which are covered in the answers quoted by other folks. I'm just answering the specific "how-to" question you posed.
I am trying to create a SetUpFixture class with a OneTimeSetUp in which I instantiate multiple other entities from other assemblies (a logger, a DB provider, etc.), and I want this OneTimeSetUp to run every time I start a new test suite.
My tests are scattered across multiple assemblies as well, each with its own base SetUp and TearDown, but they have common attribute tags which I want to use to trigger my run.
Is there a way to specify my SetUpFixture for the whole test run, regardless of where the tests are coming from, so I can just run all the tests with a specific Attribute tag?
Or is there a way that I can use a "run id" and pass it from one assembly to another?
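For reference, NUnit treats each test assembly as its own run, so there is no cross-assembly SetUpFixture; but a SetUpFixture declared outside any namespace applies to its whole assembly, and each assembly can delegate to shared code. A minimal sketch, where GlobalTestSetup and SharedTestEnvironment are hypothetical names:

using NUnit.Framework;

// Shared, idempotent initialization that a common assembly could expose.
public static class SharedTestEnvironment
{
    private static bool _initialized;

    public static void EnsureInitialized()
    {
        if (_initialized) return;
        _initialized = true;
        // e.g. create the logger, the DB provider, etc.
    }

    public static void Shutdown()
    {
        // dispose shared resources
    }
}

// No namespace declaration: a SetUpFixture outside any namespace applies
// to the entire assembly, so OneTimeSetUp runs once before any of its tests.
[SetUpFixture]
public class GlobalTestSetup
{
    [OneTimeSetUp]
    public void BeforeAnyTests() { SharedTestEnvironment.EnsureInitialized(); }

    [OneTimeTearDown]
    public void AfterAllTests() { SharedTestEnvironment.Shutdown(); }
}

The SetUpFixture should still run when the run is filtered to a category, so selecting tests by attribute tag does not skip it.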
My solution has many projects, and most of them have associated unit test projects. I have created a UnitTestCommon project to act as a stand-alone assembly with a local database and setup methods that refresh its data by parsing XML records into the tables. It also has helpers for things like testing the DB connection, and it exposes public properties for the connection string and other values needed to interact with the database.
I want the UnitTestCommon project to instantiate itself when referenced by the other associated unit test projects.
I know that if you add the [AssemblyFixture] attribute to the "global" class and [FixtureSetup] to its setup method, it will run before all other tests. But because I want to keep my unit test projects separate, this will not self-instantiate when referenced from the associated unit tests.
I could always call an Initialize method in UnitTestCommon, but... there has to be a better way.
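One pattern that avoids an explicit Initialize call is to make the shared state lazy, so the first test in any referencing project that touches it triggers initialization exactly once. A minimal sketch, where TestDatabase, RefreshFromXml, and the connection string are hypothetical stand-ins for the pieces described above:

using System;

public class TestDatabase
{
    // Hypothetical wrapper around the local database.
    public string ConnectionString
    {
        get { return "Server=.;Database=UnitTestDb;Trusted_Connection=True"; } // assumed value
    }

    public void RefreshFromXml()
    {
        // parse XML records into the tables
    }
}

public static class UnitTestCommon
{
    // Lazy<T> is thread-safe by default, so initialization happens once
    // even if test classes run in parallel.
    private static readonly Lazy<TestDatabase> Db = new Lazy<TestDatabase>(() =>
    {
        var db = new TestDatabase();
        db.RefreshFromXml();
        return db;
    });

    // The first access from any referencing test project self-instantiates
    // the common state; later accesses reuse it.
    public static TestDatabase Database { get { return Db.Value; } }
    public static string ConnectionString { get { return Database.ConnectionString; } }
}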
I am using Visual Studio 2008 and I would like to be able to split up my unit tests into two groups:
Quick tests
Longer tests (e.g. interactions with a database)
I can only see options to run all of the tests, to run a single test, or to run all of the tests in a unit test class.
Is there any way I can split these up or specify which tests to run when I want to run a quick test?
Thanks
If you're using NUnit, you could use the CategoryAttribute.
The equivalent in MSTest is the TestCategory attribute.
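A minimal sketch of the NUnit version, with the MSTest equivalent shown in comments (the class and test bodies are placeholders):

using NUnit.Framework;

[TestFixture]
public class CustomerTests
{
    [Test, Category("Quick")]
    public void Email_Validation_Works()
    {
        Assert.IsTrue(true); // placeholder for fast, in-memory logic
    }

    [Test, Category("Long")]
    public void Record_Round_Trips_Through_Database()
    {
        Assert.IsTrue(true); // placeholder for a slower test that hits the DB
    }
}

// MSTest equivalent: a [TestClass] whose methods carry
// [TestMethod, TestCategory("Quick")] and so on.

Most runners can then execute a single group; with tooling later than VS2008 this is a one-liner, e.g. NUnit 3's console runner with --where "cat == Quick".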
I would distinguish your unit test groups as follows:
Unit Tests - Testing single methods / classes, with stubbed dependencies. These should be very quick to execute, as there are only internal dependencies.
Integration Tests - Testing two or more components together, such as your data access classes against an actual backing database. These are generally lengthy, since you may be dealing with an external dependency such as a DB or web service, though they can still be quick depending on which components you are integrating. The key is that the scope of the test differs from that of a unit test (a sketch contrasting the two follows below).
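A sketch of a unit test in that sense, with the production types inlined so it is self-contained (all names are hypothetical):

using NUnit.Framework;

public interface IDiscountProvider { decimal GetRate(); }

public class PriceCalculator
{
    private readonly IDiscountProvider _discounts;
    public PriceCalculator(IDiscountProvider discounts) { _discounts = discounts; }
    public decimal Total(decimal subtotal)
    {
        return subtotal * (1 - _discounts.GetRate());
    }
}

[TestFixture]
public class PriceCalculatorUnitTests
{
    private class StubDiscounts : IDiscountProvider
    {
        public decimal GetRate() { return 0.10m; } // stubbed dependency: no I/O
    }

    [Test]
    public void Total_Applies_Discount()
    {
        var calc = new PriceCalculator(new StubDiscounts());
        Assert.AreEqual(90m, calc.Total(100m)); // quick: purely in-memory
    }
}

The integration counterpart would construct the real provider against an actual database (say, new SqlDiscountProvider(connectionString), a hypothetical type), which is exactly what pushes it into the slower group.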
I would create separate test libraries, i.e. MyProj.UnitTests.dll and MyProj.IntegrationTests.dll. This way, your unit test library has fewer dependencies than your integration test library, and it becomes easy to specify which test group you want to run.
You can set up a continuous integration server, if you are using something like that, to run the tests at different times, knowing that the first group is quicker than the second. For example, unit tests could run immediately after code is checked in to your repository, and integration tests could run overnight. It's easy to set something like this up using TeamCity.
There is the Test List Editor. I'm not at my Visual Studio computer now so I'll just point to this answer.
I am approaching database testing with NUnit. As it is time-consuming, I don't want to run it every time.
So I created a base class from which every other database testing class derives, thinking that if I decorated the base class with the [Ignore] attribute, the rest of the derived classes would be ignored too, but that's not happening.
Is there any way to ignore a set of classes with minimal effort?
If you don't want to split integration and unit tests into separate projects, you can also group tests into categories:
[Test, Category("Integration")]
Most test runners allow you to filter which categories to run, which gives you finer-grained control if you need it (e.g. 'quick', 'slow' and 'reaaallyy slow' categories).
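This also bears on the earlier question about ignoring the database fixtures: NUnit categories placed on a base class should be inherited by derived fixtures, so one attribute covers the whole set, and the group can then be excluded at run time instead of decorating each class with [Ignore]. A sketch with hypothetical class names:

using NUnit.Framework;

[Category("Database")] // derived fixtures inherit this category
public abstract class DatabaseTestBase
{
    [SetUp]
    public void OpenConnection() { /* shared DB setup */ }
}

[TestFixture]
public class CustomerQueryTests : DatabaseTestBase
{
    [Test]
    public void Finds_Customer_By_Id() { Assert.Pass(); } // placeholder
}

The slow set can then be skipped without editing each class, e.g. dotnet test --filter TestCategory!=Database, or NUnit 3's console runner with --where "cat != Database".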
A recommended approach is separating the unit tests that can run in isolation from your integration tests into different projects; then you can choose which project to execute when you run your tests. This makes it easier to run your fast tests more often, multiple times daily or even hourly (and hopefully without ever having to worry about things like configuration), while letting your slower integration tests run on a different schedule.