Using a test runner for web performance tests - C#

We are writing an app in MVC/C# on VS2013.
We use Selenium WebDriver for UI tests and have written a framework around it to make the process more declarative and less procedural. Here is an example:
[Test, Category("UITest")]
public void CanCreateMeasureSucceedWithOnlyRequiredFields()
{
    ManageMeasureDto dto = new ManageMeasureDto()
    {
        Name = GetRandomString(10),
        MeasureDataTypeId = MeasureDataTypeId.FreeText,
        DefaultTarget = GetRandomString(10),
        Description = GetRandomString(100)
    };

    new ManageScreenUITest<MeasureConfigurationController, ManageMeasureDto>(c => c.ManageMeasure(dto), WebDrivers)
        .Execute(ManageScreenExpectedResult.SavedSuccessfullyMessage);
}
The Execute method takes the values in the DTO and calls selenium.sendKeys and a bunch of other methods to actually perform the test, submit the form, and assert that the response contains what we want.
We are happy with the results of this framework, but it occurs to me that something similar could also be used to create load testing scenarios.
I.e., I could have another implementation of the UI test framework that uses an HttpWebRequest to issue the requests to the web server.
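Roughly what I have in mind for that implementation (purely illustrative; this hard-codes a couple of the DTO's fields rather than walking the DTO the way the real framework would):

// Sketch only: an Execute() that posts the DTO's values with HttpWebRequest
// instead of driving the browser, then asserts on the returned HTML.
// (Uses System.Net, System.IO, System.Linq and NUnit's StringAssert.)
public void Execute(ManageMeasureDto dto, string url)
{
    var fields = new Dictionary<string, string>
    {
        { "Name", dto.Name },
        { "DefaultTarget", dto.DefaultTarget },
        { "Description", dto.Description }
    };
    var body = string.Join("&", fields.Select(f => f.Key + "=" + Uri.EscapeDataString(f.Value ?? "")));

    var request = (HttpWebRequest)WebRequest.Create(url);
    request.Method = "POST";
    request.ContentType = "application/x-www-form-urlencoded";
    using (var writer = new StreamWriter(request.GetRequestStream()))
    {
        writer.Write(body);
    }

    using (var response = (HttpWebResponse)request.GetResponse())
    using (var reader = new StreamReader(response.GetResponseStream()))
    {
        var html = reader.ReadToEnd();
        // assert on the same "saved successfully" message the Selenium version looks for
        StringAssert.Contains("saved successfully", html);
    }
}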
From there, I could essentially "gang up" my tests to create a scenario, e.g.:
public void MeasureTestScenario()
{
    CanListMeasures();
    CanSearchMeasures();
    var measureId = CanCreateMeasure();
    CanEditMeasure(measureId);
}
The missing piece of the puzzle is how to run these "scenarios" under load. I.e., I'm after a test runner that can execute 1..N of these tests in parallel and, hopefully, do things like ramp-up time, random waits in between each scenario, etc. Ideally the count of concurrent tests would be configurable at run time too, i.e., spawn 5 more test threads. Reporting isn't super important because I can log execution times as part of the web request.
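To make that concrete, something along these lines is the shape I'm imagining (ScenarioRunner and its options are made up for illustration, built on Tasks):

using System;
using System.Collections.Generic;
using System.Threading.Tasks;

public class ScenarioRunnerOptions
{
    public int ConcurrentUsers { get; set; }
    public int IterationsPerUser { get; set; }
    public TimeSpan RampUp { get; set; }
    public TimeSpan MaxRandomWait { get; set; }
}

public class ScenarioRunner
{
    public void Run(Action scenario, ScenarioRunnerOptions options)
    {
        var tasks = new List<Task>();
        for (int user = 0; user < options.ConcurrentUsers; user++)
        {
            // stagger each virtual user's start across the ramp-up period
            var startDelay = TimeSpan.FromMilliseconds(
                options.RampUp.TotalMilliseconds * user / options.ConcurrentUsers);

            tasks.Add(Task.Run(async () =>
            {
                var random = new Random(Guid.NewGuid().GetHashCode());
                await Task.Delay(startDelay);
                for (int i = 0; i < options.IterationsPerUser; i++)
                {
                    scenario();
                    // random think time between scenario runs
                    await Task.Delay(random.Next((int)options.MaxRandomWait.TotalMilliseconds + 1));
                }
            }));
        }
        Task.WaitAll(tasks.ToArray());
    }
}

Usage would then be something like:

new ScenarioRunner().Run(MeasureTestScenario, new ScenarioRunnerOptions
{
    ConcurrentUsers = 5,
    IterationsPerUser = 10,
    RampUp = TimeSpan.FromSeconds(30),
    MaxRandomWait = TimeSpan.FromSeconds(5)
});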
Maybe there is another approach too, e.g., using C# to control a more traditional load testing tool?
The real advantage of this method is handling more complicated screens, i.e., a screen that contains collections of objects. I can get a random parent record from the DB, use my existing AutoMapper code to create the nested DTO, have code to walk that DTO changing random values, then use my test framework to submit that DTO's values as a web request. Much easier than hand coding JMeter scripts.
cheers,
dave

While using NUnit as a runner for different tasks/tests is OK, I don't think it's the right tool for performance testing.
It uses reflection, it has overhead, it ends up using the predefined WebClient settings, and so on. It's much better to use a dedicated performance testing tool. You can use NUnit to configure/start it :)

I disagree that NUnit isn't usable; with the appropriate logging, any functional test runner can be used for performance testing. The overhead does not matter when the test code itself is providing data such as when requests are sent and responses received, or when a page is requested and when it finishes loading. Parsing log files and generating some basic metrics is a pretty trivial task IMO. If it becomes a huge project, that changes things, but more likely I could get by with a command line app that takes an hour or two to write.
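For example, a trivial helper along these lines (the name and log format are my own, just to illustrate) gives you raw timings to parse later:

using System;
using System.Diagnostics;

public static class Timed
{
    // Wraps any step of a test, logs how long it took, and returns the step's result.
    public static T Run<T>(string label, Func<T> action)
    {
        var stopwatch = Stopwatch.StartNew();
        try
        {
            return action();
        }
        finally
        {
            stopwatch.Stop();
            // one tab-separated line per step; trivial to parse into averages/percentiles later
            Console.WriteLine("{0:o}\t{1}\t{2}ms", DateTime.UtcNow, label, stopwatch.ElapsedMilliseconds);
        }
    }
}

Each request or page load gets wrapped in a call like Timed.Run("Create measure", () => ExecuteCreateMeasure()) (ExecuteCreateMeasure being whatever your step is), and a small command line app can roll the log up afterwards.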
I think the real question is how much you can leverage by doing it with your normal test runner. If you already have a lot of logging in your test code and can easily invoke it, then using NUnit becomes pretty appealing. If you don't have sufficient logging or a framework that would support some decent load scenarios, then using dedicated load testing tools/frameworks would probably be easier.

Related

How should I assert in WebAPI integration test?

I'm writing integration tests for the controller classes in my .NET Core WebAPI, and honestly, I'm not sure if Assert.Equal(HttpStatusCode.OK, response.StatusCode); is enough. I'm using my production Startup class, with only the database being different (an InMemory database).
This is one of my tests, named ShouldAddUpdateAndDeleteUser. It's basically:
Send a POST request with specific input.
Assert the POST worked. Because the POST responds with the created object, I can assert on every property and that the Id is greater than 0.
Change the output a little bit and send an UPDATE request.
Send a GET request and assert the update worked.
Send a DELETE request.
Send a GET request and assert the result is null.
Basically, I test ADD, UPDATE, DELETE, GET (when the item exists), and GET (when the item doesn't exist).
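In rough outline it looks something like this (the UserDto type, routes, and status codes are simplified for illustration; I'm using WebApplicationFactory from Microsoft.AspNetCore.Mvc.Testing and the System.Net.Http.Json extensions):

using System.Net;
using System.Net.Http;
using System.Net.Http.Json;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc.Testing;
using Xunit;

public class UserDto
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public class UsersControllerTests : IClassFixture<WebApplicationFactory<Startup>>
{
    private readonly HttpClient _client;

    public UsersControllerTests(WebApplicationFactory<Startup> factory)
    {
        // the factory uses the production Startup; only the database is swapped for InMemory
        _client = factory.CreateClient();
    }

    [Fact]
    public async Task ShouldAddUpdateAndDeleteUser()
    {
        // 1. POST with specific input, assert on the returned object
        var createResponse = await _client.PostAsJsonAsync("/api/users", new UserDto { Name = "Dave" });
        Assert.Equal(HttpStatusCode.OK, createResponse.StatusCode);
        var created = await createResponse.Content.ReadFromJsonAsync<UserDto>();
        Assert.True(created.Id > 0);
        Assert.Equal("Dave", created.Name);

        // 2. Change it a little and send the update (returns NoContent)
        created.Name = "Dave Updated";
        var updateResponse = await _client.PutAsJsonAsync("/api/users/" + created.Id, created);
        Assert.Equal(HttpStatusCode.NoContent, updateResponse.StatusCode);

        // 3. GET and assert the update worked
        var updated = await _client.GetFromJsonAsync<UserDto>("/api/users/" + created.Id);
        Assert.Equal("Dave Updated", updated.Name);

        // 4. DELETE, then GET again and expect nothing back
        var deleteResponse = await _client.DeleteAsync("/api/users/" + created.Id);
        Assert.Equal(HttpStatusCode.NoContent, deleteResponse.StatusCode);
        var afterDelete = await _client.GetAsync("/api/users/" + created.Id);
        Assert.Equal(HttpStatusCode.NotFound, afterDelete.StatusCode);
    }
}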
I have a few questions:
Is it a good practice to have such tests? It does do a lot, but it's not a unit test after all. If it fails, I can be pretty specific and specify which part didn't work. Worst case scenario I can debug it pretty quickly.
Is it integration tests or functional test or neither?
If this is wrong, how can I test DELETE or UPDATE? I'm kinda forced to call a GET request after them (they return NoContent).
(Side note: it's not the only test I have for that controller, obviously. I also have tests for GET all as well as BadRequest requests.)
Is it a good practice to have such tests? It does do a lot, but it's not a unit test after all. If it fails, I can be pretty specific and specify which part didn't work. Worst case scenario I can debug it pretty quickly.
Testing your endpoints full stack does have value, but make sure to keep the testing pyramid in mind. It's valuable to have a small number of full stack integration tests to validate the most important pathways/use cases in your application. Having too many, however, can add a lot of extra time to your test runs, which could impact release times if those tests are tied to releasing (which they should be).
Is it integration tests or functional test or neither?
Without seeing more of the code, I would guess it's probably a bit of both. Functionally, you want to test the output of the endpoints, and it seems as though you're testing some level of component integration if you're mocking out data in memory.
If this is wrong, how can I test DELETE or UPDATE? I'm kinda forced to call GET request after them (They return NoContent)
Like I said earlier, a few full stack tests are fine in my opinion, as long as they provide value for major workflows in your application. I would suggest one or two integration tests that make database calls to make sure your application can stand up full stack, but I wouldn't bundle that requirement into your functional testing. In my experience you get way more value from modular component testing and a couple of end-to-end functional tests.
I think the test flow you mentioned makes sense. I've done similar patterns throughout my services at work. Sometimes you can't realistically test one component without testing another like your example about getting after a delete.

Saving list data (or data items) manually for future use

I am debugging a .NET (C#) app in VS2013. I am calling a REST service and it returns data as a list of phone calls that have been made between two given dates.
So the structure of the List is something like CallDetails.Calls, and the Calls property has several child properties like call duration, time of call, price per minute, etc.
Since I am debugging the app, I want to avoid hitting the server where the REST service is hosted every time.
So my question is simply this: once I have received the List of data items, is there a way to (kind of) copy and paste that data into a file and later use it in a statically defined List instead of fetching it from the server?
In case someone wonders why I would want to do that: there is some caching of all incoming requests on the server, and after a while it gets full, so the server does not return data and ultimately a timeout error occurs.
I know that should be solved on the server somehow, but that is not possible today, which is why I am asking this question.
Thanks
You could create a unit test using MSTest or NUnit. I know unit tests are scary if you haven't used them before, but for simple automated tests like this they are awesome. You don't have to worry about lots of the stuff people talk about for "good unit testing" in order to get started testing this one item. Once you have the list in the debugger while testing, you could then:
save it out to a text file,
manually (one time) write the code to load it back from the text file,
use that code as the set-up for your test.
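A minimal sketch of the idea, assuming Json.NET (Newtonsoft.Json) and a Call class shaped like your data:

using System.Collections.Generic;
using System.IO;
using Newtonsoft.Json;

// One-off: while the real response is sitting in the debugger (or a throwaway test),
// dump the list to disk.
File.WriteAllText(@"C:\temp\calls.json",
    JsonConvert.SerializeObject(callDetails.Calls, Formatting.Indented));

// In the test set-up: load the saved data instead of hitting the REST service.
List<Call> calls = JsonConvert.DeserializeObject<List<Call>>(
    File.ReadAllText(@"C:\temp\calls.json"));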
MSTest tutorial: https://msdn.microsoft.com/en-us/library/ms182524%28v=vs.90%29.aspx
NUnit is generally considered superior to MSTest and, in particular, has a feature that is better for running a set of data through the same test. Based on what you've said, I don't think you need it, and I've never used it myself. If you do want to go that route, the keyword to look for after the quickstart guide is TestCase. http://nunitasp.sourceforge.net/quickstart.html
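For reference, NUnit's TestCase attribute looks roughly like this:

[TestCase(1, 2, 3)]
[TestCase(2, 2, 4)]
public void Adding_two_numbers(int a, int b, int expected)
{
    // the same test body runs once per TestCase row
    Assert.AreEqual(expected, a + b);
}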

TDD on a configuration tool touching database

I am working on writing a tool which:
- sets up a connection to SQL and runs a series of stored procedures
- hits the file system to verify and also delete files
- talks to other subsystems through exposed APIs
I am new to the concept of TDD but have been doing a lot of reading on it. I wanted to apply TDD to this development but I am stuck. There are a lot of interactions with external systems which need to be mocked/stubbed or faked. What I am finding difficult is the proper approach to take when doing this in TDD. Here is a sample of what I would like accomplished:
public class MyConfigurator
{
    public static void Start()
    {
        CheckSystemIsLicenced();            // will throw if it's not licenced; calls a library owned by the company
        CleanUpFiles();                     // cleans up several directories
        CheckConnectionToSql();             // ensures a connection to SQL can be made
        ConfigureSystemToolsOnDatabase();   // runs a set of stored procedures; a range of checks are also implemented and will throw if something goes wrong
    }
}
After this I have another class which cleans up the system if things have gone wrong. For the purposes of this question it's not that relevant, but it essentially just clears certain tables and fixes up the database so that the tool can run again from scratch to do its configuration tasks.
It almost appears here that when using TDD the only tests I end up with are things like (assuming I am using FakeItEasy):
A.CallTo(() => fakeLicenceChecker.CheckSystemIsLicenced("lickey")).MustHaveHappened();
It just ends up being a whole lot of tests which only appear to be "MustHaveHappened". Am I doing something wrong? Is there a different way to start this project using TDD? Or is this a particular scenario where perhaps TDD is not really recommended? Any guidance would be greatly appreciated.
In your example, if the arrangement of the unit test shows lickey as the input, then it is reasonable to assert that the endpoint has been called with the proper value. In more complex scenarios, the input-to-assert flow covers more subsystems so that the test itself doesn't seem as trivial. You might set up an ID value as input and test that down the line you are outputting a value for an object that is deterministically related to the input ID.
One aspect of TDD is that the code changes while the tests do not - except for functionally equivalent refactoring. So your first tests would naturally arrange and assert data at the outermost endpoints. You would start with a test that writes a real file to the filesystem, calls your code, and then checks to see that the file is deleted as expected. Of course, the file system is a messy workspace for portable testing, so you might decide early on to abstract the file system by one step. Ditto with the database by using EF and mocking your DbContext or by using a mocked repository pattern. These abstractions can be pre-TDD application architecture decisions.
Something I do frequently is to use utility code that starts with an IFileSystem interface that declares methods which mimic a lot of what is available in System.IO.File. In production I use an implementation of IFileSystem that just passes through to the File.XXX() methods. Then you can mock and verify the interface instead of trying to set up and clean up real files.
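A minimal sketch of that pattern (the exact members depend on what your clean-up code actually needs):

using System.IO;

public interface IFileSystem
{
    bool FileExists(string path);
    void DeleteFile(string path);
    string[] GetFiles(string directory);
}

// Production implementation: a thin pass-through to System.IO
public class PhysicalFileSystem : IFileSystem
{
    public bool FileExists(string path) { return File.Exists(path); }
    public void DeleteFile(string path) { File.Delete(path); }
    public string[] GetFiles(string directory) { return Directory.GetFiles(directory); }
}

In the tests, CleanUpFiles() gets a fake IFileSystem, so you can verify the right files were deleted without touching the disk.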
In this particular method the only thing you can test is that the methods were called. It's OK to do what you are doing by asserting against the mock classes. It's up to you to determine whether this particular test is valuable or not. TDD assumes tests for everything, but I find it more practical to focus your testing on scenarios where it adds value. It's hard for others to make that determination, but you should trust yourself to make the call in each specific scenario.
I think integration tests would give the most bang for the buck; use the real DB and file system.
If you have complex logic in the tool, then you may want to restructure the tool's design to abstract out the DB and file system and write the unit tests with mocks. From the code snippet you posted, it looks like a simple script to me.

How to run SpecFlow tests on a build server?

How do I run SpecFlow tests using NUnit 2.6.1 on a build server?
Also, how do you maintain and organize these tests to run successfully on the build server with multiple automation programmers coding separate tests?
I will try to answer the second part of your question for now:
"How do you maintain and organize these tests to run successfully on the build server with multiple automation programmers coding separate tests?"
These kinds of tests have a huge potential for turning into an unmaintainable mess, and here is why:
steps are used to create contexts (for the scenario test)
steps can be called from any scenario in any order
Due to these two simple facts, it's easy to compare this model with procedural/structured programming. The scenario context is no different from a set of global variables, and the steps are methods that can be called at any time, in any place.
What my team does to avoid a huge mess in the step files is to keep them as stupid as possible. All a step does is parse its arguments and call services that do the real, meaningful work and hold the context of the current test. We call these services 'xxxxDriver' (where xxxx is the domain object we are dealing with).
a stupid example:
[Given("a customer named (.*)")]
public void GivenACustomer(string customerName)
{
_customerDriver.CreateCustomer(customerName);
}
[Given("an empty schedule for the customer (.*)")]
public void GivenEmptySchedule(string customerName)
{
var customer = _customerDriver.GetCustomer(customerName);
_scheduleDriver.CreateForCustomer(customer);
}
The 'xxxxDriver' will contain all the repositories, web gateways, stubs, mocks, or anything else related to the domain object in question. Another important detail is that if you inject these drivers into the step files, SpecFlow will create one instance of each per scenario and share it among all the step files.
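A rough sketch of what such a driver might look like (Customer and ICustomerRepository are just stand-ins for whatever repository, web gateway, or stub you actually use):

using System.Collections.Generic;

public class CustomerDriver
{
    private readonly ICustomerRepository _repository;
    private readonly Dictionary<string, Customer> _customersByName = new Dictionary<string, Customer>();

    public CustomerDriver(ICustomerRepository repository)
    {
        _repository = repository;
    }

    public void CreateCustomer(string customerName)
    {
        var customer = new Customer { Name = customerName };
        _repository.Save(customer);
        // the driver, not the step file, holds the scenario context
        _customersByName[customerName] = customer;
    }

    public Customer GetCustomer(string customerName)
    {
        return _customersByName[customerName];
    }
}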
This was the best way we found to keep some consistency in the way we maintain and extend the steps without stepping on each other's toes with a big team touching the same codebase. Another huge advantage is that it helps us find similar steps by navigating through the usages of the methods in the driver classes.
There's a clear example in the SpecFlow codebase itself (look at the drivers folder):
https://github.com/techtalk/SpecFlow/tree/master/Tests/TechTalk.SpecFlow.Specs

Applying CQRS - Is unit testing the thin read layer necessary?

Given that some of the advice for implementing CQRS advocates a fairly close-to-the-metal query implementation, such as ADO.NET queries directly against the database (or perhaps a LINQ-based ORM), is it a mistake to try to unit test them?
I wonder if it's really even necessary?
My thoughts on the matter:
The additional architectural complexity of providing a mockable "Thin Read Layer" seems opposed to the very advice to keep the architectural ceremony to a minimum.
The number of unit tests needed to effectively cover every angle of query that a user might compose is horrendous.
Specifically I'm trying CQRS out in an ASP.NET MVC application and am wondering whether to bother unit testing my controller action methods, or just test the Domain Model instead.
Many thanks in advance.
In my experience 90%-99% of the reads you will be doing if you are creating a nice de-normalized read model DO NOT warrant having unit tests around them.
I have found that the most effective and efficient way to TDD a CQRS application is to write integration tests that push commands into your domain, then use the Queries to get the data back out of the DB for your assertions.
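Something along these lines, for example (the command, the bus, and the query service are all illustrative; assume they are wired up against a test database in the fixture's SetUp):

[Test]
public void Registering_a_customer_shows_up_in_the_read_model()
{
    var customerId = Guid.NewGuid();

    // push a command through the real write side
    _commandBus.Send(new RegisterCustomer(customerId, "Dave"));

    // assert against the denormalized read model, using the same query the UI would use
    var customer = _customerQueries.GetById(customerId);
    Assert.That(customer, Is.Not.Null);
    Assert.That(customer.Name, Is.EqualTo("Dave"));
}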
I would tend to agree with you that unit testing this kind of code is not so beneficial. But there is still some scope for some useful tests.
You must be performing some validation of the user's read query parameters; if so, test that invalid request parameters throw a suitable exception and that valid parameters are allowed.
If you're using an ORM, I find the cost/benefit ratio too great for testing the mapping code. Assume your ORM is already tested; there could be errors in the mapping, but you'll soon find and fix them.
I also find it useful to write some integration tests (using the same testing framework), just to be sure I can make a connection to the database and the ORM configuration doesn't throw any mapping exceptions. You certainly don't want to be writing unit tests that query the actual DB.
As you probably already know, unit testing is less about code coverage and preventing bugs than it is about good design. While I often skip testing the read-model event handlers when I'm in a hurry, there can be no doubt that it probably should be done, for all the reasons code should be TDD'd.
I also have not been unit testing my HTTP actions (I don't have controllers per se, since I'm using Nancy, not .NET MVC).
These are integration points and don't tend to contain much logic since most of it is encapsulated in the command handlers and domain model.
The reason I think it is fairly easy not to test these is that they are very simple and very repetitive; there is almost no deep thinking in the denormalization of events to the read model. The same is true for my HTTP handlers, which pretty much just process the request and issue a command to the domain, with some basic logic for returning a response to the client.
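To give an idea of the kind of read-model handler I mean (a sketch only, assuming Dapper for the SQL; the CustomerRegistered event type is made up):

using System.Data;
using Dapper;

public class CustomerRegisteredDenormalizer
{
    private readonly IDbConnection _connection;

    public CustomerRegisteredDenormalizer(IDbConnection connection)
    {
        _connection = connection;
    }

    public void Handle(CustomerRegistered e)
    {
        // straight insert into the flattened read table; no deep thinking here
        _connection.Execute(
            "insert into CustomerList (Id, Name) values (@Id, @Name)",
            new { e.Id, e.Name });
    }
}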
When developing, I often make mistakes in this code, and I probably would make far fewer of them if I were using TDD, but it would also take much longer, and these mistakes tend to be very easy to spot and fix.
My gut tells me I should still apply TDD here though, it is still all very loosely coupled and it shouldn't be hard to write the tests. If you are finding it hard to write the tests that could indicate a code smell in your controllers.
The way that I have seen something like this unit tested is to have the unit test create a set of things in the database, run your tests, and then clean out the created things.
In one past job I saw this set up very nicely using a data structure to describe the objects and their relationships. This was run through the ORM to create those objects with those relationships, the data from that was used for the queries, and then the ORM was used to delete the objects. To make the unit tests easier to set up, every class specified default values to use in tests that didn't override those values. Then the data structure in the unit tests only needed to specify non-default values, which made the setup of the unit tests much more compact.
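A rough sketch of the defaults-plus-overrides idea (the Customer type and the default values are invented):

public class CustomerBuilder
{
    // defaults used by any test that doesn't care about these values
    private string _name = "Default Customer";
    private string _country = "SE";

    public CustomerBuilder WithName(string name) { _name = name; return this; }
    public CustomerBuilder WithCountry(string country) { _country = country; return this; }

    public Customer Build()
    {
        return new Customer { Name = _name, Country = _country };
    }
}

A test that only cares about the name then reads new CustomerBuilder().WithName("Acme").Build(), and everything else falls back to the defaults.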
These unit tests were very useful, and caught a number of bugs in the database interaction.
In one of my ASP.NET MVC applications I've also applied CQRS. But instead of SQL and ADO.NET queries or LINQ, we use a document database (MongoDB), and each command or event handler updates the database directly.
I've created one test per command/event handler, and after reaching 100% unit test coverage I know that my domain works about 95% correctly.
But for actions/controllers/UI I've applied UI tests (using Selenium).
So it seems that both the unit tests for the domain (command/event handlers and direct updates to the database) and the UI tests are your 'integration tests'.
I think that you should test the domain part at least, because all your logic is encapsulated in the command/event handlers.
FYI: I started developing the domain part first with Entity Framework, then with direct updates to the database through stored procedures, but was really happy with a document database. I tried some different document databases, but MongoDB looks like the best fit for me.
