Reusing MSTest tests with CodedUI - C#

I have a series of MS unit tests in a class that I have created called Forename. They all run and pass successfully, testing a variety of inputs, e.g. 100 chars max, etc.
I am now looking at getting CodedUI to find the forename control and execute these tests. I have managed to do this for one specific web page that has the forename control, and got it all working and passing.
I have now introduced a second page and I want to re-use the same set of test methods rather than repeat the code. I can define an interface, implement it, and extract some methods to allow some re-use. I have tried to use inheritance, but am struggling and need some guidance on whether this is possible with MSTest.
Ideally I want to navigate to the first page and run the forename tests, then go to the second page and execute the exact same tests.
All help appreciated.

It looks like you want some guidance on how to set up CodedUI tests in a way that keeps them maintainable. I recommend you look at a concept called the Page Object pattern.
Page Object pattern
You can do this with the UIMap files you record, but personally I prefer a code-first approach. You can find more details on writing CodedUI without UIMap files here.
It not only describes how you can map your web application to page objects, it also describes a way to build out a fluent API that makes your tests easier to read and better to maintain.
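As a rough illustration (not taken from the linked article), here is a minimal hand-written page object in C#; the URLs, control id, and assertion are placeholders for your own pages and forename checks:

    using System;
    using Microsoft.VisualStudio.TestTools.UITesting;
    using Microsoft.VisualStudio.TestTools.UITesting.HtmlControls;
    using Microsoft.VisualStudio.TestTools.UnitTesting;

    // Wraps any page that contains the forename control.
    public class ForenamePage
    {
        private readonly BrowserWindow _browser;

        public ForenamePage(BrowserWindow browser)
        {
            _browser = browser;
        }

        public ForenamePage EnterForename(string value)
        {
            var forename = new HtmlEdit(_browser);
            forename.SearchProperties[HtmlEdit.PropertyNames.Id] = "forename"; // placeholder id
            forename.Text = value;
            return this;
        }

        public string ForenameValue()
        {
            var forename = new HtmlEdit(_browser);
            forename.SearchProperties[HtmlEdit.PropertyNames.Id] = "forename";
            return forename.Text;
        }
    }

    [CodedUITest]
    public class ForenameTests
    {
        [TestMethod]
        public void Forename_Accepts100Characters_OnBothPages()
        {
            // The same test logic runs against both pages by varying only the URL.
            foreach (var url in new[] { "http://example/page1", "http://example/page2" })
            {
                var browser = BrowserWindow.Launch(new Uri(url));
                var page = new ForenamePage(browser).EnterForename(new string('a', 100));

                Assert.AreEqual(100, page.ForenameValue().Length);
                browser.Close();
            }
        }
    }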
Hope that helps.

Related

Saving list data (or data items) manually for future use

I am debugging a .NET (C#) app in VS2013. I am calling a REST service and it returns data as a list of phone calls that have been made between two given dates.
The structure of the list is something like CallDetails.Calls, and the Calls property has several child properties such as call duration, time of call, price per minute, etc.
Since I am debugging the app, I want to avoid hitting the server where the REST service is hosted every time.
So my question is simply: once I have received the list of data items, is there a way to (kind of) copy and paste the data into a file and later use it in a statically defined list instead of fetching it from the server?
In case someone wonders why I would want to do that: there is some caching of all incoming requests on the server, and after a while it gets full, so the server does not return data and ultimately a timeout error occurs.
I know that should be solved on the server somehow, but that is not possible today, which is why I am asking this question.
Thanks
You could create a unit test using MSTest or NUnit. I know unit tests are scary if you haven't used them before, but for simple automated tests they are awesome. You don't have to worry about a lot of the advice people give about "good unit testing" in order to get started testing this one item. Once you have the list in the debugger while testing, you could then:
save it out to a text file,
manually (one time) write the code that reads it back from the text file,
use that code as the set-up for your test.
MSTest tutorial: https://msdn.microsoft.com/en-us/library/ms182524%28v=vs.90%29.aspx
NUnit is generally considered superior to MSTest, and in particular has a feature that is better for running a set of data through the same test. Based on what you've said, I don't think you need it (and I've never used it myself), but if you want to go that route, the keyword to look for after the quickstart guide is TestCase. http://nunitasp.sourceforge.net/quickstart.html
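As a rough sketch of those steps (assuming Json.NET is available; the CallDetail class and its properties are guesses based on your description, so adjust them to your real types):

    using System;
    using System.Collections.Generic;
    using System.IO;
    using Microsoft.VisualStudio.TestTools.UnitTesting;
    using Newtonsoft.Json;

    public class CallDetail
    {
        public TimeSpan CallDuration { get; set; }
        public DateTime TimeOfCall { get; set; }
        public decimal PricePerMinute { get; set; }
    }

    public static class CallSnapshot
    {
        private const string Path = @"C:\temp\calls.json"; // any convenient location

        // Call this once (e.g. from the Immediate window while paused in the debugger)
        // right after the real REST call has returned.
        public static void Save(List<CallDetail> calls)
        {
            File.WriteAllText(Path, JsonConvert.SerializeObject(calls, Formatting.Indented));
        }

        public static List<CallDetail> Load()
        {
            return JsonConvert.DeserializeObject<List<CallDetail>>(File.ReadAllText(Path));
        }
    }

    [TestClass]
    public class CallProcessingTests
    {
        [TestMethod]
        public void ProcessCalls_WorksAgainstSavedData()
        {
            List<CallDetail> calls = CallSnapshot.Load();

            // ...run your code under test against 'calls' instead of hitting the server...
            Assert.IsTrue(calls.Count > 0);
        }
    }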

TDD on a configuration tool touching database

I am working on writing a tool which:
- sets up a connection to SQL and runs a series of stored procedures
- hits the file system to verify and also delete files
- talks to other subsystems through exposed APIs
I am new to the concept of TDD but have been doing a lot of reading on it. I wanted to apply TDD to this development but I am stuck. There are a lot of interactions with external systems which need to be mocked/stubbed or faked. What I am finding difficult is the proper approach to take in doing this with TDD. Here is a sample of what I would like accomplished.
public class MyConfigurator
{
    public static void Start()
    {
        CheckSystemIsLicenced();          // will throw if it's not licensed; makes a call to a library owned by the company
        CleanUpFiles();                   // clean up several directories
        CheckConnectionToSql();           // ensure a connection to SQL can be made
        ConfigureSystemToolsOnDatabase(); // runs a set of stored procedures; a range of checks are also implemented and will throw if something goes wrong
    }
}
After this I have another class which cleans up the system if things have gone wrong. For the purpose of this question it's not that relevant; it essentially just clears certain tables and fixes up the database so that the tool can run again from scratch to do its configuration tasks.
It almost appears that when using TDD the only tests I end up with are things like (assuming I am using FakeItEasy):
A.CallTo(() => fakeLicenceChecker.CheckSystemIsLicenced("lickey")).MustHaveHappened();
It just ends up being a whole lot of tests which are nothing but "MustHaveHappened". Am I doing something wrong? Is there a different way to start this project using TDD? Or is this a particular scenario where TDD is not really recommended? Any guidance would be greatly appreciated.
In your example, if the arrangement of the unit test shows lickey as the input, then it is reasonable to assert that the endpoint has been called with the proper value. In more complex scenarios, the input-to-assert flow covers more subsystems so that the test itself doesn't seem as trivial. You might set up an ID value as input and test that down the line you are outputting a value for an object that is deterministically related to the input ID.
One aspect of TDD is that the code changes while the tests do not - except for functionally equivalent refactoring. So your first tests would naturally arrange and assert data at the outermost endpoints. You would start with a test that writes a real file to the filesystem, calls your code, and then checks to see that the file is deleted as expected. Of course, the file system is a messy workspace for portable testing, so you might decide early on to abstract the file system by one step. Ditto with the database by using EF and mocking your DbContext or by using a mocked repository pattern. These abstractions can be pre-TDD application architecture decisions.
Something I do frequently is to use utility code that starts with an IFileSystem interface declaring methods that mimic a lot of what is available in System.IO.File. In production I use an implementation of IFileSystem that just passes through to the File.XXX() methods. Then you can mock and verify the interface instead of trying to set up and clean up real files.
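As a small sketch of that idea (the interface members here are assumptions, covering only what a tool like this might need):

    using System.IO;

    public interface IFileSystem
    {
        bool FileExists(string path);
        void DeleteFile(string path);
        string[] GetFiles(string directory, string searchPattern);
    }

    // Production implementation: a thin pass-through to System.IO with no logic of its own.
    public class PhysicalFileSystem : IFileSystem
    {
        public bool FileExists(string path) { return File.Exists(path); }
        public void DeleteFile(string path) { File.Delete(path); }
        public string[] GetFiles(string directory, string searchPattern)
        {
            return Directory.GetFiles(directory, searchPattern);
        }
    }

The configurator then takes an IFileSystem in its constructor; tests hand it A.Fake<IFileSystem>() and assert against the fake, while production code passes in PhysicalFileSystem.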
In this particular method the only thing you can test is that the methods were called. It's ok to do what you are doing by asserting the mock classes. It's up to you to determine if this particular test is valuable or not. TDD assumes tests for everything, but I find it to be more practical to focus your testing on scenarios where it adds value. Hard for others to make that determination, but you should trust yourself to make the call in each specific scenario.
I think integration tests would add the most bang for the buck. Use the real database and file system.
If you have complex logic in the tool, then you may want to restructure the tool design to abstract out the database and file system and write the unit tests with mocks. From the code snippet you posted, it looks like a simple script to me.

Testing ASP.NET Webpages without using [HostType("ASP.NET")]

So I'm trying to get productive practicing TDD in an ASP.NET Web Pages project, and launching a server every time I want to run my tests isn't exactly fast.
So I'm trying to find a way to do my testing without the [HostType("ASP.NET")] attribute, but there is always some error.
We're using the App_GlobalResources folder for our resource files, and this is one of the problems: when removing the attributes and keeping just [TestMethod] (using MSTest), it can't find the resources. So I'm assuming that it's not able to find the resources assembly.
So, has anyone done this before? Any experiences?
And to comments saying "why don't you just convert to MVC": it's just too big an app and too little time. Maybe it'll happen in a couple of years, maybe more, maybe never.
My experience testing ASP.NET Web apps has been painful too (which it sounds like yours is as well!), and I understand resisting the "just convert to MVC" comments.
My best advice would be to take little steps each time you touch an area to make it more testable. For example, for any bits of code you can pull out into their own assembly that you can reference, do so.
The first candidate would be your resource files. Then your tests could reference that satellite assembly without the "App_" hoops to jump through.
The approach I took involved creating a presenter class for each page. You create an interface that the page implements, with the methods and properties the presenter needs to control the UI. All your logic goes in the presenter, along with a reference to the interface. The page references the presenter and passes itself in; a sketch follows below.
The benefit is that the page should now only contain code to make the UI work; the presenter does most of the work. Because the presenter accesses the UI via the interface, it can control the UI, and because that access goes through an interface, you can test the presenter using a mocked UI.
I found my pages were vastly simplified, with much greater differentiation between code supporting the logic of the app and code making the UI work. It also made it simpler to introduce service classes and IoC, which is not always easy with Web Forms.
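Here is a minimal sketch of that presenter pattern; the view members, validation rule, and names are illustrative assumptions, not taken from your app:

    using Microsoft.VisualStudio.TestTools.UnitTesting;

    public interface ICustomerView
    {
        string ForenameInput { get; }
        void ShowValidationError(string message);
    }

    public class CustomerPresenter
    {
        private readonly ICustomerView _view;

        public CustomerPresenter(ICustomerView view)
        {
            _view = view;
        }

        // All decision-making lives here, so it can be tested without a real page.
        public bool Save()
        {
            if (string.IsNullOrWhiteSpace(_view.ForenameInput))
            {
                _view.ShowValidationError("Forename is required.");
                return false;
            }
            return true;
        }
    }

    // The presenter can now be unit tested with a hand-rolled (or mocked) view,
    // without [HostType("ASP.NET")] or a running server.
    [TestClass]
    public class CustomerPresenterTests
    {
        private class FakeView : ICustomerView
        {
            public string ForenameInput { get; set; }
            public string LastError { get; private set; }
            public void ShowValidationError(string message) { LastError = message; }
        }

        [TestMethod]
        public void Save_WithEmptyForename_ReportsValidationError()
        {
            var view = new FakeView { ForenameInput = "" };
            var presenter = new CustomerPresenter(view);

            Assert.IsFalse(presenter.Save());
            Assert.AreEqual("Forename is required.", view.LastError);
        }
    }

In the real page, the code-behind implements ICustomerView, creates the presenter in Page_Load, and passes itself in; the page members simply read from and write to the server controls.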

Is there a way to cover all permutations of parameters in a test

I am writing a Selenium test to validate an input form on my web page. I would like to cover all combinations of input data, but I would not like to write a separate test for each. Right now I'm using an Excel spreadsheet as a data source, with one combination listed per row.
I was hoping there would be a way to cover all the cases without needing the Excel file or a separate test for each case. Is there anything that can help with this?
If you can come up with each possibility you want to test for each parameter, then you can do a quick cross-join using multiple from statements in LINQ syntax to come up with every combination of those possibilities.
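For illustration, here is what that cross-join looks like; the parameter names and candidate values are invented, so substitute your real form fields:

    using System;
    using System.Linq;

    class AllCombinations
    {
        static void Main()
        {
            // Candidate values for each form field.
            var forenames = new[] { "", "Bob", new string('a', 100) };
            var ages = new[] { 0, 17, 18, 120 };
            var countries = new[] { "UK", "US" };

            // Multiple 'from' clauses cross-join the arrays, yielding every combination.
            var combinations =
                from forename in forenames
                from age in ages
                from country in countries
                select new { forename, age, country };

            foreach (var c in combinations)
            {
                // Drive the Selenium form-filling code with each combination here.
                Console.WriteLine("{0} | {1} | {2}", c.forename, c.age, c.country);
            }
        }
    }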
You may also want to look at Pex. It can analyze your class and generate test methods to test every possible code path. It can be really useful for finding those corner cases that you might not have thought of on your own. Of course, this is only useful if you've written the class in a unit-testable manner. It may not help if your web page's form isn't backed by an MVC action or something of that sort.
Do you just need a way to get all combinations of values?
You can do this with LINQ and various other techniques - see this question as an example.
So generate all input combinations for your method and then just write a unit test (potentially very long-running) in MSTest or whatever to check each one.

Refactoring strategy for a class which generates a specific text file

I am a TDD noob and I don't know how to solve the following problem.
I have a pretty large class which generates a text file in a specific format, for import into an external system. I am going to refactor this class and I want to write unit tests before I do.
What should these tests look like? The main goal is to not break the structure of the file. Does that mean I should just compare the contents of the file before and after?
I think you would benefit from a test that I would hesitate to call a "unit test" - although arguably it tests the current text-file-producing "unit". This would simply run the current code and do a diff between its output and a "golden master" file (which you could generate by running the test once and copying to its designated location). If there is much conditional behavior in the code, you may want to run this with several examples, each a different test case. With the existing code, by definition, all the tests should pass.
Now start to refactor. Extract a method - or better, write a test for a method that you can envision extracting, a true unit test - extract the method, and ensure that all tests, for the new small method and for the bigger system, still pass. Lather, rinse, repeat. The system tests give you a safety net that lets you go forward in the refactoring with confidence; the unit tests drive the design of the new code.
There are libraries available to make this kind of testing easier (although it's pretty easy even without them). See http://approvaltests.sourceforge.net/.
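A bare-bones version of such a golden-master test might look like this; LegacyExportGenerator, its WriteTo method, and the file paths are placeholders for your real class and locations:

    using System.IO;
    using Microsoft.VisualStudio.TestTools.UnitTesting;

    [TestClass]
    public class ExportFileRegressionTests
    {
        [TestMethod]
        public void GeneratedFile_MatchesGoldenMaster()
        {
            // Run the existing, unrefactored code.
            string actualPath = Path.Combine(Path.GetTempPath(), "export_actual.txt");
            new LegacyExportGenerator().WriteTo(actualPath); // placeholder for the class under test

            // Compare against the approved output checked into source control.
            string expected = File.ReadAllText(@"TestData\export_master.txt");
            string actual = File.ReadAllText(actualPath);

            Assert.AreEqual(expected, actual, "The export format changed during refactoring.");
        }
    }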
In such a case I use the following strategy:
Write a test for each method (just covering its default behavior without any error handling etc.)
Run a code coverage tool and find the blocks not covered by the tests. Write tests covering these blocks.
Do this until you get a code coverage of over 80%
Start refactoring the class (mostly extracting smaller classes following the separation of concerns principle).
Use Test Driven Development for writing the new classes.
Actually, that's a pretty good place to start (comparing a well-known output against what is being generated by the current class).
If the generator class can produce different kinds of results, then create one such comparison test for each case.
This will ensure that you are not breaking your current generator class.
One thing that might help you is if you have the specification document for the current class. You can use that as the base of your refactoring effort.
If you haven't yet, pick up a copy of Michael Feathers' book "Working Effectively with Legacy Code". It's all about how to add tests to existing code, which is exactly what you're looking for.
But until you finish reading the book, I'd suggest starting with a regression test: create the class, have it write the file to disk, and then compare that file to a "known good" file that you've stashed in your source repository somewhere. If they don't match, fail the test.
Then start looking at the interesting decisions that your class makes. See how you can get them under test. Maybe you extract some complicated if-conditions into public functions that return bool, and you write a battery of tests to prove that, given the right inputs, that function returns the right value. Maybe generation of a particular string has some interesting logic; start testing it.
Along the way, you may find objects that want to get out. For example, you may find that the code (or the tests!) would be simpler if there was a separate class that generates a single line of output. Go with it. You've got your regression test to catch you if you screw anything up.
Work relentlessly to remove dependencies (but make sure you've got a higher-level test, like a regression test, to catch you if you make mistakes). If your class creates its own FileStream and writes to the filesystem, change it to take a TextWriter in its constructor instead, so you can write tests that pass in a StringWriter and never touch the file system. Once that's done, you can get rid of the old test that writes a file to disk (but only if you didn't break it while trying to write the new test!) If your class needs a database connection, refactor until you can write a test that passes in fake data. Etc.
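As a sketch of that TextWriter refactoring, assuming an invented record format and class name purely for illustration:

    using System;
    using System.IO;
    using Microsoft.VisualStudio.TestTools.UnitTesting;

    public class ExportFileGenerator
    {
        // Writing to a TextWriter instead of creating its own FileStream lets tests pass a StringWriter.
        public void Write(TextWriter writer, string[] names)
        {
            foreach (var name in names)
            {
                writer.WriteLine("REC|" + name.ToUpperInvariant());
            }
        }
    }

    [TestClass]
    public class ExportFileGeneratorTests
    {
        [TestMethod]
        public void Write_FormatsEachNameAsRecordLine()
        {
            var generator = new ExportFileGenerator();
            using (var output = new StringWriter())
            {
                generator.Write(output, new[] { "alice", "bob" });

                var lines = output.ToString()
                    .Split(new[] { Environment.NewLine }, StringSplitOptions.RemoveEmptyEntries);
                Assert.AreEqual("REC|ALICE", lines[0]);
                Assert.AreEqual("REC|BOB", lines[1]);
            }
        }
    }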
