Right now my Coded UI Tests use their app.config to determine the domain they execute in, which has a 1-1 relationship with environment. To simplify it:
www.test.com
www.UAT.com
www.prod.com
and in App.config I have something like:
<configuration>
  <appSettings>
    <add key="EnvironmentURLMod" value="test"/>
  </appSettings>
</configuration>
and to run the test in a different environment, I manually change the value between runs. For instance, I open the browser like this:
browserWindow.NavigateToUrl(new Uri("http://www."
    + ConfigurationManager.AppSettings.Get("EnvironmentURLMod")
    + ".com"));
Clearly this is inelegant. I suppose I had a vision where we'd drop in a new app.config for each run, but as a spoiler, this test will be run in ~10 environments, not 3, and which environments it runs in may change.
I know I could decouple these environment URL modifications to yet another XML file, and make the tests access them sequentially in a data-driven scenario. But even this seems like it's not quite what I need, since if one environment fails then the whole test collapses. I've seen Environment Variables as a suggestion, but this would require creating a test agent for each environment, modifying their registries, and running the tests on each of them. If that's what it takes then sure, but it seems like an enormous amount of VM bandwidth to be used for what's a collection of strings.
In an ideal world, I would like to tie these URL mods to something like Test Settings, MTM environments, or builds. I want to execute the suite of tests for each domain and report separately.
In short, what's the best way to parameterize these tests? Is there a way that doesn't involve queuing new builds, or dropping config files? Is Data Driven Testing the answer? Have I structured my solution incorrectly? This seems like it should be such a common scenario, yet my googling doesn't quite get me there.
Any and all help appreciated.
The answer here is data-driven testing, and unfortunately there's no total silver bullet, even if there is a "better than most" option.
Using any data source lets you iterate through a test in multiple environments (or with any other variable you can think of) and essentially return 3 different test results - one for each permutation or data row. However, you'll have to update your assertions to show which environment you're currently executing in, because the test results only show something like "Data Row 0" by default. If the test passes, you get no clue as to what was actually in the data row for that run unless you embed this information in the action log. I'm lucky that my use case does this automatically, since I'm just using a URL mod, but other people may need to do it themselves.
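For example, a minimal sketch of surfacing the environment in the assertion message (the loginSucceeded flag and RunLoginScenario helper are hypothetical; the data source wiring is shown a little further down):

[TestMethod]
public void Login_Succeeds_In_Current_Environment()
{
    // In a data-driven run, TestContext.DataRow holds the current row's values.
    string environment = TestContext.DataRow["EnvironmentURLMod"].ToString();

    bool loginSucceeded = RunLoginScenario(environment); // hypothetical helper

    // Embed the environment so a failing result (or the action log) tells you which row it was.
    Assert.IsTrue(loginSucceeded,
        string.Format("Login check failed in environment '{0}'", environment));
}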
To allow on-the-fly changing of which environments we're testing in, we chose to use a TestCase data source. This has a lot of flexibility - potentially more than using a database or XML, for instance - but it comes with its own downsides. Like all data-driven scenarios, you essentially have to hard-code the test case ID into the attribute above your test method (because it's treated as a property). I was hoping we could at least drop an app.config into the build drop location when we wanted to change which test case we used, but it looks like we're going to have to do a find and replace across the solution instead.
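As a rough sketch, wiring a Coded UI test to a TFS test case as its data source looks something like this (the collection URL, team project name, and test case ID 1234 are placeholders for your own values; each parameter row on the test case becomes one iteration of the test):

[CodedUITest]
public class EnvironmentNavigationTests
{
    public TestContext TestContext { get; set; }

    [DataSource("Microsoft.VisualStudio.TestTools.DataSource.TestCase",
        "http://tfsserver:8080/tfs/DefaultCollection;MyTeamProject",
        "1234",
        DataAccessMethod.Sequential)]
    [TestMethod]
    public void NavigateToEachEnvironment()
    {
        // Read the current data row's URL mod, as in the question's snippet.
        string urlMod = TestContext.DataRow["EnvironmentURLMod"].ToString();

        var browserWindow = BrowserWindow.Launch(new Uri("http://www." + urlMod + ".com"));
    }
}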
If anyone knows of a better way to decouple the test ID or any other part of the connection string from the code, I'll give you an answer here. For anyone else, you can find more information on MSDN.
Related
I'm new to .NET and hence don't know whether the approach I'm following below is correct or whether there is a better way to do it. Can someone please suggest?
I want to keep a different set of configuration values for each environment (DEV, QA, UAT, etc.) and, based on user input, load that environment's config and start my NUnit tests.
I'm planning to create a different resource file for each of these (for example QA.resx, DEV.resx, etc.) and then load the specific resource file based on user input.
For example:
QA.resx will have
hostname=sample.qa.com
port=1234
DEV.resx will have
hostname=sample.dev.com
port=4321
And then at runtime, if I specify something like env=DEV, it should load the configuration from DEV.resx and start running the test cases.
Is this a good approach?
Is this a good approach?
I don't think so.
First of all, your unit tests should not depend on the environment you're using. A true unit test should not have any external dependency: no database, file system, external services, etc. Thus unit test execution should be the same whether it's launched on a developer workstation or on a CI server.
If your application requires different configurations for different environments (a very common case), the best choice is to use config transformations. Check this article for details.
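For illustration, a minimal sketch of a transform file, assuming the base App.config defines the hostname and port keys from your example (note that App.config transforms, unlike Web.config ones, typically need a build extension such as SlowCheetah):

<?xml version="1.0" encoding="utf-8"?>
<!-- App.QA.config: overrides applied on top of App.config for the QA build configuration -->
<configuration xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform">
  <appSettings>
    <add key="hostname" value="sample.qa.com"
         xdt:Transform="SetAttributes" xdt:Locator="Match(key)" />
    <add key="port" value="1234"
         xdt:Transform="SetAttributes" xdt:Locator="Match(key)" />
  </appSettings>
</configuration>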
I am debugging a .NET (C#) app in VS2013. I am calling a REST service and it returns data as a list of phone calls that have been made between two given dates.
So the structure of the list is something like CallDetails.Calls, and the Calls property has several child properties such as call duration, timeofcall, priceperminute, etc.
Since I am debugging the app, I want to avoid hitting the server where the REST service is hosted every time.
So my question is simply: is there a way that, once I have received the list of data items, I can (kind of) copy and paste the data into a file and later use it as a statically defined list instead of fetching it from the server?
In case someone wonders why I would want to do that: there is some caching of all incoming requests on the server, and after a while it gets full, so the server stops returning data and ultimately a timeout error occurs.
I know that should be solved on the server somehow, but that is not possible today, which is why I am asking this question.
Thanks
You could create a unit test using MSTest or NUnit. I know unit tests are scary if you haven't used them before, but for simple automated tests they are awesome. You don't have to worry about lots of the stuff people talk about for "good unit testing" in order to get started testing this one item. Once you get the list from the debugger while testing, you could then
save it out to a text file,
manually (one time) write the code to load it back from that text file,
use that code as the set-up for your test (a minimal sketch of this appears below).
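For instance, a minimal sketch using Json.NET (Newtonsoft.Json); the Call class, the CallDuration property, and the file path are placeholders for your own types and locations:

using System.Collections.Generic;
using System.IO;
using System.Linq;
using Microsoft.VisualStudio.TestTools.UnitTesting;
using Newtonsoft.Json;

[TestClass]
public class CallDetailsTests
{
    private const string SavedCallsPath = @"C:\temp\calls.json";
    private List<Call> _calls; // Call is your existing data class

    // One-off: call this once (e.g. from the Immediate window while debugging)
    // to capture the live service response to disk.
    public static void SaveCalls(List<Call> liveCalls)
    {
        File.WriteAllText(SavedCallsPath,
            JsonConvert.SerializeObject(liveCalls, Formatting.Indented));
    }

    [TestInitialize]
    public void LoadSavedCalls()
    {
        // From now on, read the captured data instead of hitting the REST service.
        _calls = JsonConvert.DeserializeObject<List<Call>>(File.ReadAllText(SavedCallsPath));
    }

    [TestMethod]
    public void Every_Call_Has_A_Positive_Duration()
    {
        Assert.IsTrue(_calls.All(c => c.CallDuration > 0),
            "Expected every captured call to have a positive duration.");
    }
}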
MSTest tutorial: https://msdn.microsoft.com/en-us/library/ms182524%28v=vs.90%29.aspx
NUnit is generally considered superior to MSTest, and in particular it has a feature that is better for running a set of data through the same test. Based on what you've said, I don't think you need it (and I've never used it myself), but if you do want to go that route, the keyword to look for after the quickstart guide is testcase. http://nunitasp.sourceforge.net/quickstart.html
I am working on writing a tool which
- sets up a connection to SQL and runs a series of stored procedures
- hits the file system to verify and also delete files
- talks to other subsystems through exposed APIs
I am new to the concept of TDD but have been doing a lot of reading on it. I wanted to apply TDD to this development but I am stuck. There are a lot of interactions with external systems which need to be mocked/stubbed or faked. What I am finding difficult is the proper approach to take in doing this with TDD. Here is a sample of what I would like accomplished.
public class MyConfigurator
{
    public static void Start()
    {
        CheckSystemIsLicenced(); // will throw if it's not licenced. Makes a call to a library owned by the company
        CleanUpFiles(); // clean up several directories
        CheckConnectionToSql(); // ensure a connection to SQL can be made
        ConfigureSystemToolsOnDatabase(); // runs a set of stored procedures. A range of checks are also implemented and will throw if something goes wrong.
    }
}
After this I have another class which cleans up the system if things have gone wrong. For the purpose of this question, its not that relevant but it essentially will just clear certain tables and fix up database so that the tool can run again from scratch to do its configuration tasks.
It almost appears that here, when using TDD, the only tests I end up with are things like this (assuming I am using FakeItEasy):
A.CallTo(() => fakeLicenceChecker.CheckSystemIsLicenced("lickey")).MustHaveHappened();
It just ends up being a whole lot of tests which appear to be nothing but "MustHaveHappened". Am I doing something wrong? Is there a different way to start this project using TDD? Or is this a particular scenario where perhaps TDD is not really recommended? Any guidance would be greatly appreciated.
In your example, if the arrangement of the unit test shows lickey as the input, then it is reasonable to assert that the endpoint has been called with the proper value. In more complex scenarios, the input-to-assert flow covers more subsystems so that the test itself doesn't seem as trivial. You might set up an ID value as input and test that down the line you are outputting a value for an object that is deterministically related to the input ID.
One aspect of TDD is that the code changes while the tests do not - except for functionally equivalent refactoring. So your first tests would naturally arrange and assert data at the outermost endpoints. You would start with a test that writes a real file to the filesystem, calls your code, and then checks to see that the file is deleted as expected. Of course, the file system is a messy workspace for portable testing, so you might decide early on to abstract the file system by one step. Ditto with the database by using EF and mocking your DbContext or by using a mocked repository pattern. These abstractions can be pre-TDD application architecture decisions.
Something I do frequently is to use utility code that starts with an IFileSystem interface that declares methods that mimic a lot of what is available in System.IO.File. In production I use an implementation of IFileSystem that just passes through to File.XXX() methods. Then you can mock up and verify the interface instead of trying to setup and cleanup real files.
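A rough sketch of that idea (the interface members, the FileCleaner class, and the paths are illustrative, not a standard library API), plus how a FakeItEasy test might verify against it:

using System.IO;
using FakeItEasy;
using Microsoft.VisualStudio.TestTools.UnitTesting;

// Thin seam over the parts of System.IO.File the tool actually needs.
public interface IFileSystem
{
    bool FileExists(string path);
    void DeleteFile(string path);
}

// Production implementation just passes through to File.XXX().
public class PhysicalFileSystem : IFileSystem
{
    public bool FileExists(string path) { return File.Exists(path); }
    public void DeleteFile(string path) { File.Delete(path); }
}

// The clean-up code takes the seam as a dependency instead of touching the disk directly.
public class FileCleaner
{
    private readonly IFileSystem _fileSystem;
    public FileCleaner(IFileSystem fileSystem) { _fileSystem = fileSystem; }

    public void CleanUp(string path)
    {
        if (_fileSystem.FileExists(path))
            _fileSystem.DeleteFile(path);
    }
}

[TestClass]
public class FileCleanerTests
{
    [TestMethod]
    public void CleanUp_Deletes_The_File_When_It_Exists()
    {
        var fs = A.Fake<IFileSystem>();
        A.CallTo(() => fs.FileExists(@"C:\temp\stale.txt")).Returns(true);

        new FileCleaner(fs).CleanUp(@"C:\temp\stale.txt");

        // Verify against the fake instead of setting up and cleaning up real files.
        A.CallTo(() => fs.DeleteFile(@"C:\temp\stale.txt")).MustHaveHappened();
    }
}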
In this particular method the only thing you can test is that the methods were called. It's ok to do what you are doing by asserting the mock classes. It's up to you to determine if this particular test is valuable or not. TDD assumes tests for everything, but I find it to be more practical to focus your testing on scenarios where it adds value. Hard for others to make that determination, but you should trust yourself to make the call in each specific scenario.
I think integration tests would add the most bang for buck. Use the real DB and FileSystem.
If you have complex logic in the tool, then you may want to restructure the tool's design to abstract out the DB and file system and write unit tests with mocks. From the code snippet you posted, it looks like a simple script to me.
I've recently started pushing for TDD where I work. So far things are going well. We're writing tests, we're having them run automatically on commit, and we're always looking to improve our process and tools.
One thing I've identified that could be improved is how we set up our test data. In strict unit tests, we often find ourselves instantiating and populating complex CLR objects. This is a pain, and typically the test then only covers a handful of cases.
What I'd like to push for is Data Driven tests. I think we should be able to load our test data from files or maybe even generate them on the fly from a schema (though I would only consider doing it on the fly if I could generate every possible configuration of an object, and that number of configurations was small). And there is my problem.
I have yet to find a good strategy for generating test data for C# CLR objects.
I looked into generating XML data from XSDs and then loading that into my tests using the DataSourceAttribute. This seemed like a good approach, but I ran into trouble generating the XSD files. xsd.exe falls over because our classes have interface members. I also tried using svcutil.exe on our assembly, but because our code is monolithic the output is huge and tricky (many interdependent .xsd files).
What are other techniques for generating test data? Ideally the generator would follow a schema (maybe an xsd, but preferably the class itself), and could be scripted.
Technical notes (not sure if this is even relevant, but it can't hurt):
We're using Visual Studio's unit testing framework (defined in Microsoft.VisualStudio.TestTools.UnitTesting).
We're using RhinoMocks
Thanks
Extra Info
One reason I'm interested in this is to test an Adapter class we have. It takes a complex and convoluted legacy Entity and converts it to a DTO. The legacy Entity is a total mess of spaghetti and cannot be easily split up into logical sub-units defined by interfaces (as suggested). That would be a nice approach, but we don't have that luxury.
I would like to be able to generate a large number of configurations of this legacy Entity and run them through the adapter. The larger the number of configurations, the more likely my test will fail when the next developer (oblivious to 90% of the application) changes the schema of the legacy Entity.
UPDATE
Just to clarify, I am not looking to generate random data for each execution of my tests. I want to be able to generate data to cover multiple configurations of complex objects. I want to generate this data offline and store it as static input for my tests.
I just reread my question and noticed that I had in fact originally asked for random, on-the-fly generation. I'm surprised I asked for that! I've updated the question to fix that. Sorry about the confusion.
What you need is a tool such as NBuilder (http://code.google.com/p/nbuilder).
This allows you to describe objects, then generate them. This is great for unit testing.
Here is a very simple example (but you can make it as complex as you want):
var products = Builder<Product>
    .CreateListOfSize(10)
    .All()
        .With(x => x.Title = "some title")
        .And(x => x.AnyProperty = RandomlyGeneratedValue())
        .And(x => x.AnyOtherProperty = OtherRandomlyGeneratedValue())
    .Build();
In my experience, what you're looking to accomplish ends up actually being harder to implement and maintain than generating objects in code on a test-by-test basis.
I worked with a client that had a similar issue, and they ended up storing their objects as JSON and deserializing them, with the expectation that it would be easier to maintain and extend. It wasn't. You know what you don't get when editing JSON? Compile-time syntax checking. They just ended up with tests breaking because of JSON that failed to deserialize due to syntax errors.
One thing you can do to reduce your pain is to code to small interfaces. If you have a giant object with a ton of properties, a given method that you'd like to test will probably only need a handful. So instead of your method taking SomeGiantClass, have it take a class that implements ITinySubset. Working with the smaller subset will make it much more obvious what things need to be populated in order for your test to have any validity.
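A minimal sketch of that idea, reusing the SomeGiantClass/ITinySubset names from above (the members and the InvoiceFormatter class are made up for illustration):

// The small interface declares only what the method under test actually uses.
public interface ITinySubset
{
    string CustomerName { get; }
    string BillingAddress { get; }
}

// The giant legacy class exposes far more, but can implement the narrow slice.
public class SomeGiantClass : ITinySubset
{
    public string CustomerName { get; set; }
    public string BillingAddress { get; set; }
    // ...dozens of other properties and members...
}

public class InvoiceFormatter
{
    // Taking ITinySubset makes it obvious which two fields a test must populate;
    // a test can pass a tiny stub instead of building up the whole giant object.
    public string FormatHeader(ITinySubset billing)
    {
        return billing.CustomerName + " - " + billing.BillingAddress;
    }
}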
I agree with the other folks who have said that generating random data is a bad idea. I'd say it's a really bad idea. The goal of unit testing is repeatability, which goes zooming out the window the second you generate random data. It's a bad idea even if you're generating the data "offline" and then feeding it in. You have no guarantee that the test object that you generated is actually testing anything worthwhile that's not covered in other tests, or if it's testing valid conditions.
More tests doesn't mean that your code is better. 100% code coverage doesn't mean that your code is bug-free and working properly. You should aim to test the logic that you know matters to your application, not try to cover every single imaginable case.
This is a little different than what you are talking about, but have you looked at Pex? Pex will attempt to generate inputs that cover all of the paths of your code.
http://research.microsoft.com/en-us/projects/Pex/
Generating test data is often an inappropriate and not very useful way of testing - particularly if you are generating a different set of test data each time (e.g. randomly), as sometimes a test run will fail and sometimes it won't. It also may be totally irrelevant to what you're doing and will make for a confusing group of tests.
Tests are supposed to help document and formalise the specification of a piece of software. If the boundaries of the software are found by bombarding the system with data, then they won't be documented properly. Tests also provide a way of communicating through code that is different from the code itself, and as a result they are often most useful when they are very specific and easy to read and understand.
That said, if you really want to do it, you can typically write your own generator as a test class. I've done this a few times in the past and it works nicely, with the added bonus that you can see exactly what it's doing. You also already know the constraints of the data, so there's no problem with trying to generalise an approach.
From what you say, the pain you are having is in setting up objects. This is a common testing issue. I'd suggest focusing on that by making fluent builders for your common object types - this gives you a nice way of filling in less detail every time (you would typically provide only the data that is interesting for a given test case and have valid defaults for everything else). They also reduce the number of dependencies on constructors in test code, which means your tests are less likely to get in the way of refactoring later on if you need to change them. You can really get a lot of mileage out of that approach. You can extend it further by having common setup code for builders: once you have a lot of them, that becomes a natural point for developers to hang reusable code.
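A minimal sketch of such a builder (the Customer type and its defaults are purely illustrative):

public class Customer
{
    public string Name { get; set; }
    public string Country { get; set; }
    public bool IsActive { get; set; }
}

public class CustomerBuilder
{
    // Valid defaults mean a test only has to override what it actually cares about.
    private string _name = "Default Name";
    private string _country = "GB";
    private bool _isActive = true;

    public CustomerBuilder Named(string name) { _name = name; return this; }
    public CustomerBuilder In(string country) { _country = country; return this; }
    public CustomerBuilder Inactive() { _isActive = false; return this; }

    public Customer Build()
    {
        return new Customer { Name = _name, Country = _country, IsActive = _isActive };
    }
}

// In a test, only the interesting detail for that case is spelled out:
//     var customer = new CustomerBuilder().In("DE").Inactive().Build();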
In one system I've worked on, we ended up aggregating all these sorts of things into something which could switch different seams in the application on and off (file access etc.), provided builders for objects, and set up a comprehensive set of fake view classes (for WPF) to sit on top of our presenters. It effectively provided a test-friendly interface for scripting and testing the entire application, from very high-level things down to very low-level things. Once you get there you're really in the sweet spot: you can write tests that effectively mirror button clicks in the application at a very high level, and the code stays easy to refactor because there are few direct dependencies on your real classes in the tests.
Actually, there is a Microsoft way of expressing object instances in markup, and that is XAML.
Don't be scared off by the WPF paradigm in the documentation. All you need to do is use the correct classes in your unit tests to load the objects.
Why would I do this? Because a Visual Studio project will automatically give you XAML syntax highlighting and probably IntelliSense support when you add such a file.
What would be a small problem? Markup element classes must have parameterless constructors. But that problem is always present, and there are workarounds (e.g. here).
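A minimal sketch of the general idea, assuming a hypothetical Person POCO in a MyTests namespace and assembly (System.Xaml must be referenced):

using System.Xaml;
using Microsoft.VisualStudio.TestTools.UnitTesting;

namespace MyTests
{
    // XAML element classes need a parameterless constructor and settable properties.
    public class Person
    {
        public string Name { get; set; }
        public int Age { get; set; }
    }

    [TestClass]
    public class XamlTestDataTests
    {
        [TestMethod]
        public void Can_Load_A_Person_From_Markup()
        {
            // The clr-namespace/assembly mapping must match where Person actually lives.
            var xaml = @"<Person xmlns=""clr-namespace:MyTests;assembly=MyTests"" Name=""Ada"" Age=""36"" />";

            var person = (Person)XamlServices.Parse(xaml);

            Assert.AreEqual("Ada", person.Name);
            Assert.AreEqual(36, person.Age);
        }
    }
}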
For reference, have a look at:
Create object from text in XAML, and
How to convert XAML File to objects, and
How to Deserialize XML document.
I wish I could show you something done by me on this matter, but I can't.
I am a TDD noob and I don't know how to solve the following problem.
I have a pretty large class which generates a text file in a specific format, for import into an external system. I am going to refactor this class, and I want to write unit tests before I do.
What should these tests look like? The main goal is to not break the structure of the file. But does that mean I should just compare the contents of the file before and after?
I think you would benefit from a test that I would hesitate to call a "unit test" - although arguably it tests the current text-file-producing "unit". This would simply run the current code and do a diff between its output and a "golden master" file (which you could generate by running the test once and copying to its designated location). If there is much conditional behavior in the code, you may want to run this with several examples, each a different test case. With the existing code, by definition, all the tests should pass.
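A minimal sketch of such a test, assuming MSTest and a hypothetical ExportFileGenerator whose known-good output you've captured once as golden-master.txt:

using System.IO;
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class ExportFileGoldenMasterTests
{
    [TestMethod]
    public void Output_Matches_The_Golden_Master()
    {
        // Generated once from the current, known-good code and checked into source control.
        var expected = File.ReadAllText("golden-master.txt");

        // Hypothetical entry point of the class being refactored.
        var actual = new ExportFileGenerator().Generate(SampleInput());

        Assert.AreEqual(expected, actual, "The export file format changed unexpectedly.");
    }

    private static ExportInput SampleInput()
    {
        // Representative data covering the conditional paths you care about.
        return new ExportInput { /* ... */ };
    }
}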
Now start to refactor. Extract a method - or better, write a test for a method that you can envision extracting, a true unit test - extract the method, and ensure that all tests, for the new small method and for the bigger system, still pass. Lather, rinse, repeat. The system tests give you a safety net that lets you go forward in the refactoring with confidence; the unit tests drive the design of the new code.
There are libraries available to make this kind of testing easier (although it's pretty easy even without them). See http://approvaltests.sourceforge.net/.
In such a case I use the following strategy:
Write a test for each method (just covering its default behavior without any error handling etc.)
Run a code coverage tool and find the blocks not covered by the tests. Write tests covering these blocks.
Do this until you get a code coverage of over 80%
Start refactoring the class (mostly by extracting smaller classes following the separation of concerns principle).
Use Test Driven Development for writing the new classes.
Actually, that's a pretty good place to start (comparing a well known output against what is being generated by the current class).
If the single generator class can produce different results, then create one for each case.
This will ensure that you are not breaking your current generator class.
One thing that might help you is if you have the specification document for the current class. You can use that as the base of your refactoring effort.
If you haven't yet, pick up a copy of Michael Feathers' book "Working Effectively with Legacy Code". It's all about how to add tests to existing code, which is exactly what you're looking for.
But until you finish reading the book, I'd suggest starting with a regression test: create the class, have it write the file to disk, and then compare that file to a "known good" file that you've stashed in your source repository somewhere. If they don't match, fail the test.
Then start looking at the interesting decisions that your class makes. See how you can get them under test. Maybe you extract some complicated if-conditions into public functions that return bool, and you write a battery of tests to prove that, given the right inputs, that function returns the right value. Maybe generation of a particular string has some interesting logic; start testing it.
Along the way, you may find objects that want to get out. For example, you may find that the code (or the tests!) would be simpler if there was a separate class that generates a single line of output. Go with it. You've got your regression test to catch you if you screw anything up.
Work relentlessly to remove dependencies (but make sure you've got a higher-level test, like a regression test, to catch you if you make mistakes). If your class creates its own FileStream and writes to the filesystem, change it to take a TextWriter in its constructor instead, so you can write tests that pass in a StringWriter and never touch the file system. Once that's done, you can get rid of the old test that writes a file to disk (but only if you didn't break it while trying to write the new test!) If your class needs a database connection, refactor until you can write a test that passes in fake data. Etc.
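A minimal sketch of that TextWriter refactoring (the ReportWriter class and its method are illustrative):

using System.IO;
using Microsoft.VisualStudio.TestTools.UnitTesting;

public class ReportWriter
{
    private readonly TextWriter _output;

    // The caller decides where output goes: a StreamWriter in production,
    // a StringWriter in tests.
    public ReportWriter(TextWriter output) { _output = output; }

    public void WriteHeader(string title)
    {
        _output.WriteLine("REPORT: " + title.ToUpperInvariant());
    }
}

[TestClass]
public class ReportWriterTests
{
    [TestMethod]
    public void Header_Is_Uppercased_And_Prefixed()
    {
        var output = new StringWriter();

        new ReportWriter(output).WriteHeader("monthly calls");

        // No file system involved: just inspect the captured string.
        StringAssert.Contains(output.ToString(), "REPORT: MONTHLY CALLS");
    }
}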