What mocking frameworks in C# allow for parallel execution? (Thread safety) I was trying RhinoMocks, but it doesn't work well with parallel execution. These tests are not using external resources.
Background: I'm easing other developers into unit testing with MSTest and wanted to use multiple cores. That part appears to run properly.
Update:
Ok, so I originally posted the outline below because I was confident it had to be something within your tests, such as shared state -- it couldn't possibly be RhinoMocks, right? Well, to your point, I think there really is something odd with the framework. My sincere apologies!
Using Reflector to casually look at the RhinoMock source, we can see that MockRepository has an internal static field to represent the current mock repository. Looking at the usages of this field, it appears that this repository is used extensively by the dynamic proxy interceptor which strongly suggests nothing is thread-safe.
Here's an example usage that would use that static field.
LastCall.IgnoreArguments();
So, to answer your question - I would recommend you try out Moq. Based on the source, it appears that the Fluent API is static, but it's marked with [ThreadStatic], so it should be okay. Of course this comes with the caveat that you're doing the initial set up of the mocks on the current thread and not across threads.
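To make the Moq suggestion concrete, here is a minimal sketch of a set-up-then-verify cycle. The `ICalculator` interface is purely hypothetical, invented for illustration; the key point is that all of the setup happens on the current thread, per the [ThreadStatic] caveat above.

```csharp
using Moq;

// Hypothetical interface, just to have something to mock.
public interface ICalculator
{
    int Add(int a, int b);
}

public static class MoqExample
{
    public static int Demo()
    {
        // Do the setup on the current thread (see the [ThreadStatic] caveat).
        var mock = new Mock<ICalculator>();
        mock.Setup(c => c.Add(2, 3)).Returns(5);

        int result = mock.Object.Add(2, 3);

        // Verify the call happened exactly once.
        mock.Verify(c => c.Add(2, 3), Times.Once());
        return result;
    }
}
```

Because each `Mock<T>` instance carries its own state (rather than routing everything through one static repository the way RhinoMocks does), independent mocks created in independent tests shouldn't trample each other under parallel execution.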
Let me know how that works.
How to setup Parallel Test Execution in Visual Studio 2010:
This link outlines the mechanism for enabling parallel test execution. In short, you need to manually edit the .testsettings file and add the parallelTestCount attribute to the Execution node. Setting parallelTestCount to 0 means test execution will be automatically balanced across the available cores on the machine.
<TestSettings>
  <!-- etc -->
  <Execution parallelTestCount="0">
    <!-- etc -->
  </Execution>
  <!-- etc -->
</TestSettings>
There are a few caveats:
It only works for standard MSTest unit tests. (There are a variety of different test types you can create: ordered, coded UI, custom test extensions, etc. Standard unit tests are the TestClass/TestMethod variety.)
It doesn't support data adapters, so there's no support for parallel database-driven tests.
It must run locally. (You can't run tests distributed over multiple machines in parallel.)
Lastly, and this is the most important one -- the onus is on you to ensure that your tests are thread safe. This means if you use any kind of static singletons or shared-state, you're going to run into issues.
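To illustrate that last caveat, here's a deliberately broken (hypothetical) example. Each test passes when run serially, but under parallelTestCount > 1 the static field can be mutated by both tests at once:

```csharp
using Microsoft.VisualStudio.TestTools.UnitTesting;

// Shared state -- dangerous under parallel test execution.
public static class Counter
{
    public static int Value;
}

[TestClass]
public class CounterTests
{
    [TestMethod]
    public void IncrementsToOne()
    {
        Counter.Value = 0;
        Counter.Value++;
        // Can fail if another test resets Counter.Value concurrently.
        Assert.AreEqual(1, Counter.Value);
    }

    [TestMethod]
    public void IncrementsToTwo()
    {
        Counter.Value = 0;
        Counter.Value += 2;
        Assert.AreEqual(2, Counter.Value);
    }
}
```

The fix is usually to make the state instance-level (a fresh test class instance per test) or to protect it behind proper synchronization.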
Related
I understand that ideal unit tests should not share any context between them but I have a dilemma: I am trying to unit test against a piece of software that only has a single license that can be instantiated at a time.
.NET unit tests seem to run in parallel so when I click "Run all tests" the classes are all run simultaneously and most fail because they can't all have a license.
So I see two questions that are mutually exclusive:
How do I share the context between C# unit testing classes?
OR How do I force .NET unit tests to NOT run in parallel?
Clarification: The licensed software is not what I'm trying to test, it's just the tool I need to DO the test
Normally I'd consider Singleton an anti-pattern since it makes unit testing impossible.
But this is a good use case for a Singleton.
A real singleton, with a private constructor and a static constructor, will run only once and will be thread-safe.
This way you can keep your tests running in parallel.
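Here's a sketch of what that "real" singleton might look like. `LicensedTool` and its `Run` helper are hypothetical names; the CLR guarantees the static initialization runs exactly once, which is what makes construction thread-safe without explicit locking:

```csharp
using System;

// Hypothetical wrapper around the single-license tool.
public sealed class LicensedTool
{
    // Initialized exactly once by the runtime; thread-safe by construction.
    private static readonly LicensedTool instance = new LicensedTool();

    // Explicit static constructor keeps initialization lazy (defeats beforefieldinit).
    static LicensedTool() { }

    // Private constructor: acquire the single license here, e.g.
    // license = LicenseServer.Acquire();  (illustrative only)
    private LicensedTool() { }

    public static LicensedTool Instance
    {
        get { return instance; }
    }

    // Serialize access so parallel tests don't fight over the one license.
    private readonly object gate = new object();

    public T Run<T>(Func<T> action)
    {
        lock (gate) { return action(); }
    }
}
```

Tests can then stay parallel, each calling `LicensedTool.Instance.Run(...)`, and the lock serializes only the license-bound work.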
I'm not sure if this is what you're looking for, but would it work to run all the tests at the same time, with each of them running in a separate AppDomain?
For reference I used the cross domain delegates where you could pass your actual test: https://msdn.microsoft.com/en-us/library/system.appdomain.docallback(v=vs.110).aspx
Let me know if that works!
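A minimal sketch of the cross-domain idea, using the `DoCallBack` API from the link above. Note this is .NET Framework only (`AppDomain.CreateDomain` is not supported on .NET Core / .NET 5+), and the delegate must be static or otherwise serializable because it is marshalled across the domain boundary:

```csharp
using System;

public static class IsolatedRunner
{
    // Runs a test body in its own AppDomain, then tears the domain down.
    public static void RunIsolated(CrossAppDomainDelegate testBody)
    {
        AppDomain domain = AppDomain.CreateDomain("IsolatedTestDomain");
        try
        {
            domain.DoCallBack(testBody);
        }
        finally
        {
            AppDomain.Unload(domain);
        }
    }
}

// Usage (MyTests.SomeStaticTestMethod is a hypothetical static method):
// IsolatedRunner.RunIsolated(MyTests.SomeStaticTestMethod);
```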
I am working on writing a tool which
- Sets up a connection to SQL and runs a series of stored procedures
- Hits the file system to verify and also delete files
- Talks to other subsystems through exposed APIs
I am new to the concept of TDD but have been doing a lot of reading on it. I wanted to apply TDD to this development, but I am stuck. There are a lot of interactions with external systems which need to be mocked/stubbed or faked. What I am finding difficult is the proper approach to take in doing this with TDD. Here is a sample of what I would like accomplished:
public class MyConfigurator
{
    public static void Start()
    {
        CheckSystemIsLicenced(); // will throw if it's not licenced; makes a call to a library owned by the company
        CleanUpFiles(); // clean up several directories
        CheckConnectionToSql(); // ensure a connection to SQL can be made
        ConfigureSystemToolsOnDatabase(); // runs a set of stored procedures; a range of checks are also implemented and will throw if something goes wrong
    }
}
After this I have another class which cleans up the system if things have gone wrong. For the purposes of this question it's not that relevant, but it essentially just clears certain tables and fixes up the database so that the tool can run again from scratch to do its configuration tasks.
It almost appears that when using TDD, the only tests I end up having are things like (assuming I am using FakeItEasy):
A.CallTo(()=>fakeLicenceChecker.CheckSystemIsLicenced("lickey")).MustHaveHappened();
It just ends up being a whole lot of tests which are nothing but "MustHaveHappened". Am I doing something wrong? Is there a different way to start this project using TDD? Or is this a particular scenario where perhaps TDD is not really recommended? Any guidance would be greatly appreciated.
In your example, if the arrangement of the unit test shows lickey as the input, then it is reasonable to assert that the endpoint has been called with the proper value. In more complex scenarios, the input-to-assert flow covers more subsystems so that the test itself doesn't seem as trivial. You might set up an ID value as input and test that down the line you are outputting a value for an object that is deterministically related to the input ID.
One aspect of TDD is that the code changes while the tests do not - except for functionally equivalent refactoring. So your first tests would naturally arrange and assert data at the outermost endpoints. You would start with a test that writes a real file to the filesystem, calls your code, and then checks to see that the file is deleted as expected. Of course, the file system is a messy workspace for portable testing, so you might decide early on to abstract the file system by one step. Ditto with the database by using EF and mocking your DbContext or by using a mocked repository pattern. These abstractions can be pre-TDD application architecture decisions.
Something I do frequently is to use utility code that starts with an IFileSystem interface that declares methods that mimic a lot of what is available in System.IO.File. In production I use an implementation of IFileSystem that just passes through to File.XXX() methods. Then you can mock up and verify the interface instead of trying to setup and cleanup real files.
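A minimal sketch of that idea. The interface and member names simply mirror `System.IO.File` and are illustrative, not a published library:

```csharp
using System.IO;

// Thin abstraction over the file system, mimicking System.IO.File.
public interface IFileSystem
{
    bool FileExists(string path);
    void DeleteFile(string path);
    string ReadAllText(string path);
}

// Production implementation: a straight pass-through to File.XXX().
public class PhysicalFileSystem : IFileSystem
{
    public bool FileExists(string path) => File.Exists(path);
    public void DeleteFile(string path) => File.Delete(path);
    public string ReadAllText(string path) => File.ReadAllText(path);
}
```

In tests you'd fake the interface instead of touching the disk, e.g. with FakeItEasy: `var fs = A.Fake<IFileSystem>(); A.CallTo(() => fs.FileExists("config.xml")).Returns(true);` and then verify `DeleteFile` was called with the expected path.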
In this particular method the only thing you can test is that the methods were called. It's ok to do what you are doing by asserting the mock classes. It's up to you to determine if this particular test is valuable or not. TDD assumes tests for everything, but I find it to be more practical to focus your testing on scenarios where it adds value. Hard for others to make that determination, but you should trust yourself to make the call in each specific scenario.
I think integration tests would add the most bang for the buck. Use the real DB and file system.
If you have complex logic in the tool, then you may want to restructure the tool design to abstract out the DB and fileSystem and write the unit tests with mocks. From the code snippet you posted, it looks like a simple script to me.
We are writing an app in MVC/C# on VS2013.
We use Selenium WebDriver for UI tests and have written a framework around it to make the process more declarative and less procedural. Here is an example:
[Test, Category("UITest")]
public void CanCreateMeasureSucceedWithOnlyRequiredFields()
{
    ManageMeasureDto dto = new ManageMeasureDto()
    {
        Name = GetRandomString(10),
        MeasureDataTypeId = MeasureDataTypeId.FreeText,
        DefaultTarget = GetRandomString(10),
        Description = GetRandomString(100)
    };
    new ManageScreenUITest<MeasureConfigurationController, ManageMeasureDto>(
        c => c.ManageMeasure(dto), WebDrivers)
        .Execute(ManageScreenExpectedResult.SavedSuccessfullyMessage);
}
The Execute method takes the values in the DTO and calls selenium.SendKeys and a bunch of other methods to actually perform the test, submit the form, and assert that the response contains what we want.
We are happy with the results of this framework, but it occurs to me that something similar could also be used to create load-testing scenarios.
I.e., I could have another implementation of the UI test framework that uses an HttpWebRequest to issue the requests to the web server.
From there, I could essentially "gang up" my tests to create a scenario, e.g.:
public void MeasureTestScenario()
{
    CanListMeasures();
    CanSearchMeasures();
    measureId = CanCreateMeasure();
    CanEditMeasure(measureId);
}
The missing piece of the puzzle is: how do I run these "scenarios" under load? I.e., I'm after a test runner that can execute 1..N of these tests in parallel, hopefully with things like ramp-up time, random waits in between each scenario, etc. Ideally the count of concurrent tests would be configurable at run time too, i.e., spawn 5 more test threads. Reporting isn't super important because I can log execution times as part of the web request.
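For what it's worth, the runner described above can be sketched with nothing but `Task`s. `LoadRunner` and all its parameters are invented names, not an existing framework; it staggers worker start times across a ramp-up window and sleeps a random "think time" between scenario iterations:

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

public class LoadRunner
{
    private readonly Random random = new Random();

    public void Run(Action scenario, int workers, int iterationsPerWorker,
                    TimeSpan rampUp, TimeSpan maxThinkTime)
    {
        var tasks = new Task[workers];
        for (int i = 0; i < workers; i++)
        {
            // Stagger worker start times across the ramp-up window.
            var startDelay = TimeSpan.FromMilliseconds(
                rampUp.TotalMilliseconds * i / workers);
            tasks[i] = Task.Run(() =>
            {
                Thread.Sleep(startDelay);
                for (int n = 0; n < iterationsPerWorker; n++)
                {
                    scenario();
                    // Random wait between scenarios; Random isn't thread-safe,
                    // so guard it with a lock.
                    int think;
                    lock (random)
                    {
                        think = random.Next((int)maxThinkTime.TotalMilliseconds + 1);
                    }
                    Thread.Sleep(think);
                }
            });
        }
        Task.WaitAll(tasks);
    }
}

// Usage: new LoadRunner().Run(MeasureTestScenario, workers: 5,
//     iterationsPerWorker: 10, rampUp: TimeSpan.FromSeconds(30),
//     maxThinkTime: TimeSpan.FromSeconds(2));
```

Spawning more workers at run time would just mean calling `Run` again with a new scenario delegate while the first batch is still going.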
Maybe there is another approach too, e.g., using C# to control a more traditional load-testing tool?
The real advantage of this method is handling more complicated screens, e.g. a screen that contains collections of objects. I can get a random parent record from the DB, use my existing AutoMapper code to create the nested DTO, have code walk that DTO changing random values, then use my test framework to submit that DTO's values as a web request. Much easier than hand-coding JMeter scripts.
cheers,
dave
While using NUnit as a runner for different tasks/tests is OK, I don't think it's the right tool for performance testing.
It uses reflection, it has overhead, it ends up using the predefined WebClient settings, etc. It's much better if a dedicated performance-testing tool is used. You can use NUnit to configure/start it :)
I disagree that NUnit isn't usable; with the appropriate logging, any functional test runner can be used for performance testing. The overhead doesn't matter when the test code itself is providing data, like when requests are sent and responses are received, or when a page is requested and when it finishes loading. Parsing log files and generating some basic metrics is a pretty trivial task, IMO. If it becomes a huge project, that changes things, but more likely I could get by with a command-line app that takes an hour or two to write.
I think the real question is how much you can leverage by doing it with your normal test runner. If you already have a lot of logging in your test code and can easily invoke it, then using NUnit becomes pretty appealing. If you don't have sufficient logging, or a framework that would support some decent load scenarios, then using dedicated load-testing tools/frameworks would probably be easier.
I am currently working on a C# project which makes use of an SQLite database. For the project I am required to do unit testing, but I was told that unit testing shouldn't involve external files, like database files; instead, the tests should emulate the database.
If I have a function that tests whether something exists in a database, how could this sort of method be tested with unit testing?
In general, it makes life easier if external files are avoided and everything is done in code. There are no rules that say "shouldn't", and sometimes it just makes more sense to have the external dependency -- but only after you have considered how not to have it, and understood the tradeoffs.
Secondly, what Bryan said is a good option and the one I've used before.
In an application that uses a database, there will be at least one component whose responsibility is to communicate with that database. The unit test for that component could involve a mocked database, but it is perfectly valid (and often desirable) to test the component using a real database. After all, the component is supposed to encapsulate and broker communication with that database -- the unit test should test that. There are numerous strategies to perform such unit tests conveniently -- see the list of related SO questions in the sidebar for examples.
The general prescription to avoid accessing databases in unit tests applies to non-database components. Since non-database components typically outnumber database-related components by a wide margin, the vast majority of unit tests should not involve a database. Indeed, if such non-database components required a database to be tested effectively, there is likely a design problem present -- probably improper separation of concerns.
Thus, the principle that unit tests should avoid databases is generally true, but it is not an absolute rule. It is just a (strong) guideline that aids in structuring complex systems. Following the rule too rigidly makes it very difficult to adequately test "boundary" components that encapsulate external systems -- places in which bugs find it very easy to hide! So, the question that one should really be asking oneself when a unit test demands a database is this: is the component under test legitimately accessing the database directly or should it instead collaborate with another that has that responsibility?
This same reasoning applies to the use of external files and other resources in unit tests as well.
With SQLite, you could use an in-memory database. You can stage your database by inserting data and then run your tests against it.
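A minimal sketch of the in-memory approach, assuming the Microsoft.Data.Sqlite package; the table, data, and method name are all illustrative. The in-memory database exists only for the lifetime of the open connection, so staging and querying happen on the same connection:

```csharp
using Microsoft.Data.Sqlite;

public static class InMemoryDbTest
{
    public static bool UserExists(string name)
    {
        using (var connection = new SqliteConnection("Data Source=:memory:"))
        {
            connection.Open(); // the database lives only while this connection is open

            // Stage the schema and data.
            var create = connection.CreateCommand();
            create.CommandText = "CREATE TABLE Users (Name TEXT)";
            create.ExecuteNonQuery();

            var insert = connection.CreateCommand();
            insert.CommandText = "INSERT INTO Users (Name) VALUES ($name)";
            insert.Parameters.AddWithValue("$name", "alice");
            insert.ExecuteNonQuery();

            // Run the existence check under test.
            var query = connection.CreateCommand();
            query.CommandText = "SELECT COUNT(*) FROM Users WHERE Name = $name";
            query.Parameters.AddWithValue("$name", name);
            long count = (long)query.ExecuteScalar();
            return count > 0;
        }
    }
}
```

The same pattern works with System.Data.SQLite; only the connection class name and connection string details differ.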
Once databases get involved, the line between unit testing and integration testing always blurs. Having said that, it is always a nice and very useful test to be able to put something in a database (setup), do your test, and remove it at the end (cleanup). This lets you test one part of your application end to end.
Personally I like to do this in an attribute-driven fashion, by specifying the SQL scripts to run for each test as attributes, like so:
[PreSqlExecute("SetupTestUserDetails.sql")]
[PostSqlExecute("CleanupTestUserDetails.sql")]
public void UpdateUserNameTest()
{
    // ...
}
The connection strings come from the app.config as usual, and can even be a symbolic link to the app.config in your main project.
Unfortunately this isn't a standard feature of the MS test runner that ships with Visual Studio. If you are using PostSharp as your AOP framework, this is easy to do. If not, you can still get the same functionality with the standard MS test runner by using a feature of .NET called "Context Bound Objects". This lets you inject custom code into an object-creation chain to do AOP-like stuff, as long as your objects inherit from ContextBoundObject.
I did a blog post with more details and a small, complete code sample here.
http://www.chaitanyaonline.net/2011/09/25/improving-integration-tests-in-net-by-using-attributes-to-execute-sql-scripts/
I think it's a really bad idea to have unit tests that depend on database information.
Also, I think it's a bad idea to use SQLite for unit tests.
You need to test the objects' protocol, so if you need something in your tests you should create it somewhere in the tests (usually in setUp).
Since it is difficult to remove persistence, the popular way to do it is using SQLite, but always create what you need in the unit tests.
Check this link: Unit Tests And Databases. I think it will be more helpful.
It's best to use a mocking framework to mimic a database. For C#, you can for instance mock the Entity Framework's DbContext. Even the use of SQLite is an outside dependency to your code.
So I've written a class and I have the code to test it, but where should I put that code? I could make a static method Test() for the class, but that doesn't need to be there during production and clutters up the class declaration. A bit of searching told me to put the test code in a separate project, but what exactly would the format of that project be? One static class with a method for each of the classes, so if my class was called Randomizer, the method would be called testRandomizer?
What are some best practices regarding organizing test code?
EDIT: I originally tagged the question with a variety of languages to which I thought it was relevant, but it seems like the overall answer to the question may be "use a testing framework", which is language specific. :D
Whether you are using a test framework (I highly recommend doing so) or not, the best place for the unit tests is in a separate assembly (C/C++/C#) or package (Java).
You will only have access to public and protected classes and methods; however, unit testing usually only tests public APIs anyway.
I recommend you add a separate test project/assembly/package for each existing project/assembly/package.
The format of the project depends on the test framework: for a .NET test project, use VS's built-in test project template, or NUnit if your version of VS doesn't support unit testing; for Java use JUnit; for C/C++ perhaps CppUnit (I haven't tried that one).
Test projects usually contain one static class-init method, one static class tear-down method, one non-static init method shared by all tests, one non-static tear-down method shared by all tests, and one non-static method per test, plus any other methods you add.
The static methods let you copy DLLs, set up the test environment, and clean up the test environment; the non-static shared methods reduce duplicate code; and the actual test methods prepare the test-specific input and expected output and compare them.
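That layout can be sketched with MSTest's standard attributes; the class and test names here are placeholders:

```csharp
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class RandomizerTests
{
    [ClassInitialize] // static: runs once, e.g. copy DLLs, set up the environment
    public static void ClassInit(TestContext context) { }

    [TestInitialize] // non-static: runs before every test (shared setup)
    public void TestInit() { }

    [TestMethod]
    public void NextValue_IsWithinRange()
    {
        // Prepare test-specific input and expected output, then compare.
        Assert.IsTrue(true); // placeholder assertion
    }

    [TestCleanup] // non-static: runs after every test (shared tear-down)
    public void TestTearDown() { }

    [ClassCleanup] // static: runs once, clears up the environment
    public static void ClassTearDown() { }
}
```

NUnit's equivalents are [OneTimeSetUp]/[SetUp]/[Test]/[TearDown]/[OneTimeTearDown].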
Where you put your test code depends on what you intend to do with the code. If it's a stand-alone class that, for example, you intend to make available to others for download and use, then the test code should be a project within the solution. The test code would, in addition to providing verification that the class was doing what you wanted it to do, provide an example for users of your class, so it should be well-documented and extremely clear.
If, on the other hand, your class is part of a library or DLL, and is intended to work only within the ecosystem of that library or DLL, then there should be a test program or framework that exercises the DLL as an entity. Code coverage tools will demonstrate that the test code is actually exercising the code. In my experience, these test programs are, like the single class program, built as a project within the solution that builds the DLL or library.
Note that in both of the above cases, the test project is not built as part of the standard build process. You have to build it specifically.
Finally, if your class is to be part of a larger project, your test code should become a part of whatever framework or process flow has been defined for your greater team. On my current project, for example, developer unit tests are maintained in a separate source control tree that has a structure parallel to that of the shipping code. Unit tests are required to pass code review by both the development and test team. During the build process (every other day right now), we build the shipping code, then the unit tests, then the QA test code set. Unit tests are run before the QA code and all must pass. This is pretty much a smoke test to make sure that we haven't broken the lowest level of functionality. Unit tests are required to generate a failure report and exit with a negative status code. Our processes are probably more formal than many, though.
In Java you should use JUnit 4, either by itself or (I think better) with an IDE. We have used three environments: Eclipse, NetBeans and Maven (with and without an IDE). There can be some slight incompatibilities between these if not deployed systematically.
Generally all tests are in the same project but under a different directory/folder. Thus a class:
org.foo.Bar.java
would have a test
org.foo.BarTest.java
These are in the same package (org.foo) but would be organized in directories:
src/main/java/org/foo/Bar.java
and
src/test/java/org/foo/BarTest.java
These directories are universally recognised by Eclipse, NetBeans and Maven. Maven is the pickiest, whereas Eclipse does not always enforce strictness.
You should probably avoid calling other classes TestPlugh or XyzzyTest, as some (old) tools will pick these up as containing tests even if they don't.
Even if you only have one test for your method (and most test authorities would expect more to exercise edge cases) you should arrange this type of structure.
EDIT: Note that Maven is able to create distributions without tests even if they are in the same package. By default, Maven also requires all tests to pass before the project can be deployed.
Most setups I have seen or used have a separate project that contains the tests. This makes it a lot easier and cleaner to work with. As a separate project, it's easy to deploy your code without having to worry about the tests being part of the live system.
As testing progresses, I have seen separate projects for unit tests, integration tests and regression tests. One of the main ideas for this is to keep your unit tests running as fast as possible. Integration & regression tests tend to take longer due to the nature of their tests (connecting to databases, etc...)
I typically create a parallel package structure in a distinct source tree in the same project. That way your tests have access to public, protected and even package-private members of the class under test, which is often useful to have.
For example, I might have
myproject
  src
    main
      com.acme.myapp.model
        User
      com.acme.myapp.web
        RegisterController
    test
      com.acme.myapp.model
        UserTest
      com.acme.myapp.web
        RegisterControllerTest
Maven does this, but the approach isn't particularly tied to Maven.
This would depend on the Testing Framework that you are using. JUnit, NUnit, some other? Each one will document some way to organize the test code. Also, if you are using continuous integration then that would also affect where and how you place your test. For example, this article discusses some options.
Create a new project in the same solution as your code.
If you're working with C#, then Visual Studio will do this for you if you select Test > New Test... It has a wizard which will guide you through the process.
Hmm, you want to test a random number generator... it may be better to create a strong mathematical proof of the algorithm's correctness, because otherwise you must be sure that every sequence ever generated has the desired distribution.
Create separate projects for unit-tests, integration-tests and functional-tests. Even if your "real" code has multiple projects, you can probably do with one project for each test-type, but it is important to distinguish between each type of test.
For the unit-tests, you should create a parallel namespace-hierarchy. So if you have crazy.juggler.drummer.Customer, you should unit-test it in crazy.juggler.drummer.CustomerTest. That way it is easy to see which classes are properly tested.
Functional- and integration-tests may be harder to place, but usually you can find a proper place. Tests of the database-layer probably belong somewhere like my.app.database.DatabaseIntegrationTest. Functional-tests might warrant their own namespace: my.app.functionaltests.CustomerCreationWorkflowTest.
But tip #1: be strict about separating the various kinds of tests. Especially be sure to keep the collection of unit tests separate from the integration tests.
In the case of C# and Visual Studio 2010, you can create a test project from the templates which will be included in your project's solution. Then, you will be able to specify which tests to fire during the building of your project. All tests will live in a separate assembly.
Otherwise, you can use the NUnit assembly, import it into your solution, and start creating test methods for all the objects you need to test. For bigger projects, I prefer to locate these tests in a separate assembly.
You can generate your own tests but I would strongly recommend using an existing framework.