How to unit test methods that use threads? - c#

How do you write unit tests for methods that use threads?
In the example below, how would you test the someMethod method?
public class SomeClass {
    private final SomeOtherClass someOtherClassInstance = IoC.getInstance(SomeOtherClass.class);
    private final ExecutorService executorService = Executors.newCachedThreadPool();

    public void someMethod() {
        executorService.execute(new Runnable() {
            @Override
            public void run() {
                someOtherClassInstance.someOtherMethod();
            }
        });
    }
}
Are there any solutions in Java and .NET for this purpose?

Your question really comes down to 'what is a unit test?' (UT). UTs are just one form of automated test. In this case your question implies that the code calls the OS to start a thread, so by definition a test that exercised it would not be a unit test; it would more likely be an integration test.
The reason I bother you with what seems like semantics is that understanding the intent of the test makes it much easier to write the test and structure the code. There is far more working code out there than testable code.
So, how can this code be changed to be unit testable (I will skip the 'why bother' stuff)?
A unit test in C# tests a single type (your class) and, importantly (by definition), nothing else. So your code needs to reflect this. What you want to test is that when you call ABC it does this stuff, and that stuff includes launching a thread. So you want to test that the method which launches a thread is called. The fundamental point here is that your application requires a thread, so that is what you are asserting.
So how? Create a proxy, and perhaps a factory, for the thread creation. Then you can assert in a unit test that it was called and how it was treated. Sounds hard, but it is really easy once you get into the habit.
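For example, here is a minimal sketch of the proxy idea, assuming hypothetical IThreadDispatcher and FakeDispatcher names (they are not part of your code):
public interface IThreadDispatcher
{
    // The production implementation starts a thread or queues the work to a pool.
    void Run(Action work);
}

public class SomeClass
{
    private readonly IThreadDispatcher dispatcher;
    private readonly SomeOtherClass other;

    public SomeClass(IThreadDispatcher dispatcher, SomeOtherClass other)
    {
        this.dispatcher = dispatcher;
        this.other = other;
    }

    public void SomeMethod()
    {
        // The unit test asserts that this call happened.
        dispatcher.Run(other.SomeOtherMethod);
    }
}

// Hand-rolled fake for the unit test: records the call instead of starting a thread.
public class FakeDispatcher : IThreadDispatcher
{
    public Action LastWork;

    public void Run(Action work)
    {
        LastWork = work; // the test can invoke this synchronously if it wants to
    }
}
The test news up SomeClass with a FakeDispatcher, calls SomeMethod, and asserts that LastWork is not null (and, if you like, invokes it to verify that SomeOtherMethod is reached).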
BTW, ReSharper makes creating a proxy for OS types (e.g. Thread) easy. Check out the delegating members stuff in their help. If you're not using ReSharper, you should be.
Yes, I know this is not the answer you had hoped for, but believe me, it is the answer you need.
I find it useful to think of tests in the categories:
Unit tests
Integration tests (or perhaps subsystem tests)
Functional test (e.g. UI/application/UAT tests)

The jMock team has described a brilliant approach for Java in the 'jMock Cookbook: Test Multithreaded Code'. The concept of executing the "thread" functionality on the same thread as the test can also be applied to C# code (see this blog post). The C# test would look like this:
[Test]
public void TestMultiThreadingCodeSynchronously()
{
    // Arrange
    SomeOtherClass someOtherClassMock = MockRepository.GenerateMock<SomeOtherClass>();
    DeterministicTaskScheduler taskScheduler = new DeterministicTaskScheduler();
    SomeClass systemUnderTest = new SomeClass(taskScheduler, someOtherClassMock);

    // Act
    systemUnderTest.SomeMethod();

    // Now execute the new task on the current thread.
    taskScheduler.RunTasksUntilIdle();

    // Assert
    someOtherClassMock.AssertWasCalled(x => x.SomeOtherMethod());
}
Your "system under test" would look like this:
public class SomeClass
{
    private readonly TaskScheduler taskScheduler;
    private readonly SomeOtherClass instance;

    public SomeClass(
        TaskScheduler taskScheduler,
        SomeOtherClass instance)
    {
        this.taskScheduler = taskScheduler;
        this.instance = instance;
    }

    public void SomeMethod()
    {
        Task.Factory.StartNew(
            instance.SomeOtherMethod,
            new CancellationToken(),
            TaskCreationOptions.None,
            taskScheduler);
    }
}
The C# solution is described in detail in this blog post.
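For reference, here is a minimal sketch of what such a deterministic scheduler might look like; the real DeterministicTaskScheduler from the blog post may differ in detail:
using System.Collections.Generic;
using System.Threading.Tasks;

public class DeterministicTaskScheduler : TaskScheduler
{
    private readonly List<Task> scheduledTasks = new List<Task>();

    protected override void QueueTask(Task task)
    {
        // Queue the task instead of running it on a pool thread.
        scheduledTasks.Add(task);
    }

    protected override bool TryExecuteTaskInline(Task task, bool taskWasPreviouslyQueued)
    {
        // Never execute inline; the test decides when tasks run.
        return false;
    }

    protected override IEnumerable<Task> GetScheduledTasks()
    {
        return scheduledTasks;
    }

    // Runs all queued tasks synchronously on the calling (test) thread.
    public void RunTasksUntilIdle()
    {
        while (scheduledTasks.Count > 0)
        {
            Task task = scheduledTasks[0];
            scheduledTasks.RemoveAt(0);
            TryExecuteTask(task);
        }
    }
}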

If there is another thread, you have to wait until the method has been called. You have to decide how long to wait before concluding the method wasn't called; there is no tool which can do that for you.
Of course, you can mock the ExecutorService to run the task on the current thread and avoid the issue.

Related

How to create a restful web service with TDD approach?

I've been given the task of creating a RESTful web service with JSON formatting, using WCF, with the methods below, following a TDD approach; the service should store the Product as a text file on disk:
CreateProduct(Product product)
GetAProduct(int productId)
URI Templates:
POST to /MyService/Product
GET to /MyService/Product/{productId}
Creating the service and its web methods is the easy part, but
how would you approach this task with TDD? You should create a test before writing the SUT code.
The rules of unit tests say they should also be independent and repeatable.
I have a number of confusions and issues as below:
1) Should I write my unit tests against the actual service implementation by adding a reference to it, or against the URLs of the service (in which case I'd have to host and run the service)? Or both?
2)
I was thinking one approach could be to create just one test method inside which I create a product, call the CreateProduct() method, then call the GetAProduct() method and assert that the product which was sent is the one I received. In the TearDown() event I just remove the product which was created.
But the issues I have with the above are that:
It tests more than one feature so it's not really a unit test.
It doesn't check whether the data was stored on file correctly
Is it TDD?
If I create a separate unit test for each web method, then for example for the GetAProduct() web method I'd have to have some test data physically stored on the server, since it can't rely on the CreateProduct() unit test; they should be able to run independently.
Please advise.
Thanks.
I'd suggest not worrying about the web service end points and focus on behavior of the system. For the sake of this discussion I'll drop all technical jargon and talk about what I see as the core business problem you're trying to solve: Creating a Product Catalog.
In order to do so, start by thinking through what a product catalog does, not the technical details about how to do it. Use that as the starting point for your tests.
public class ProductCatalogTest
{
    [Test]
    public void allowsNewProductsToBeAdded() {}

    [Test]
    public void allowsUpdatesToExistingProducts() {}

    [Test]
    public void allowsFindingSpecificProductsUsingSku() {}
}
I won't go into detail about how to implement the tests and production code here, but this is a starting point. Once you've got the ProductCatalog production class worked out, you can turn your attention to the technical details like making a web service and marshaling your JSON.
I'm not a .NET guy, so this will be largely pseudocode, but it probably winds up looking something like this.
public class ProductCatalogServiceTest
{
    [Test]
    public void acceptsSkuAsParameterOnGetRequest()
    {
        var mockCatalog = new MockProductCatalog(); // Hand rolled mock here.
        var catalogService = new ProductCatalogService(mockCatalog);

        catalogService.find("some-sku-from-url");

        mockCatalog.assertFindWasCalledWith("some-sku-from-url");
    }

    [Test]
    public void returnsJsonFromGetRequest()
    {
        var mockCatalog = new MockProductCatalog(); // Hand rolled mock here.
        mockCatalog.findShouldReturn(new Product("some-sku-from-url"));
        var mockResponse = new MockHttpResponse(); // Hand rolled mock here.
        var catalogService = new ProductCatalogService(mockCatalog, mockResponse);

        catalogService.find("some-sku-from-url");

        mockResponse.assertWriteWasCalledWith("{ 'sku': 'some-sku-from-url' }");
    }
}
You've now tested end to end, and test drove the whole thing. I personally would test drive the business logic contained in ProductCatalog and likely skip testing the marshaling as it's likely to all be done by frameworks anyway and it takes little code to tie the controllers into the product catalog. Your mileage may vary.
Finally, while test-driving the catalog, I would expect the code to be split into multiple classes; mocking comes into play there, so those classes would be unit tested rather than covered by one large integration test. Again, that's a topic for another day.
Hope that helps!
Brandon
Well, to answer your question, what I would do is write a test around the code that calls the REST service, and use something like Rhino Mocks to arrange (i.e. set up an expectation for the call), act (actually run the code that calls the unit being tested) and assert that you get back what you expect. You could mock out the expected results of the REST call. An actual front-to-back test of the REST service would be an integration test, not a unit test.
So, to be clearer, the unit test you need to write is a test around whatever in the business logic actually calls the REST web service...
Say this is your proposed implementation (let's pretend it hasn't even been written yet):
public class SomeClass
{
    private IWebServiceProxy proxy;

    public SomeClass(IWebServiceProxy proxy)
    {
        this.proxy = proxy;
    }

    public void PostTheProduct()
    {
        proxy.Post("/MyService/Product");
    }

    public void RestGetCall()
    {
        proxy.Get("/MyService/Product/{productId}");
    }
}
This is one of the tests you might consider writing.
[TestFixture]
public class TestingOurCalls
{
    [Test]
    public void TestTheProductCall()
    {
        var webServiceProxy = MockRepository.GenerateMock<IWebServiceProxy>();
        SomeClass someClass = new SomeClass(webServiceProxy);

        webServiceProxy.Expect(p => p.Post("/MyService/Product"));

        someClass.PostTheProduct();

        webServiceProxy.VerifyAllExpectations();
    }
}

Holding a Value in Unit tests

I have a question for you. I have two unit tests which call web services. The value that one unit test returns should be used by another unit test method.
Example
namespace TestProject1
{
    public class UnitTest1
    {
        string TID = string.Empty;

        public void test1()
        {
            // calling webservices and code
            Assert.AreNotEqual(HoID, hID);
            TID = hID;
        }

        public void test2()
        {
            // calling webservices and code
            string HID = TID; // I need the TID value from the above test case here
            Assert.AreNotEqual(HID, hID);
        }
    }
}
How can I store a value in one unit test and use that value in another unit test?
In general, you shouldn't write your tests like this. You cannot ensure that your tests will run in any particular order, so there's no nice way to do this.
Instead, make the tests independent, and refactor the common part into its own (non-test) method that you can call as part of your other test.
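A minimal sketch of that refactoring, assuming a hypothetical CreateHost() helper that wraps the web service call both tests need (the helper and id names are illustrative, not from your code):
public class UnitTest1
{
    // Shared, non-test helper: each test calls it instead of depending on another test.
    private string CreateHost()
    {
        // Call the web service here and return the id it produces;
        // the actual call is whatever test1 did in the question.
        return CallWebServiceAndGetId();
    }

    [TestMethod]
    public void Test1()
    {
        string hId = CreateHost();
        Assert.IsFalse(string.IsNullOrEmpty(hId));
    }

    [TestMethod]
    public void Test2()
    {
        string tId = CreateHost(); // independent of Test1
        // ...exercise the second scenario with tId...
    }

    // Placeholder standing in for the real web service call.
    private string CallWebServiceAndGetId()
    {
        return "id-from-service";
    }
}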
Don't reuse any values. The order in which tests are run is very often random (most common runners, like NUnit's and ReSharper's, run tests in random order; some might even do so in parallel). Instead, simply call the web service again (even if that means having two web service calls) in your second test and retrieve the value you need.
Each test (whether it's unit or integration) should have all the data/dependencies available for it to run. You should never rely on other tests to setup environment/data as that's not what they're written for.
Think of your tests in isolation - each single test is a separate being, that sets up, executes, and cleans up all that is necessary to exercise particular scenario.
Here's an example, following the outlines of Oleksi, of how you could organize this
string TID = string.Empty;

[TestFixtureSetUp]
public void Given()
{
    // calling webservices and code
    TID = hID;
    // calling webservices and code
}

[Test]
public void assertions_on_call_1()
{
    ...
}

[Test]
public void assertions_on_call_2()
{
    if (string.IsNullOrEmpty(TID))
        Assert.Inconclusive("Prerequisites for test not met");
    ...
}

NUnit copying ILogicalThreadAffinative items in CallContext to new threads

I've run into an issue with NUnit and CallContext (using C#) where NUnit copies anything in the existing call context that implements ILogicalThreadAffinative to any newly created thread. For example, in the following example an exception is always thrown in the newly-created thread:
[Test]
public void TestCopiedCallContext()
{
    Foo f = new Foo();
    f.a = 1;
    CallContext.SetData("Test", f);

    new Thread(new ThreadStart(delegate()
    {
        if (CallContext.GetData("Test") != null)
        {
            throw new Exception("Bad!");
        }
    })).Start();

    Thread.Sleep(500);
}

class Foo : ILogicalThreadAffinative
{
    public int a;
}
If Foo doesn't implement ILogicalThreadAffinative then the test passes. I'm using .NET 2.0 (due to other restrictions we cannot use newer versions of .NET). I've also tried using the Requires* attributes available in the latest version of NUnit, but with no success. Does anyone know how to turn this behavior off?
I don't believe you can do what you are attempting to do. One person has suggested putting the code into an assembly where the test runner has access to it.
There is a blog post, which you probably know about, that describes what the issue is.
Unit testing code that does multithreading can be challenging and I tend to isolate threads and wrap static objects.
If it were me, I think that I would try to isolate CallContext.SetData and CallContext.GetData by wrapping call context in a class CallContextWrapper : ICallContextWrapper.
Then I would test that my code uses contextWrapper.SetData("Test", f) and be done with it.
I would trust that whoever wrote CallContext tested its ability to take in some data and transfer it to a new thread. IMO CallContext is framework code that should have already been tested, so you just need to isolate your code's dependency on CallContext.
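A minimal sketch of that wrapper, using the ICallContextWrapper / CallContextWrapper names from this answer (they are illustrative, not an existing API):
using System.Runtime.Remoting.Messaging;

public interface ICallContextWrapper
{
    void SetData(string name, object data);
    object GetData(string name);
}

public class CallContextWrapper : ICallContextWrapper
{
    // Thin pass-through to the real CallContext; your own code depends only on the interface.
    public void SetData(string name, object data)
    {
        CallContext.SetData(name, data);
    }

    public object GetData(string name)
    {
        return CallContext.GetData(name);
    }
}
In a unit test you substitute a fake ICallContextWrapper and assert that SetData("Test", f) was called, without ever touching the real CallContext or a second thread.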

Mocking a blocking call with Rhino Mocks

I'm currently building a class using TDD. The class is responsible for waiting for a specific window to become active, and then firing some method.
I'm using the AutoIt COM library (for more information about AutoIt look here) since the behavior I want is actually a single method in AutoIt.
The code is pretty much as the following:
public class WindowMonitor
{
    private readonly IAutoItX3 _autoItLib;

    public WindowMonitor(IAutoItX3 autoItLib)
    {
        _autoItLib = autoItLib;
    }

    public void Run() // runs indefinitely
    {
        while (true)
        {
            _autoItLib.WinWaitActive("Open File", "", 0);

            // Do stuff now that the window named "Open File" is finally active.
        }
    }
}
As you can see, the AutoIt COM library implements an interface which I can mock (using NUnit and Rhino Mocks):
[TestFixture]
public class When_running_the_monitor
{
    WindowMonitor subject;
    IAutoItX3 mockAutoItLibrary;
    AutoResetEvent continueWinWaitActive = new AutoResetEvent(false);
    AutoResetEvent winWaitActiveIsCalled = new AutoResetEvent(false);

    [SetUp]
    public void Setup()
    {
        // Arrange
        mockAutoItLibrary = MockRepository.GenerateStub<IAutoItX3>();
        mockAutoItLibrary.Stub(m => m.WinWaitActive("", "", 0))
            .IgnoreArguments()
            .Do((Func<string, string, int, int>)((a, b, c) =>
            {
                winWaitActiveIsCalled.Set();
                continueWinWaitActive.WaitOne();
                return 1;
            }));
        subject = new WindowMonitor(mockAutoItLibrary);

        // Act
        new Thread(new ThreadStart(subject.Run)).Start();
        winWaitActiveIsCalled.WaitOne();
    }

    // Assert
    [Test]
    [Timeout(1000)]
    public void should_call_winWaitActive()
    {
        mockAutoItLibrary.AssertWasCalled(m => m.WinWaitActive("Open File", "", 0));
    }

    [Test]
    [Timeout(1000)]
    public void ensure_that_nothing_is_done_while_window_is_not_active_yet()
    {
        // When you do an "AssertWasCalled" for the actions performed when the window
        // becomes active, put an equivalent "AssertWasNotCalled" here.
    }
}
The problem is, the first test keeps timing out. I have already found out that when the stub "WinWaitActive" is called, it blocks (as intended, on the separate thread), and when "AssertWasCalled" is called after that, execution never returns.
I'm at a loss how to proceed, and I couldn't find any examples of mocking out a blocking call.
So in conclusion:
Is there a way to mock a blocking call without making the tests timeout?
(P.S. I'm less interested in changing the design (i.e. "Don't use a blocking call"), since it may be possible to do that here, but I'm sure there are cases where it's a lot harder to change the design, and I'm interested in the more general solution. But if it's simply impossible to mock blocking calls, suggestions like that are more than welcome!)
Not sure if I understand the problem.
Your code is just calling a method on the mock (WinWaitActive). Of course, it can't proceed before the call returns. This is in the nature of the programming language and nothing you need to test.
So if you test that WinWaitActive gets called, your test is done. You could test whether WinWaitActive gets called before anything else, but this requires ordered expectations, which requires the old-style Rhino Mocks syntax and is usually not worth doing.
mockAutoItLibrary = MockRepository.GenerateStub<IAutoItX3>();
subject = new WindowMonitor(mockAutoItLibrary);

subject.Run();

mockAutoItLibrary.AssertWasCalled(m => m.WinWaitActive("Open File", "", 0));
You don't do anything other than calling a method ... so there isn't anything else to test.
Edit: exit the infinite loop
You could make it exit the infinite loop by throwing an exception from the mocks. This is not very nice, but it avoids having all this multi-threading stuff in the unit test.
mockAutoItLibrary = MockRepository.GenerateStub<IAutoItX3>();

// make the stub throw an exception on the second call
// to exit the infinite loop
mockAutoItLibrary
    .Stub(m => m.WinWaitActive(
        Arg<string>.Is.Anything,
        Arg<string>.Is.Anything,
        Arg<int>.Is.Anything))
    .Repeat.Once();
mockAutoItLibrary
    .Stub(m => m.WinWaitActive(
        Arg<string>.Is.Anything,
        Arg<string>.Is.Anything,
        Arg<int>.Is.Anything))
    .Throw(new StopInfiniteLoopException());

subject = new WindowMonitor(mockAutoItLibrary);

try
{
    subject.Run();
}
catch (StopInfiniteLoopException)
{
} // expected exception thrown by mock

mockAutoItLibrary.AssertWasCalled(m => m.WinWaitActive("Open File", "", 0));
Your Test only contains a call to the mocked method. Therefore it tests only your mock instead of any real code, which is an odd thing to do. We might need a bit more context to understand the problem.
Use Thread.Sleep() instead of AutoResetEvents: since you are mocking the COM object that does the blocking window-active check, you can just wait for some time to mimic the behavior, and then make sure that the window is indeed active by making it active programmatically. How you block should not be important in the test, only that you block for some significant time.
Although from your code it is not clear how winWaitActiveIsCalled and continueWinWaitActive contribute, I suspect they should be left out of the WinWaitActive mock. Replace them with a Thread.Sleep(500).
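A sketch of what the stub from the question's Setup might look like under this suggestion (the names follow the question's code; this is only an illustration, not the definitive fix):
mockAutoItLibrary.Stub(m => m.WinWaitActive("", "", 0))
    .IgnoreArguments()
    .Do((Func<string, string, int, int>)((title, text, timeout) =>
    {
        Thread.Sleep(500); // mimic the blocking wait for the window
        return 1;
    }));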

"Hello World" - The TDD way?

Well I have been thinking about this for a while, ever since I was introduced to TDD.
Which would be the best way to build a "Hello World" application that prints "Hello World" to the console, using Test-Driven Development?
What would my tests look like, and around what classes?
Request: No "wikipedia-like" links to what TDD is, I'm familiar with TDD. Just curious about how this can be tackled.
You need to hide the Console behind an interface. (This could be considered useful anyway.)
Write a Test
[TestMethod]
public void HelloWorld_WritesHelloWorldToConsole()
{
    // Arrange
    IConsole consoleMock = MockRepository.GenerateMock<IConsole>();

    // primitive injection of the console
    Program.Console = consoleMock;

    // Act
    Program.HelloWorld();

    // Assert
    consoleMock.AssertWasCalled(x => x.WriteLine("Hello World"));
}
Write the program
public static class Program
{
    public static IConsole Console { get; set; }

    // method that does the "logic"
    public static void HelloWorld()
    {
        Console.WriteLine("Hello World");
    }

    // set up the real environment
    public static void Main()
    {
        Console = new RealConsoleImplementation();
        HelloWorld();
    }
}
Refactor to something more useful ;-)
Presenter-View? (model doesn't seem strictly necessary)
View would be a class that passes the output to the console (simple single-line methods)
Presenter is the interface that calls view.ShowText("Hello World"), you can test this by providing a stub view.
For productivity though, I'd just write the damn program :)
A single test should suffice (in pseudocode):
IView view = Stub<IView>();
Expect( view.ShowText("Hello World") );
Presenter p = new Presenter( view );
p.Show();
Assert.IsTrue( view.MethodsCalled );
Well...I've not seen a TDD version of hello world. But, to see a similarly simple problem that's been approached with TDD and manageability in mind, you could take a look at Enterprise FizzBuzz (code). At least this will allow you to see the level of over-engineering you could possibly achieve in a hello world.
Pseudo-code:
Create a mock of something that accepts a stream.
Invoke helloworld onto this mock through some sort of dependency injection (Like a constructor argument).
Verify that the "Hello World" string was streamed into your mock.
In production code, you use the prompt instead of the mock (a sketch along these lines follows after the rule of thumb below).
Rule of thumb:
Define your success criteria in how the component interacts with other stuff, not just how it interacts with you. TDD focuses on external behavior.
Set up the environment (mocks) to handle the chain of events.
Run it.
Verify.
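To make the pseudo-code above concrete, here is a hedged C# sketch; HelloWorldApp and the injected TextWriter are illustrative names, not from the question:
using System;
using System.IO;

public class HelloWorldApp
{
    private readonly TextWriter output;

    // The output stream is injected via the constructor, as suggested above.
    public HelloWorldApp(TextWriter output)
    {
        this.output = output;
    }

    public void Run()
    {
        output.WriteLine("Hello World");
    }
}

public static class HelloWorldAppTest
{
    public static void Main()
    {
        // A StringWriter stands in for the console ("the mock that accepts a stream").
        var captured = new StringWriter();
        new HelloWorldApp(captured).Run();
        Console.WriteLine(captured.ToString().Trim() == "Hello World" ? "PASS" : "FAIL");

        // In production you would pass Console.Out instead of the StringWriter.
    }
}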
A very interesting question. I'm not a huge TDD user, but I'll throw some thoughts out.
I'll assume that the application that you want to test is this:
public static void Main()
{
    Console.WriteLine("Hello World");
}
Now, since I can't think of any good way of testing this directly I'd break out the writing task into an interface.
public interface IOutputWriter
{
    void WriteLine(string line);
}

public class ConsoleWriter : IOutputWriter
{
    public void WriteLine(string line)
    {
        Console.WriteLine(line);
    }
}
And break the application down like this
public static void Main()
{
    IOutputWriter consoleOut = new ConsoleWriter();
    WriteHelloWorldToOutput(consoleOut);
}

public static void WriteHelloWorldToOutput(IOutputWriter output)
{
    output.WriteLine("Hello World");
}
Now you have an injection point to the method that allows you to use the mocking framework of your choice to assert that the WriteLine method is called with the "Hello World" parameter.
Problems that I have left unsolved (and I'd be interested in input):
How to test the ConsoleWriter class; I guess you still need some UI testing framework to achieve this, and if you had that then the whole problem is moot anyway...
Testing the main method.
Why I feel like I've achieved something by changing one line of untested code into seven lines of code, only one of which is actually tested (though I guess coverage has gone up)
Assuming you know unit testing, and assuming you understand the TDD "red, green, refactor" process (since you said you are familiar with TDD), I'll quickly explain a typical TDD thought process.
Your TDD life will be a lot easier if you think about one particular unit of the problem at a time, and think of everything else connected to it in terms of dependencies. Here is a sample scenario:
I want my program to display "Hello World" on the console.
TDD thought process:
"I think my program will start running, then call the console program, passing my message to it, and then I expect the console program to display it on the screen."
"So I need to test that when I run my program, it calls the console program."
"Now, what are the dependencies? Hmm, I know that the console program is one of them. I don't need to worry about how the console will get the message to the screen (calling the IO device, printing and all that); I just need to know that my program successfully called the console program. I need to trust that the console program works, and if it doesn't, then at the moment I am not responsible for testing it and making sure it works. The responsibility I want to test is that my program, when it starts up, calls the console program."
"But I don't even know exactly what console program to call. Well, I know of System.Console.WriteLine (a concrete implementation), but this may change in the future due to changing requirements, so what do I do?"
"Well, I will depend on an interface (or abstraction) rather than a concrete implementation; then I can create a fake console implementing the interface which I can test against."
public interface Iconsole
{
    void WriteToConsole(string msg);
}

public class FakeConsole : Iconsole
{
    public bool IsCalled = false;

    public void WriteToConsole(string msg)
    {
        IsCalled = true;
    }
}
I have added an IsCalled member whose "state" will change whenever the console program is called.
OK, I know it sounds like a long thought process, but it does pay off. TDD forces you to think before coding, which is better than coding before thinking.
At the end of the day, you may then come up with something like the following way to invoke your program:
var console = new FakeConsole();
console.IsCalled = false;
my_program program = new my_program(console);
program.greet();
I passed console to my_program so that my_program will use console to write our message to the screen.
And my_program may look like this:
public class my_program
{
    Iconsole _consol;

    public my_program(Iconsole consol)
    {
        if (consol != null)
            _consol = consol;
    }

    public void greet()
    {
        _consol.WriteToConsole("Hello world");
    }
}
The final unit test will then be:
[TestMethod]
public void myProgramShouldDisplayHelloWorldToTheConsole()
{
    // arrange
    var console = new FakeConsole();
    console.IsCalled = false;
    my_program program = new my_program(console);

    // act
    program.greet();

    // assert
    Assert.AreEqual(true, console.IsCalled, "console was not called to display the greeting");
}
In Java you could capture ("redirect") the System.out stream and read its contents. I'm sure the same could be done in C#. It's only a few lines of code in Java, so I'm sure it won't be much more in C#.
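In C# the equivalent trick is Console.SetOut; here is a minimal sketch (the class and method names are illustrative):
using System;
using System.IO;

public static class ConsoleCaptureExample
{
    public static void Main()
    {
        TextWriter original = Console.Out;
        using (var writer = new StringWriter())
        {
            Console.SetOut(writer);            // redirect standard output
            Console.WriteLine("Hello World");  // the code under test would run here
            Console.SetOut(original);          // restore the real console

            string captured = writer.ToString().Trim();
            Console.WriteLine(captured == "Hello World" ? "PASS" : "FAIL");
        }
    }
}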
I really have to object to the question! All methodologies have their place, and TDD is good in a lot of places, but user interfaces are the first place where I back off from TDD. This, in my humble opinion, is one of the best justifications of the MVC design pattern: test the heck out of your models and controllers programmatically; visually inspect your view. What you're talking about is hard-coding the data "Hello World" and testing that it makes it to the console. To do this test in the same source language, you pretty much have to dummy the console object, which is the only object that does anything at all.
Alternately, you can script your test in bash:
echo `java HelloWorldProgram`|grep -c "^Hello World$"
A bit difficult to add to a JUnit test suite, but something tells me that was never the plan....
I agree with David Berger; separate off the interface, and test the model. It seems like the "model" in this case is a simple class that returns "Hello, world!". The test would look like this (in Java):
Greeter greeter = new Greeter();
assertEquals("Hello World!", greeter.greet());
I've created a write up of solving Hello World TDD style at http://ziroby.wordpress.com/2010/04/18/tdd_hello_world/ .
I guess something like this:
using NUnit.Framework;
using System.Diagnostics;

[TestFixture]
public class MyTestClass
{
    [Test]
    public void SayHello()
    {
        string greet = "Hello World!";
        Debug.WriteLine(greet);
        Assert.AreEqual("Hello World!", greet);
    }
}
