I have conflicts between some of my unit tests.
I have a class with a static event. When I run each test one at a time I have no problem, but when I run all tests in a batch, the tests that fire the event crash because of the registered listeners (which were registered to the event in previous tests).
Obviously I don't want those event handlers to be executed when the class under test fires the event, so what is the best solution?
I know I wouldn't have the problem if the event were not static, but I would prefer not to redesign this if there is another solution.
Detaching all the event listeners before running the test could be a solution, but I think it is impossible to do from outside the class (which seems reasonable, since we don't want a client to be able to detach all other listeners), and doing it from the inside would mean adding a method to the class for the sole purpose of the unit tests, which is bad practice.
Is there a way to run the test in an isolated mode that would prevent previous tests from affecting it, as if it were run in a completely separate process so that I don't get the same reference to the static event? (I still need to be able to execute all tests in a batch with a single click.)
Thanks for your help and ideas!
For information, I am using the Visual Studio 2012 unit test framework.
Somewhere within each individual test you are going to attach yourself to the static event. So simply detach yourself before the assert, or better, guard the detaching within a finally block:
[TestCase]
public void MyTestDetachingBeforeAssert()
{
    bool finishedCalled = false;
    var eventListener = new EventHandler((sender, e) => finishedCalled = true);

    MyClass.Finished += eventListener;

    // Can lead to detach problems if this throws an exception!
    MyClass.DoSomething();

    MyClass.Finished -= eventListener;

    Assert.That(finishedCalled);
}
[TestCase]
public void MyTestDetachingInFinally()
{
    bool finishedCalled = false;
    var eventListener = new EventHandler((sender, e) => finishedCalled = true);

    try
    {
        MyClass.Finished += eventListener;
        MyClass.DoSomething();
        Assert.That(finishedCalled);
    }
    finally
    {
        MyClass.Finished -= eventListener;
    }
}
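A side note, since the question mentions the Visual Studio 2012 test framework while the samples above use NUnit-style attributes: the same pattern in MSTest just swaps the attribute and the assertion, roughly like this:
[TestMethod]
public void MyTestDetachingInFinally()
{
    bool finishedCalled = false;
    var eventListener = new EventHandler((sender, e) => finishedCalled = true);

    try
    {
        MyClass.Finished += eventListener;
        MyClass.DoSomething();
        Assert.IsTrue(finishedCalled);
    }
    finally
    {
        // Runs even if DoSomething or the assert throws, so no listener
        // leaks into the next test.
        MyClass.Finished -= eventListener;
    }
}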
The Problem:
I am using asynchronous event handling in a class that raises an event about 60 times per second. If I run the software for several hours, it eventually gets into a state where the event stops invoking the assigned delegate, without any exceptions or user input.
Here is the distilled version of the event code I am using:
// Requires: using System.Diagnostics (Trace) and using System.Runtime.Remoting.Messaging (AsyncResult).
// DataReadyEventArgs is defined elsewhere in the project.
class DataSource
{
    public event EventHandler<DataReadyEventArgs> DataReady;

    private void OnDataReady()
    {
        EventHandler<DataReadyEventArgs> evt = DataReady;
        if (evt != null)
        {
            Trace.WriteLine("Invoking DataReady...");
            DataReadyEventArgs args = new DataReadyEventArgs();
            evt.BeginInvoke(this, args, new AsyncCallback(DataReadyHandled), null);
        }
    }

    private void DataReadyHandled(IAsyncResult result)
    {
        AsyncResult aresult = result as AsyncResult;
        EventHandler<DataReadyEventArgs> evt = aresult.AsyncDelegate as EventHandler<DataReadyEventArgs>;
        if (evt != null)
        {
            try
            {
                evt.EndInvoke(result);
                Trace.WriteLine("DataReady was handled.");
            }
            catch
            { }
        }
    }
}
class DataConsumer
{
    public DataConsumer(DataSource src)
    {
        src.DataReady += HandleDataReady;
    }

    public void HandleDataReady(object sender, DataReadyEventArgs e)
    {
        Trace.WriteLine("Handling DataReady");
    }
}
To avoid writing out further code to give the full picture, you can also make the following assumptions:
There is always a strong reference to the DataSource instance so it is not being garbage collected while the DataConsumer is expecting an event
The DataConsumer attaches its handler method once and does not remove it until the instance is garbage collected.
Here is the weird part: when I attach a debugger to the software in its broken state, I can see that the evt variable in OnDataReady() references an EventHandler instance and is not null. It actually calls BeginInvoke, but the HandleDataReady method is never entered. The debugger shows that the EventHandler base class's Method property is null, though. I can't figure out how it is possible that the EventHandler is being changed.
The (First) Question:
What can cause the event in the above code to stop being invoked after several hours of runtime?
My Best Guess:
Could something completely unrelated in my application be corrupting the memory for the EventHandler? I am using both native and managed C++ libraries.
Update:
My best guess turns out to be unlikely, as I have run some other tests that make me reasonably sure the native libraries are not leaking or corrupting memory.
After reproducing the problem another time with logging, I can see that the event is not stopping entirely but is being invoked at a roughly 19-hour interval! Evidently there is a long-running operation or something blocking somewhere in the handler delegate.
To be certain that it's my code and not .NET, I have an updated question...
The (Second) Question:
Is there any behaviour built in to .NET (4.5) or its garbage collector that can block invocation of an EventHandler?
I am using VS2010, writing unit tests with MSTest. My project uses WPF, MVVM and the PRISM framework. I am also using Moq to mock interfaces.
I am testing the interaction between a command and a selected item in a list. The interaction is encapsulated in a ViewModel according to the MVVM pattern. Basically, when SelectedDatabase is set, I want the command to raise CanExecuteChanged. I have written this test for the behaviour:
[TestMethod]
public void Test()
{
    var databaseService = new Mock<IDatabaseService>();
    var databaseFunctionsController = new Mock<IDatabaseFunctionsController>();

    // Create the view model
    OpenDatabaseViewModel viewModel
        = new OpenDatabaseViewModel(databaseService.Object, databaseFunctionsController.Object);

    // Mock up the database and its view model
    var database = TestHelpers.HelpGetMockIDatabase();
    var databaseViewModel = new DatabaseViewModel(database.Object);

    // Hook up the can execute changed event
    var resetEvent = new AutoResetEvent(false);
    bool canExecuteChanged = false;
    viewModel.OpenDatabaseCommand.CanExecuteChanged += (s, e) =>
    {
        resetEvent.Set();
        canExecuteChanged = true;
    };

    // Set the selected database
    viewModel.SelectedDatabase = databaseViewModel;

    // Allow the event to happen
    resetEvent.WaitOne(250);

    // Check that it worked
    Assert.IsTrue(canExecuteChanged,
        "OpenDatabaseCommand.CanExecuteChanged should be raised when SelectedDatabase is set");
}
On the OpenDatabaseViewModel, the SelectedDatabase property is as follows:
public DatabaseViewModel SelectedDatabase
{
    get { return _selectedDatabase; }
    set
    {
        _selectedDatabase = value;
        RaisePropertyChanged("SelectedDatabase");

        // Update the can execute flag based on the save
        ((DelegateCommand)OpenDatabaseCommand).RaiseCanExecuteChanged();
    }
}
And also on the viewmodel:
bool OpenDatabaseCanExecute()
{
    return _selectedDatabase != null;
}
TestHelpers.HelpGetMockIDatabase() just gets a mock IDatabase with some properties set.
This test passes when I run the test from VS2010, but fails when executed as part of an automated build on the server. I put in the AutoResetEvent to try to fix the problem, but it's had no effect.
I discovered that the automated tests were using the /noisolation flag on the MSTest command line, so I removed it. However, that produced a pass once, but a fail the next time.
I think I am missing something important in all of this, but I can't figure out what it is. Can anyone help by telling me what I'm doing wrong?
Updated
The only other remaining places where your code could fail are these two lines from your snippet for the SelectedDatabase property.
RaisePropertyChanged("SelectedDatabase");
// Update the can execute flag based on the save
((DelegateCommand)OpenDatabaseCommand).RaiseCanExecuteChanged();
There are others who have had problems with RaisePropertyChanged() and its use of magic strings, but this is probably not your immediate problem. Nonetheless, you can look at these links if you want to go down the path of removing the magic string dependency (there is a small sketch of that after the links).
WPF, MVVM, and RaisePropertyChanged # WilberBeast
MVVM - RaisePropertyChanged turning code into a mess
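For what it's worth, a minimal sketch of that path, assuming your view model derives from PRISM's NotificationObject (which offers an expression-based overload of RaisePropertyChanged):
// Avoids the "SelectedDatabase" magic string; renaming the property now breaks at compile time.
RaisePropertyChanged(() => SelectedDatabase);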
The RaiseCanExecuteChanged() method is the other suspect; the PRISM documentation reveals that this method expects to dispatch events on the UI thread, and with MSTest there is no guarantee that a UI thread is being used to run the tests.
DelegateCommandBase.RaiseCanExecuteChanged # MSDN
I recommend you add a try/catch block around it and see if any exceptions are thrown when RaiseCanExecuteChanged() is called. Note the exceptions thrown so that you can decide how to proceed next. If you absolutely need to test this event dispatch, you may consider writing a tiny WPF-aware app (or perhaps an STAThread console app) that runs the actual test and exits, and having your test launch that app to observe the result. This will isolate your test from any threading concerns that could be caused by MSTest or your build server.
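As a rough illustration of that diagnostic step (the property and command names are taken from your snippets; the logging target is only an example), the setter could temporarily be wrapped like this:
set
{
    _selectedDatabase = value;
    RaisePropertyChanged("SelectedDatabase");

    try
    {
        // Update the can execute flag based on the save
        ((DelegateCommand)OpenDatabaseCommand).RaiseCanExecuteChanged();
    }
    catch (Exception ex)
    {
        // Purely diagnostic: record whatever PRISM throws when no UI dispatcher
        // is available, then rethrow so the failure stays visible to the test.
        System.Diagnostics.Trace.WriteLine("RaiseCanExecuteChanged threw: " + ex);
        throw;
    }
}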
Original
This snippet of code seems suspect. If your event fires from another thread, the original thread may exit the wait before your assignment runs, causing your flag to be read with a stale value.
viewModel.OpenDatabaseCommand.CanExecuteChanged += (s, e) =>
{
    resetEvent.Set();
    canExecuteChanged = true;
};
Consider reordering the lines in the block to this:
viewModel.OpenDatabaseCommand.CanExecuteChanged += (s, e) =>
{
    canExecuteChanged = true;
    resetEvent.Set();
};
Another issue is that you don't check whether your wait was actually satisfied. If the 250 ms elapse without a signal, your flag will still be false.
See WaitHandle.WaitOne to check what return values you'll receive and update this section of code to handle the case of an unsignaled exit.
// Allow the event to happen
resetEvent.WaitOne(250);

// Check that it worked
Assert.IsTrue(canExecuteChanged,
    "OpenDatabaseCommand.CanExecuteChanged should be raised when SelectedDatabase is set");
I have found an answer to explain what was going on with this unit test. There were other complicating factors that I didn't realise were significant at the time. I didn't include these details in my original question because I did not think they were relevant.
The view model code described in the question is part of a project that integrates with WinForms. I am hosting a PRISM shell as a child of an ElementHost. Following the answer to the Stack Overflow question How to use Prism within an ElementHost, this is added to create an appropriate Application.Current:
public class MyApp : System.Windows.Application
{
}

if (System.Windows.Application.Current == null)
{
    // create the Application object
    new MyApp();
}
The above code is not exercised by the unit test in question. However, it was being exercised in other unit tests that were being run beforehand, and all were run together using the /noisolation flag with MSTest.exe.
Why should this matter? Well, buried in the PRISM code that is called as a consequence of
((DelegateCommand)OpenDatabaseCommand).RaiseCanExecuteChanged();
in the internal class Microsoft.Practices.Prism.Commands.WeakEventHandler is this method:
public static DispatcherProxy CreateDispatcher()
{
    DispatcherProxy proxy = null;
#if SILVERLIGHT
    if (Deployment.Current == null)
        return null;
    proxy = new DispatcherProxy(Deployment.Current.Dispatcher);
#else
    if (Application.Current == null)
        return null;
    proxy = new DispatcherProxy(Application.Current.Dispatcher);
#endif
    return proxy;
}
It then uses the dispatcher to call the event handler in question:
private static void CallHandler(object sender, EventHandler eventHandler)
{
    DispatcherProxy dispatcher = DispatcherProxy.CreateDispatcher();

    if (eventHandler != null)
    {
        if (dispatcher != null && !dispatcher.CheckAccess())
        {
            dispatcher.BeginInvoke((Action<object, EventHandler>)CallHandler, sender, eventHandler);
        }
        else
        {
            eventHandler(sender, EventArgs.Empty);
        }
    }
}
So it attempts to dispatch the event on the UI thread of the current application if there is one; otherwise it just calls the eventHandler directly. For the unit test in question, this led to the event being lost.
After trying many different things, the solution I settled on was just to split up the unit tests into different batches, so the unit test above is run with Application.Current == null.
I'm running NCrunch in a new MVC 4 solution in VS2012, using NUnit and Ninject.
When I first open the solution, all 50 or so tests run and pass successfully.
After I make any code change (even just adding a space), NCrunch reports that most of my unit tests fail. The same thing happens if I press 'Run all tests' in the NCrunch window.
But if I hit the 'Run all tests visible here' button, all 50 tests pass again and NCrunch reports everything is fine.
Also, when I run each test individually, they pass every time.
When they do fail, they seem to be failing in my Ninject setup code:
Error: TestFixtureSetUp failed in ControllerTestSetup
public class ControllerTestSetup
{
    [SetUp]
    public void InitIntegrationTest()
    {
        var context = IntegrationTestContext.Instance;
        context.Init();
        context.NinjectKernel.Load<MediGapWebTestModule>();
    }

    [TearDown]
    public void DisposeIntegrationTest()
    {
        IntegrationTestContext.Instance.Dispose();
    }
}
public class IntegrationTestContext : IDisposable
{
    private static IntegrationTestContext _instance = null;
    private static readonly object _monitor = new object();

    private IntegrationTestContext() { }

    public static IntegrationTestContext Instance
    {
        get
        {
            if (_instance == null)
            {
                lock (_monitor)
                {
                    if (_instance == null)
                    {
                        _instance = new IntegrationTestContext();
                    }
                }
            }
            return _instance;
        }
    }

    // Init(), NinjectKernel, Dispose() and the other members are omitted here.
}
All the tests also run in the ReSharper test runner without problems every time.
Does anyone know what could be causing this?
I'm guessing it's something to do with the singleton lock code inside the Instance property, but I'm not sure.
==============================================================================
Progress:
I was able to track this down to an error in the Ninject setup method above by wrapping it in a try/catch statement and writing the error to the output window.
The exception was caused by trying to load a module more than once, even though I definitely don't load it twice and I don't use any type of automatic module loading.
This happens on these lines:
LocalSessionFactoryModule.SetMappingAssemblies(() => new[] { typeof(ProviderMap).Assembly });
_kernel.Load<LocalSessionFactoryModule>();
_sessionFactory = _kernel.Get<ISessionFactory>();
where LocalSessionFactoryModule is the Ninject module class derived from NinjectModule.
Why is this only happening with NCrunch, and what can I do to solve this issue? Is there a way to check whether a module has already been loaded?
NCrunch will never execute tests concurrently within the same process, so unless you have multi-threaded behaviour inside your test logic, it should be safe to say that this isn't an issue caused by the locking or threading around the singleton.
As you've already tried disabling parallel execution and this hasn't made a difference, I'm assuming that the problem wouldn't be caused by concurrent use of resources outside the test runner process (i.e. files on disk).
This means that the problem is almost certainly related to the sequence in which the tests are being executed. Almost all manual test runners (including ReSharper) will run tests in a defined sequence from start to finish. This is good for consistency, but it can mask problems that may surface when the tests are run in an inconsistent/random order. NCrunch will execute tests in order of priority and can also reuse test processes between test runs, which can make the runtime behaviour of your tests different if they haven't been designed with this in mind.
A useful way to surface (and thus debug) sequence related issues is to try running your tests in a manually defined order by using NCrunch. If you right-click a test inside the NCrunch Tests Window, under the 'Advanced' menu you'll find an option to run a test using an existing task runner process. Try this action against several of your tests to see if you can reproduce the sequence that surfaces the problem. When it happens, you should easily be able to get a debugger onto the test and find out why it is failing.
Most sequence related problems are caused by uncleared static members, so make sure each of your tests is written in the assumption that existing state may be left behind by another test that has been run within the process. Another option is to ensure all state is fully cleared by tests on tear down (although in my opinion, this is often a less pragmatic approach).
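As a rough sketch of both ideas, guarding the module load and clearing static state between tests (note that HasModule and the module-name convention are my assumptions about the Ninject API, and Reset() is a hypothetical helper, not something from your code):
// In the setup: only load the module if the (possibly reused) kernel doesn't
// already have it. A NinjectModule's Name defaults to its type's full name.
string moduleName = typeof(LocalSessionFactoryModule).FullName;
if (!_kernel.HasModule(moduleName))
{
    _kernel.Load<LocalSessionFactoryModule>();
}

// In the tear down: make sure no static state survives into the next test
// that runs in the same (reused) process.
[TearDown]
public void DisposeIntegrationTest()
{
    IntegrationTestContext.Instance.Dispose();
    IntegrationTestContext.Reset(); // hypothetical helper that nulls out _instance
}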
I am maintaining some code that has two FileSystemWatcher events, which makes it difficult to debug (and it has an error). My idea is to simplify the code by making the execution sequential, pretty much like this:
Main method
1) normal code here
2) enable event 1, let it check for files, disable it when it is done running once
3) enable event 2, let it check for files, disable it when it is done running once
Then the database logs would make more sense, and I would be able to see which part of the program is doing something wrong.
private void InitializeFileSystemWatcher()
{
    this.MessageMonitor = new FileSystemWatcher(this.MessagePath, this.MessageFilter);
    this.MessageMonitor.IncludeSubdirectories = true; // Recursive.
    this.MessageMonitor.Created += new FileSystemEventHandler(OnMessageReceived);
    this.MessageMonitor.EnableRaisingEvents = true;
}
From Main, I can switch EnableRaisingEvents from true to false. Both events index the files in some folder and invoke a callback method.
My question is this: if the event handler is currently executing and I set EnableRaisingEvents = false, will it pause, or continue to execute until it finishes?
If it does continue, I figure I can just set a bool doRun flag at the beginning and end of the handler as a check for the main method.
You should just detach the event handler after you have checked that it is working properly, and then instantiate the second FileSystemWatcher.
Inside OnMessageReceived you could do something like:
public void OnMessageReceived(object sender, FileSystemEventArgs e) // FileSystemEventHandler signature
{
    MessageMonitor.Created -= OnMessageReceived;      // stop listening on the first watcher

    // Do your things

    OtherMessageMonitor.Created += OnMessageReceived; // hand off to the second watcher
}
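If it helps, here is a rough sketch of the sequential flow described in the question (the paths and class names are illustrative, not from the original project): each watcher is enabled, allowed to handle one Created event, and then disabled before the next one starts.
using System;
using System.IO;
using System.Threading;

class SequentialWatcherDemo
{
    // Enable the watcher, block until its Created event has fired once,
    // then disable it again so the next step runs in isolation.
    static void RunOnce(FileSystemWatcher watcher)
    {
        var handledOnce = new ManualResetEventSlim(false);

        FileSystemEventHandler handler = (sender, e) =>
        {
            Console.WriteLine("Handled: " + e.FullPath);
            handledOnce.Set();
        };

        watcher.Created += handler;
        watcher.EnableRaisingEvents = true;

        handledOnce.Wait();                  // blocks until a file shows up

        watcher.EnableRaisingEvents = false; // no new events are raised after this
        watcher.Created -= handler;          // a handler that is already running still finishes
    }

    static void Main()
    {
        var messageWatcher = new FileSystemWatcher(@"C:\inbox\messages", "*.*");
        var ackWatcher = new FileSystemWatcher(@"C:\inbox\acks", "*.*");

        // Steps 2 and 3 of the plan in the question, run strictly one after the other.
        RunOnce(messageWatcher);
        RunOnce(ackWatcher);
    }
}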
I'm working on a project at the moment where I need to interoperate with code that swallows exceptions. In particular, I'm writing NUnit unit tests. There are some places where I want to embed assertions within code that gets passed as a delegate, as part of mocking a particular behavior. The problem I'm having is that the AssertionException gets swallowed by the code calling the delegate, which means the test passes even though the Assert failed.
Is there any way to inform NUnit that a test should fail that can't be circumvented by catching AssertionException? I can't modify the code that swallows the exceptions, as I don't have full ownership and it's already in semi-production use. I'm hoping there's a clean way to accomplish this.
The best I've come up with is something like this:
private static string _assertionFailure;

public static void AssertWrapper(Action action)
{
    try
    {
        action();
    }
    catch (AssertionException ex)
    {
        _assertionFailure = ex.Message;
        throw;
    }
}

[Test]
[ExpectedException(typeof(AssertionException))]
public void TestDefeatSwallowing()
{
    Action failure = () => AssertWrapper(() => Assert.Fail("This is a failure"));

    EvilSwallowingMethod(failure);

    if (_assertionFailure != null)
        Assert.Fail(_assertionFailure);
}
private void EvilSwallowingMethod(Action action)
{
    try
    {
        action();
    }
    catch
    {
    }
}
It works, but it's pretty ugly. I have to wrap every Assert call and I have to check at the end of every test if an assertion was swallowed.
So you're doing something like this? (this is using Moq syntax)
var dependency1 = new Mock<IDependency1>();
dependency1.Setup(d => d.CalledMethod([Args]))
           .Callback(TestOutArgsAndPossiblyThrow);

var objectUnderTest = new TestedObject(dependency1.Object);
objectUnderTest.MethodThatCallsIDependency1dotCalledMethod();
And you've got TestOutArgsAndPossiblyThrow encapsulated in your AssertWrapper class?
Unless that's way off, I'd say you're doing it just about right. You have execution re-entering your test at a point where you can record the state of the call to the dependency. Whether that's done via catching exceptions and analyzing them or just directly inspecting the values of the method parameters, you've just gotta do the work. And if you're swallowing exceptions inside the black box, you're going to have to monitor them before they get back into the black box.
I still say you'd be much better off with appropriate logging and notification (you don't have to notify the end users, necessarily). To @TrueWill's point - what do you do when there's an IOException or the database isn't available?
DISCUSSION EDIT
Is your scenario structured like this?
TEST -> TESTED CODE -> SWALLOWING CODE -> THROWING MOCK
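For what it's worth, here is a rough sketch of one way to trim the boilerplate the question complains about, still using the AssertWrapper idea: record the swallowed failure in a static field and rethrow it from a [TearDown], so individual tests no longer need their own end-of-test check. (This is my variation, not something from the original post.)
using System;
using NUnit.Framework;

[TestFixture]
public class SwallowedAssertionTests
{
    private static string _assertionFailure;

    // Wrap any assertion that will run inside the exception-swallowing code.
    public static void AssertWrapper(Action action)
    {
        try
        {
            action();
        }
        catch (AssertionException ex)
        {
            _assertionFailure = ex.Message; // remember the failure before it is swallowed
            throw;
        }
    }

    [SetUp]
    public void ResetFailure()
    {
        _assertionFailure = null;
    }

    [TearDown]
    public void FailIfAssertionWasSwallowed()
    {
        // NUnit reports an exception thrown here against the test that just ran,
        // so a swallowed assertion still surfaces without per-test checks.
        if (_assertionFailure != null)
            Assert.Fail(_assertionFailure);
    }
}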