Understanding the Dispose method for an in-memory database - C#

I have this base class for all my unit tests that spins up an in-memory database
public abstract class TestWithSqlite : IDisposable
{
    private const string InMemoryConnectionString = "DataSource=:memory:";
    private readonly SqliteConnection _connection;
    protected readonly ToDoDbContext DbContext;

    protected TestWithSqlite()
    {
        _connection = new SqliteConnection(InMemoryConnectionString);
        _connection.Open();
        var options = new DbContextOptionsBuilder<ToDoDbContext>()
            .UseSqlite(_connection)
            .Options;
        DbContext = new ToDoDbContext(options);
        DbContext.Database.EnsureCreated();
    }

    public void Dispose()
    {
        _connection.Close();
    }
}
My question is: if I call DbContext.something in one of my tests, is it the Dispose method that ensures that this instance of the database is closed when the test ends, so that the next test that uses DbContext gets a new instance?

Every unit test should have a new DbContext. You don't want any dependencies between tests. Therefore, calling dispose at the end of a test is correct.

The xUnit documentation describes this. So the class containing your tests could implement IDisposable. By default, xUnit will run every method in your test class in isolation, and will call Dispose, so any object instances are unique per test.
If you want to share object instances, you can use fixtures, but it sounds like you want isolation between your tests, which is there by default.
So if you now directly add test methods to the class in your question, the context should be unique for each test. You should be able to test that by putting breakpoints in your test methods (or Dispose) and then debug the tests, to see what happens.
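To make that concrete, here is a hedged sketch of what such a test class could look like (the `ToDoItem` entity and test names are illustrative assumptions, not from the question):

```csharp
// xUnit creates a fresh instance of the test class for every [Fact]
// and calls Dispose() after each one, so each test gets its own
// connection and DbContext via the TestWithSqlite constructor.
public class ToDoRepositoryTests : TestWithSqlite
{
    [Fact]
    public void Add_item_persists_it()
    {
        DbContext.Add(new ToDoItem { Title = "first" });   // hypothetical entity
        DbContext.SaveChanges();
        Assert.Equal(1, DbContext.Set<ToDoItem>().Count());
    }

    [Fact]
    public void Database_starts_empty_for_each_test()
    {
        // Passes even if the test above ran first: the constructor ran
        // again, so this is a brand-new in-memory database.
        Assert.Empty(DbContext.Set<ToDoItem>());
    }
}
```

Putting a breakpoint in the base constructor and in Dispose, as suggested above, will show both being hit once per test method.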

When I take a look into the SQLite.Net help, the documentation states:
Dispose - Disposes and finalizes the connection, if applicable.
In other words, all SQLiteConnection members become invalid and all resources are released.
Regards
Martin

Since your test class is used for multiple tests, having a base class for things that should be unique to one test is a bad idea.
You want to control exactly when and how a database is opened and when and how it's removed from memory.
That means your class should not be inherited, but rather used by each single test.
Make your class a normal class that can be instantiated and will provide a DBContext through a method or property. Open and create the database in the constructor, close and remove the database in the dispose method as you already do.
Your test should start with a using block that instantiates this class.
Inside the using block your tests can use the class and the DbContext it provides. The using block will take care of the dispose, no matter how you leave your test (the code might throw an exception, after all).
I don't remember exactly how it works, but I believe you can give names to in-memory databases. It might be a good idea to do so; otherwise you will have to make sure your tests never run in parallel.
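A minimal sketch of that shape (class and entity names are illustrative; Microsoft.Data.Sqlite does support named, shareable in-memory databases via a `Mode=Memory;Cache=Shared` connection string):

```csharp
public sealed class SqliteTestDatabase : IDisposable
{
    private readonly SqliteConnection _connection;
    public ToDoDbContext DbContext { get; }

    public SqliteTestDatabase(string name)
    {
        // A named, shared-cache in-memory database lives as long as at
        // least one connection to it stays open; giving each test a
        // distinct name keeps parallel tests from seeing each other's data.
        _connection = new SqliteConnection(
            $"DataSource={name};Mode=Memory;Cache=Shared");
        _connection.Open();

        var options = new DbContextOptionsBuilder<ToDoDbContext>()
            .UseSqlite(_connection)
            .Options;
        DbContext = new ToDoDbContext(options);
        DbContext.Database.EnsureCreated();
    }

    public void Dispose()
    {
        DbContext.Dispose();
        _connection.Dispose();   // dropping the last connection discards the database
    }
}

// In a test:
// using (var db = new SqliteTestDatabase("my_test"))
// {
//     // exercise the code under test via db.DbContext
// }   // disposed even if the test throws
```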

Related

How to refactor to avoid using a Shim?

I'm pretty new to Unit Testing and am exploring the Microsoft Fakes framework - primarily because it's free and it allows me to mock SharePoint objects easily with the Emulators package. I've seen various mentions on SO and elsewhere that Shims are evil and I more or less understand why. What I don't get is how to avoid them in one specific case - in other words, "how should I refactor my code to avoid having to use shims?"
For the code in question, I have a JobProcessor object that has properties and methods, some of which are private as they should only be called from the public Execute method. I want to test that when Execute is called and there is a Job available that its Process method is called as I need to do some extra logging.
Here's the relevant code:
// in system under test - JobProcessor.cs
private IJob CurrentJob { get; set; }

public void Execute()
{
    GetJobToProcess(); // stores Job in CurrentJob property if found
    if (ShouldProcessJob)
    {
        CurrentJob.ProcessJob();
    }
}
I need to do some extra things if ProcessJob is called from a test, so I set up a Stub in my Test Method to do those extra things:
StubIJob fakeJob = new StubIJob()
{
    ProcessJob = () =>
    {
        // do my extra things here
    }
};
I'm testing the ProcessJob method itself elsewhere, so I don't care that it does nothing but my extra stuff here. As I understand things, I now need to set up a Shim to have the private method GetJobToProcess on JobProcessor (my system under test) return my fake job so that my stubbed method is called:
processor = new JobProcessor();
ShimJobProcessor.AllInstances.GetJobToProcess = (@this) =>
{
    var privateProcessor = new PrivateObject(processor);
    privateProcessor.SetProperty("CurrentJob", fakeJob); // force my test Job to be processed so the Stub is used
};
In this case, how should I avoid using the Shim? Does it matter?
Thanks.
This is a case where rather than using a shim or stub, I'd just make the method return a boolean to notify whether or not the inner call has happened.
The problem with using fakes there is that you're assuming that some method of some object is called, which the test should not know. Tests should be dumb, and only see the outside of the code. Tests, like any other code, should not care how a value was reached, just that it is correct.
However, your code has another issue as well. You're getting some unknown object and using it within the same scope. You should remove the call to GetJobToProcess from Execute.
It's the principle of Dependency Injection: a method should not spin up and hide its dependencies; if it depends on an object, that object should be possible to change freely or be passed in. The exact implementation of the job should not matter to the Execute method, and that, along with the naming, implies that you should not be getting that object and executing it in the same call.
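One hedged sketch of that refactoring (the `IJobSource` abstraction is an assumption introduced here, not part of the original code): inject the thing that finds jobs, and the shim becomes an ordinary stub.

```csharp
// Hypothetical abstraction over the private GetJobToProcess() call.
public interface IJobSource
{
    IJob GetJobToProcess();   // may return null when no job is available
}

public class JobProcessor
{
    private readonly IJobSource _jobSource;

    public JobProcessor(IJobSource jobSource)
    {
        _jobSource = jobSource;
    }

    public void Execute()
    {
        // The dependency is passed in, so a test can supply a stub
        // source that returns fakeJob - no Shim, no PrivateObject.
        IJob currentJob = _jobSource.GetJobToProcess();
        if (currentJob != null)
        {
            currentJob.ProcessJob();
        }
    }
}

// In a test (stub type name is illustrative):
// var processor = new JobProcessor(stubSourceReturning(fakeJob));
// processor.Execute();
```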

How to avoid passing a context reference among classes

Dynamics CRM 2011 on premise. (But this problem exists in many situations away from Dynamics CRM.)
CRM plugins have an entry point:
void IPlugin.Execute (IServiceProvider serviceProvider)
(http://msdn.microsoft.com/en-us/library/microsoft.xrm.sdk.iplugin.execute.aspx)
serviceProvider is a reference to the plugin execution context. Anything useful that a plugin does requires accessing serviceProvider, or a member of it.
Some plugins are large and complex and contain several classes. For example, I'm working on a plugin that has a class which is instantiated multiple times. This class needs to use serviceProvider.
One way to get access to serviceProvider from all the classes that need it would be to add a property to all those classes and then to set that property. Or to add properties for the parts of serviceProvider that each class needs. Either of these approaches would result in lots of duplicate code.
Another approach would be to have a global variable in the scope of the thread. However, according to http://msdn.microsoft.com/en-us/library/cc151102.aspx one "should not use global class variables in plug-ins."
So what is the best way to have access to serviceProvider without passing it around everywhere?
P.S. If an example helps, serviceProvider provides access to a logging object. I want almost every class to log. I don't want to pass a reference to the logging object to every class.
That's not quite what the warning in the documentation is getting at. The IServiceProvider isn't a global variable in this context; it's a method parameter, and so each invocation of Execute gets its own provider.
For improved performance, Microsoft Dynamics CRM caches plug-in instances. The plug-in's Execute method should be written to be stateless because the constructor is not called for every invocation of the plug-in. In addition, multiple threads could be running the plug-in at the same time. All per invocation state information is stored in the context. This means that you should not use global class variables in plug-ins [Emphasis mine].
There's nothing wrong with passing objects from the context to helper classes which need them. The warning advises against storing something in a field ("class variable") on the plugin class itself, which may affect a subsequent call to Execute on the same instance, or cause concurrency problems if Execute is called by multiple threads on the same instance simultaneously.
Of course, this "globalness" has to be considered transitively. If you store anything in either the plugin class or in a helper class in any way that multiple calls to Execute can access (using fields on the plugin class or statics on either plugin or helper classes, for example), you leave yourself open to the same problem.
As a separate consideration, I would write the helper classes involved to accept types as specific to their function as possible - down to the level of individual entities - rather than the entire IServiceProvider. It's much easier to test a class which needs only an EntityReference than one which needs to have an entire IServiceProvider and IPluginExecutionContext mocked up.
On global variables vs injecting values required by classes
You're right, this is something that comes up everywhere in object-oriented code. Take a look at these two implementations:
public class CustomEntityFrubber
{
    public CustomEntityFrubber(IOrganizationService service, Guid entityIdToFrub)
    {
        this.service = service;
        this.entityId = entityIdToFrub;
    }

    public void FrubTheEntity()
    {
        // Do something with service and entityId.
    }

    private readonly IOrganizationService service;
    private readonly Guid entityId;
}

// Initialised by the plugin's Execute method.
public static class GlobalPluginParameters
{
    public static IOrganizationService Service
    {
        get { return service; }
        set { service = value; }
    }

    public static Guid EntityIdToFrub
    {
        get { return entityId; }
        set { entityId = value; }
    }

    [ThreadStatic]
    private static IOrganizationService service;

    [ThreadStatic]
    private static Guid entityId;
}

public class CustomEntityFrubber
{
    public void FrubTheEntity()
    {
        // Do something with the members on GlobalPluginParameters.
    }
}
So assume you've implemented something like the second approach, and now you have a bunch of classes using GlobalPluginParameters. Everything is going fine until you discover that one of them is occasionally failing because it needs an instance of IOrganizationService obtained by calling CreateOrganizationService(null), so it accesses CRM as the system user rather than the calling user (who doesn't always have the required privileges).
Fixing the second approach requires you to add another field to your growing list of global variables, remembering to make it ThreadStatic to avoid concurrency problems, then changing the code of CustomEntityFrubber to use the new SystemService property. You have tight coupling between all these classes.
Not only that, all these global variables hang around between plugin invocations. If your code has a bug that somehow bypasses the assignment of GlobalPluginParameters.EntityIdToFrub, suddenly your plugin is inexplicably operating on data that wasn't passed to it by the current call to Execute.
It's also not obvious exactly which of these global variables the CustomEntityFrubber requires, unless you read its code. Multiply that by however many helper classes you have, and maintaining this code starts to become a headache. "Now, does this object need me to have set Guid1 or Guid2 before I call it?" On top of that, the class itself can't be sure that some other code won't go and change the values of global variables it was relying on.
If you used the first approach, you simply pass in a different value to the CustomEntityFrubber constructor, with no further code changes needed. Furthermore, there's no stale data hanging around. The constructor makes it obvious which dependencies the class has, and once it has them, it can be sure that they don't change except in ways they were designed for.
As you say, you shouldn't put a member variable on the plugin since instances are cached and reused between requests by the plugin pipeline.
The approach I take is to create a class that performs the task you need, and pass it a modified LocalPluginContext (made a public class) provided by the Developer Toolkit (http://msdn.microsoft.com/en-us/library/hh372957.aspx) in the constructor. Your class can then store that instance for the purposes of doing its work, just as you would with any other piece of code. You are essentially decoupling from the restrictions imposed by the plugin framework. This approach also makes it easier to unit test, since you only need to provide the execution context to your class rather than mocking the entire plugin pipeline.
It's worth noting that there is a bug in the automatically generated Plugin.cs class in the Developer Toolkit where it doesn't set the ServiceProvider property - At the end of the constructor of the LocalPluginContext add the line:
this.ServiceProvider = serviceProvider;
I have seen some implementations of an IoC approach in Plugins - but IMHO it makes the plugin code way too complex. I'd recommend making your plugins lean and simple to avoid threading/performance issues.
There are multiple things I would worry about in this design request (not that it's bad, just things one should be aware of and anticipate).
IOrganizationService is not multi-thread safe. I'm assuming that other aspects of the IServiceProvider are not as well.
Testing things at an IServiceProvider level is much more complicated due to the additional properties that have to be mocked
You'd need a way to handle logging if you ever decided to call logic that is currently in your plugin from outside the plugin (e.g. a command-line service).
If you don't want to be passing the object around everywhere, the simple solution is to create a static property on some class that you can set it upon plugin execution, and then access from anywhere.
Of course now you have to handle issue #1 from above, so it'd have to be a singleton manager of some sort, which would probably use the current thread's id to set and retrieve the value for that thread. That way if the plugin is fired twice, you could retrieve the correct context based on your currently executing thread. (Edit: rather than some funky thread-id lookup dictionary, shambulator's ThreadStatic property should work.)
For issue #2, I wouldn't store the IServiceProvider as is, but split it up into its different properties (e.g. IPluginExecutionContext, IOrganizationService, etc.).
For issue #3, it might make sense to store an action or a function in your manager rather than the object values themselves. For example, rather than storing the IPluginExecutionContext, store a func that accepts a string to log and uses the IPluginExecutionContext to log it. This allows other code to set up its own logging, without being dependent on executing from within a plugin.
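A hedged sketch of that idea (the `PluginLog` type and its members are illustrative, not from any SDK): hold a logging delegate instead of the plugin context itself.

```csharp
// Holds a logging callback instead of a reference to the plugin context.
public class PluginLog
{
    private readonly Action<string> _log;

    public PluginLog(Action<string> log)
    {
        _log = log ?? (_ => { });   // fall back to a no-op logger
    }

    public void Write(string message) => _log(message);
}

// Inside the plugin's Execute method, wire it to the CRM tracing service:
// var tracing = (ITracingService)serviceProvider.GetService(typeof(ITracingService));
// var log = new PluginLog(msg => tracing.Trace(msg));

// In a command-line host or a unit test, wire it to anything else:
// var log = new PluginLog(Console.WriteLine);
```

Helper classes then depend only on `PluginLog` (or `Action<string>` directly), so they can run and be tested outside the plugin pipeline.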
I haven't made any of these plugins myself, but I would treat the IServiceProvider like an I/O device.
Get the data you need from it and convert that data to a format that suits your plugin. Use the transformed data to set up the other classes. Get the output from the other classes and then translate it back into terms the IServiceProvider can understand and use.
Your input and output are dependent on the IServiceProvider, but the processing doesn't have to be.
From Eduardo Avaria at http://social.microsoft.com/Forums/en-US/f433fafa-aff7-493d-8ff7-5868c09a9a9b/how-to-avoid-passing-a-context-reference-among-classes
Well, as someone at SO already told you, the global-variables restriction is there because the plugin won't be instantiated again if it's called within the same context (the object context and probably other environmental conditions), so any custom global variable would be shared between those instances; but since the context will be the same, there's no problem in assigning it to a global variable if you want to share it between a lot of classes.
Anyways, I'd rather pass the context in the constructors and have a little more control over it, but that's just me.

C# Unit Test - To mock, stub or use explicit implementation

This has been discussed a number of times before, but the merits in the below examples aren't obvious, so please bear with me.
I'm trying to decide whether to use mock implementations in my unit tests and am undecided given the following two examples, the first using NSubstitute for mocking and the second using a SimpleInjector (Bootstrapper object) resolved implementation.
Essentially both are testing for the same thing, that the Disposed member is set to true when the .Dispose() method is called (see implementation of method at the bottom of this post).
To my eye, the second method makes more sense for regression testing as the mock proxy explicitly sets the Disposed member to be true in the first example, whereas it is set by the actual .Dispose() method in the injected implementation.
Why would you suggest I choose one over the other for verifying that the method behaves as expected? I.e. that the .Dispose() method is called, and that the Disposed member is set correctly by this method.
[Test]
public void Mock_socket_base_dispose_call_is_received()
{
    var socketBase = Substitute.For<ISocketBase>();
    socketBase.Disposed.Should().BeFalse("this is the default disposed state.");
    socketBase.Dispose();
    socketBase.Received(1).Dispose();
    socketBase.Disposed.Returns(true);
    socketBase.Disposed.Should().BeTrue("the ISafeDisposable interface requires this.");
}

[Test]
public void Socket_base_is_marked_as_disposed()
{
    var socketBase = Bootstrapper.GetInstance<ISocketBase>();
    socketBase.Disposed.Should().BeFalse("this is the default disposed state.");
    socketBase.Dispose();
    socketBase.Disposed.Should().BeTrue("the ISafeDisposable interface requires this.");
}
For reference the .Dispose() method is simply this:
/// <summary>
/// Performs application-defined tasks associated with freeing, releasing, or resetting unmanaged resources.
/// </summary>
public void Dispose()
{
    Dispose(true);
    GC.SuppressFinalize(this);
}

/// <summary>
/// Releases unmanaged and - optionally - managed resources.
/// </summary>
/// <param name="disposeAndFinalize"><c>true</c> to release both managed and unmanaged resources; <c>false</c> to release only unmanaged resources.</param>
protected void Dispose(bool disposeAndFinalize)
{
    if (Disposed)
    {
        return;
    }

    if (disposeAndFinalize)
    {
        DisposeManagedResources();
    }

    DisposeUnmanagedResources();
    Disposed = true;
}
Cheers
Both test methods seem quite bizarre to me. With the first method, you don't seem to test anything (or I might be misunderstanding what NSubstitute does), because you just mock the ISocketBase interface (that has no behavior to test) and start testing that mock object instead of the real implementation.
The second method is bad as well, since you should NOT use any DI container inside your unit tests. This only makes things more complicated because:
You now use shared state that all tests use, which makes all tests depend on each other (tests should run in isolation).
The container bootstrap logic will get very complex, because you want to insert different mocks for different tests, and again, no objects shared between tests.
Your tests get an extra dependency on a framework or facade that just doesn't need to exist. In this sense you're simply making your tests more complicated. It might be just a little bit more complicated, but it's an extra complication nonetheless.
Instead, what you should do is always create the class under test (SUT) inside the unit test (or a test factory method) itself. You might still want to create the SUTs dependencies using a mocking framework but this is optional. So, IMO the test should look something like this:
[Test]
public void A_nondisposed_Socket_base_should_not_be_marked_disposed()
{
    // Arrange
    Socket socket = CreateValidSocket();

    // Assert
    socket.Disposed.Should().BeFalse(
        "A non-disposed socket should not be flagged.");
}

[Test]
public void Socket_base_is_marked_as_disposed_after_calling_dispose()
{
    // Arrange
    Socket socket = CreateValidSocket();

    // Act
    socket.Dispose();

    // Assert
    socket.Disposed.Should().BeTrue(
        "Should be flagged as Disposed.");
}

private static Socket CreateValidSocket()
{
    return new Socket(
        new FakeDependency1(), new FakeDependency2());
}
Note that I split up your single test into 2 tests. That Disposed should be false before dispose is called is not a precondition for that test to run; it's a requirement of the system to work. In other words, you need to be explicit about this and need this second test.
Also note the use of the CreateValidSocket factory method that is reused over multiple tests. You might have multiple overloads (or optional parameters) for this method when other tests check other parts of the class that require more specific fake or mock objects.
You are concerned with too much. This test is testing whether or not a given implementation is correctly disposing, and as such your test should reflect that. See the pseudo code below. The trick to non-brittle tests is to only test the absolute minimum required to satisfy the test.
public class When_disposed_is_called
{
    public void The_object_should_be_disposed()
    {
        var disposableObjects = someContainer.GetAll<IDisposable>();
        disposableObjects.ForEach(obj => obj.Dispose());
        Assert.False(disposableObjects.Any(obj => obj.IsDisposed == false));
    }
}
As you can see, I fill some dependency container with all the objects I care about that implement IDisposable. I might have to mock them or do other things, but that is not the concern of the test. Ultimately it is only concerned with validating that when something is disposed it should in fact be disposed.

Linq2Sql static DataContext in DAL

I am new to LINQ to SQL. And my question is simple.
Is it a good idea to have DataContext as public static member in DAL to act as singleton?
It is not really a good idea to keep DataContext as a singleton. For a small application you might not see any consequences, but in a web application with many concurrent users it will lead to a memory leak. Why?
DataContext basically implements Unit of Work behind the scenes, with an internal cache to track changes to entities and avoid round trips to the database within one business transaction. Keeping the DataContext alive for a long time as a static means that internal cache keeps growing and is never released properly.
A DataContext should be scoped to one business transaction and released as soon as possible. Best practice for a web application is to create one DataContext per request. You can also make use of an IoC container; most IoC containers support this scope.
I have also experienced one thing while using a shared DataContext in a DAL. Suppose there are two users, A and B. If user A starts a transaction, then user B can commit changes made by user A, which is a side effect of using a static DataContext.
I generally try to group functionality together for a Data Access class and make that class IDisposable.
Then you Create your DataContext in your constructor and in your dispose method you run your .dispose() call on the DataContext.
So then when you need something from that class you can wrap it in a using statement, and make a bunch of calls all using the same DataContext.
It's pretty much the same effect as using a Static DataContext, but means you don't forget to close down the connection, and it seems a bit more OO than making things static.
public class MyDataAccessClass : IDisposable
{
    private readonly DbDataContext _dbContext;

    public MyDataAccessClass()
    {
        _dbContext = new DbDataContext();
    }

    public void Dispose()
    {
        _dbContext.Dispose();
    }

    public List<CoolData> GetStuff()
    {
        var d = _dbContext.CallStuff();
        return d;
    }
}
Then in your calling code:
using (var d = new MyDataAccessClass())
{
    // Make lots of calls to different methods of d here and you'll reuse your DataContext
}
I recommend you read about the 'unit of work' pattern, e.g. http://stuartharris4.blogspot.com/2008/06/working-together-linq-to-sql.html
You most definitely should not have a static DataContext in a multi-threaded application such as ASP.NET. The MSDN documentation for DataContext states that:
Any instance members are not guaranteed to be thread safe.

Is it a code smell for one method to depend on another?

I am refactoring a class so that the code is testable (using NUnit and RhinoMocks as the testing and isolation frameworks) and have found myself with a method that is dependent on another (i.e. it depends on something which is created by that other method). Something like the following:
public class Impersonator
{
    private ImpersonationContext _context;

    public void Impersonate()
    {
        ...
        _context = GetContext();
        ...
    }

    public void UndoImpersonation()
    {
        if (_context != null)
            _someDepend.Undo();
    }
}
Which means that to test UndoImpersonation, I need to set it up by calling Impersonate (Impersonate already has several unit tests to verify its behaviour). This smells bad to me but in some sense it makes sense from the point of view of the code that calls into this class:
public void ExerciseClassToTest(Impersonator c)
{
    try
    {
        if (NeedImpersonation())
        {
            c.Impersonate();
        }
        ...
    }
    finally
    {
        c.UndoImpersonation();
    }
}
I wouldn't have worked this out if I didn't try to write a unit test for UndoImpersonation and found myself having to set up the test by calling the other public method. So, is this a bad smell and if so how can I work around it?
Code smell has got to be one of the most vague terms I have ever encountered in the programming world. For a group of people that pride themselves on engineering principles, it ranks right up there in terms of unmeasurable rubbish, and about as useless a measure, as LOCs per day for programmer efficiency.
Anyway, that's my rant, thanks for listening :-)
To answer your specific question, I don't believe this is a problem. If you test something that has pre-conditions, you need to ensure the pre-conditions have been set up first for the given test case.
One of the tests should be what happens when you call it without first setting up the pre-conditions - it should either fail gracefully or set up its own pre-condition if the caller hasn't bothered to do so.
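For the Impersonator from the question, a hedged sketch of that pair of tests might look like this (NUnit syntax; the exact observable effect of UndoImpersonation is an assumption you would pin down in the final assert):

```csharp
[Test]
public void UndoImpersonation_without_prior_Impersonate_does_nothing()
{
    var impersonator = new Impersonator();

    // No Impersonate() call first: the null check on _context should
    // make this a harmless no-op rather than a NullReferenceException.
    Assert.DoesNotThrow(() => impersonator.UndoImpersonation());
}

[Test]
public void UndoImpersonation_after_Impersonate_undoes_the_context()
{
    var impersonator = new Impersonator();
    impersonator.Impersonate();   // the pre-condition, set up explicitly

    impersonator.UndoImpersonation();
    // ... assert on whatever observable effect Undo() has ...
}
```

Calling Impersonate in the second test is not a smell in itself; it is simply the Arrange step for that test case.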
Well, there is a bit too little context to tell; it looks like _someDepend should be initialized in the constructor.
Initializing fields in an instance method is a big NO for me. A class should be fully usable (i.e. all methods work) as soon as it is constructed; so the constructor(s) should initialize all instance variables. See e.g. the page on single step construction in Ward Cunningham's wiki.
The reason initializing fields in an instance method is bad is mainly that it imposes an implicit ordering on how you can call methods. In your case, TheMethodIWantToTest will do different things depending on whether DoStuff was called first. This is generally not something a user of your class would expect, so it's bad :-(.
That said, sometimes this kind of coupling may be unavoidable (e.g. if one method acquires a resource such as a file handle, and another method is needed to release it). But even that should be handled within one method if possible.
What applies to your case is hard to tell without more context.
Provided you don't consider mutable objects a code smell by themselves, having to put an object into the state needed for a test is simply part of the set-up for that test.
This is often unavoidable, for instance when working with remote connections - you have to call Open() before you can call Close(), and you don't want Open() to automatically happen in the constructor.
However you want to be very careful when doing this that the pattern is something readily understood - for instance I think most users accept this kind of behaviour for anything transactional, but might be surprised when they encounter DoStuff() and TheMethodIWantToTest() (whatever they're really called).
It's normally best practice to have a property that represents the current state - again look at remote or DB connections for an example of a consistently understood design.
The big no-no is for this to ever happen for properties. Properties should never care what order they are called in. If you have a simple value that does depend on the order of methods then it should be a parameterless method instead of a property-get.
Yes, I think there is a code smell in this case. Not because of dependencies between methods, but because of the vague identity of the object. Rather than having an Impersonator which can be in different persona states, why not have an immutable Persona?
If you need a different Persona, just create a new one rather than changing the state of an existing object. If you need to do some cleanup afterwards, make Persona disposable. You can keep the Impersonator class as a factory:
using (var persona = impersonator.createPersona(...))
{
    // do something with the persona
}
To answer the title: having methods call each other (chaining) is unavoidable in object oriented programming, so in my view there is nothing wrong with testing a method that calls another. A unit test can be a class after all, it's a "unit" you're testing.
The level of chaining depends on the design of your object - you can either fork or cascade.
Forking:
classToTest1.SomeDependency.DoSomething()
Cascading:
classToTest1.DoSomething() (which internally would call SomeDependency.DoSomething)
But as others have mentioned, definitely keep your state initialisation in the constructor which from what I can tell, will probably solve your issue.
