I'm running integration tests with xUnit in ASP.NET, and one of them ensures that querying the data multiple times results in only one query to the server. If I run this test alone, it works. If I run all the tests, the server is queried 0 times instead of 1 by this test, which indicates that the result was already in the cache because of other tests.
How can I ensure the IAppCache is empty at the beginning of the test? I'm using LazyCache implementation.
My guess is that the class instance is recreated for each test, but static data is shared, and the cache is static. I don't see any "flush" method on the cache.
As mentioned in my OP comment, LazyCache AFAIK doesn't have a clear operation or anything native to nuke the cache. However, I think there are a few options at your disposal:
1. Implement a method before/after each test to remove the cache entries, using Remove.
2. Supply a different LazyCache cache provider for the tests that doesn't persist the cache between tests.
3. Dig into LazyCache, get the underlying cache provider, and see if it has any methods to clear the cache entries.
Option 1 or 3 would be my picks. From a testing perspective, option 1 means you need to know the internals of what you're testing. If it were me (I'm a bit lazy), I'd probably write the few lines to just nuke the cache.
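A minimal sketch of option 1 as an xUnit constructor/Dispose pair. The key names here are hypothetical; substitute whatever keys your code under test actually writes:

```csharp
using System;
using LazyCache;
using Xunit;

public class CachedQueryTests : IDisposable
{
    // Hypothetical keys; use the ones your code under test actually caches under.
    private static readonly string[] KnownCacheKeys = { "clients", "orders" };

    private readonly IAppCache _cache = new CachingService();

    public CachedQueryTests()
    {
        // xUnit runs the constructor before every test, so evicting the known
        // entries here guarantees each test starts with a cold cache, even
        // though the underlying MemoryCache may be shared across tests.
        foreach (var key in KnownCacheKeys)
            _cache.Remove(key);
    }

    public void Dispose()
    {
        foreach (var key in KnownCacheKeys)
            _cache.Remove(key);
    }
}
```

The downside, as noted above, is that the test class must know every key the code under test uses.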
By default LazyCache uses MemoryCache as the cache provider. MemoryCache doesn't have an explicit clear operation either but Compact looks like it can essentially clear the cache when the compact percentage is set to 1.0. To access it, you'll need to get the underlying MemoryCache object from LazyCache:
IAppCache cache = new CachingService();
var cacheProvider = cache.CacheProvider;
var memoryCache = (MemoryCache)cacheProvider.GetType().GetField("cache", BindingFlags.Instance | BindingFlags.NonPublic).GetValue(cacheProvider);
memoryCache.Compact(1.0);
Complete LINQPad working sample:
void Main()
{
IAppCache cache = new CachingService();
Console.WriteLine(cache.GetOrAdd("Foo", () => Foo.NewFoo, DateTimeOffset.Now.AddHours(1.0)));
var cacheProvider = cache.CacheProvider;
var memoryCache = (MemoryCache)cacheProvider.GetType().GetField("cache", BindingFlags.Instance | BindingFlags.NonPublic).GetValue(cacheProvider);
memoryCache.Compact(1.0);
Console.WriteLine(cache.Get<Foo>("Foo"));
}
public class Foo
{
public static Foo NewFoo
{
get
{
Console.WriteLine("Factory invoked");
return new Foo();
}
}
public override string ToString()
{
return "I am a foo";
}
}
Each run, this prints "Factory invoked" followed by "I am a foo" for the first read; after Compact(1.0), the final Get<Foo>("Foo") returns null, so nothing is printed. If I remove the compact invocation, the final Get returns the still-cached instance and prints "I am a foo" again. So that shows that Compact(1.0) will nuke the cache entry even with an expiry date of +1 hour.
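To reuse this from many tests, the reflection dance can live in a small helper. A sketch; note that the private field name "cache" is an internal detail of LazyCache's default MemoryCacheProvider and could change between versions:

```csharp
using System;
using System.Reflection;
using LazyCache;
using Microsoft.Extensions.Caching.Memory;

public static class CacheTestHelper
{
    // Evicts everything by compacting 100% of the underlying MemoryCache.
    public static void Clear(IAppCache cache)
    {
        var provider = cache.CacheProvider;
        var field = provider.GetType().GetField(
            "cache", BindingFlags.Instance | BindingFlags.NonPublic);
        var memoryCache = (MemoryCache?)field?.GetValue(provider)
            ?? throw new InvalidOperationException(
                "Underlying MemoryCache not found; provider internals changed?");
        memoryCache.Compact(1.0);
    }
}
```

Calling CacheTestHelper.Clear(cache) from each test class constructor gives every test an empty cache.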
I have the following DatabaseFixture which has worked well for all tests I have created up to this point. I use this fixture for integration tests so I can make real assertions on database schema structures.
public class DatabaseFixture : IDisposable
{
public IDbConnection Connection => _connection.Value;
private readonly Lazy<IDbConnection> _connection;
public DatabaseFixture()
{
var environment = Environment.GetEnvironmentVariable("ASPNET_ENVIRONMENT") ?? "Development";
var configuration = new ConfigurationBuilder()
.SetBasePath(Directory.GetCurrentDirectory())
.AddJsonFile("AppSettings.json", optional: false, reloadOnChange: true)
.AddJsonFile($"AppSettings.{environment}.json", optional: true, reloadOnChange: true)
.Build();
_connection = new Lazy<IDbConnection>(() =>
{
var connection = new MySqlConnection(configuration["ConnectionStrings:MyDatabase"]);
connection.Open();
return connection;
});
}
public void Dispose()
{
Connection?.Dispose();
}
}
[CollectionDefinition("Database Connection Required")]
public class DatabaseConnectionFixtureCollection : ICollectionFixture<DatabaseFixture>
{
}
The problem I am facing is I now need to invoke a test method like MyDataIsAccurate(...) with each record from a table in the database. xUnit offers the [MemberData] attribute which is exactly what I need but it requires a static enumerable set of data. Does xUnit offer a clean way of sharing my DatabaseFixture connection instance statically or do I just need to suck it up and expose a static variable of the same connection instance?
[Collection("Database Connection Required")]
public class MyTests
{
protected DatabaseFixture Database { get; }
// ERROR: Can't access instance of DatabaseFixture from static context...
public static IEnumerable<object[]> MyData => Database.Connection.Query("SELECT * FROM table").ToList();
public MyTests(DatabaseFixture databaseFixture)
{
Database = databaseFixture;
}
[Theory]
[IntegrationTest]
[MemberData(nameof(MyData))]
public void MyDataIsAccurate(int value1, string value2, string value3)
{
// Assert some stuff about the data...
}
}
You cannot access the fixture from the code that provides the test cases (whether that is a MemberData property, a ClassData implementation, or a custom DataAttribute subclass).
Reason
Xunit creates an AppDomain containing all the data for the test cases. It builds up this AppDomain with all of that data at the time of test discovery. That is, the IEnumerable<object[]>s are sitting in memory in the Xunit process after the test assembly is built, and they are sitting there just waiting for the tests to be run. This is what enables different test cases to show up as different tests in Test Explorer in Visual Studio. Even if it's a MemberData-based Theory, those separate test cases show up as separate tests, because Xunit has already run that code, and the AppDomain is standing by waiting for the tests to be run.

On the other hand, fixtures (whether class fixtures or collection fixtures) are not created until the test run has started (you can verify this by setting a breakpoint in the constructor of your fixture and seeing when it is hit). This is because they are meant to hold things like database connections that shouldn't be left alive in memory for long periods when they don't need to be.

Therefore, you cannot access the fixture at the time the test case data is created, because the fixture has not been created yet.
If I were to speculate, I would guess that the designers of Xunit did this intentionally and would have made it this way even if the test-discovery-loads-the-test-cases-and-therefore-must-come-first thing was not an issue. The goal of Xunit is not to be a convenient testing tool. It is to promote TDD, and a TDD-based approach would allow anyone to pick up the solution with only their local dev tools and run and pass the same set of tests that everyone else is running, without needing certain records containing test case data to be pre-loaded in a local database.
Note that I'm not trying to say that you shouldn't do what you're trying, only that I think the designers of Xunit would tell you that your test cases and fixtures should populate the database, not the other way around. I think it's at least worth considering whether that approach would work for you.
Workaround #1
Your static database connection may work, but it may have unintended consequences. That is, if the data in your database changes after the test discovery is done (read: after Xunit has built up the test cases) but before the test itself is run, your tests will still be run with the old data. In some cases, even building the project again is not enough--it must be cleaned or rebuilt in order for test discovery to be run again and the test cases be updated.
Furthermore, this would kind of defeat the point of using an Xunit fixture in the first place. When Xunit disposes the fixture, you are left with the choice to either: dispose the static database connection (but then it will be gone when you run the tests again, because Xunit won't necessarily build up a new AppDomain for the next run), or do nothing, in which case it might as well be a static singleton on some service locator class in your test assembly.
Workaround #2
You could parameterize the test with data that allows it to go to the fixture and retrieve the test data. This has the disadvantage that you don't get the separate test cases listed as separate tests in either test explorer or your output as you would hope for with a Theory, but it does load the data at the time of the tests instead of at setup and therefore defeats the "old data" problem as well as the connection lifetime problem.
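A sketch of this workaround applied to the original example: [MemberData] supplies only static, discovery-time-safe identifiers (the ID list and WHERE clause here are hypothetical), and the row itself is fetched through the fixture at run time:

```csharp
using System.Collections.Generic;
using Dapper;
using Xunit;

[Collection("Database Connection Required")]
public class MyTests
{
    private readonly DatabaseFixture _database;

    public MyTests(DatabaseFixture database) => _database = database;

    // Static and fixture-free, so test discovery can enumerate it safely.
    public static IEnumerable<object[]> RecordIds =>
        new[] { new object[] { 1 }, new object[] { 2 }, new object[] { 3 } };

    [Theory]
    [MemberData(nameof(RecordIds))]
    public void MyDataIsAccurate(int id)
    {
        // The actual record is loaded at test-run time via the fixture,
        // so it is never stale and the connection lifetime stays fixture-managed.
        var row = _database.Connection.QuerySingle(
            "SELECT * FROM table WHERE id = @id", new { id });

        Assert.NotNull(row);
    }
}
```

The trade-off is that the test explorer labels the cases by ID rather than by their data.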
Summary
I don't think such a thing exists in Xunit. As far as I know, your options are: have the test data populate the database instead of the other way around, or use a never-disposed static singleton database connection, or pull the data in your test itself. None of these are the "clean" solution you were hoping for, but I doubt you'll be able to get much better than one of these.
There is a way of achieving what you want, using delegates. This extremely simple example explains it quite well:
using System;
using System.Collections.Generic;
using Xunit;
namespace YourNamespace
{
public class XUnitDeferredMemberDataFixture
{
private static string testCase1;
private static string testCase2;
public XUnitDeferredMemberDataFixture()
{
// You would populate these from somewhere that's possible only at test-run time, such as a db
testCase1 = "Test case 1";
testCase2 = "Test case 2";
}
public static IEnumerable<object[]> TestCases
{
get
{
// For each test case, return a human-readable string, which is immediately available
// and a delegate that will return the value when the test case is run.
yield return new object[] { "Test case 1", new Func<string>(() => testCase1) };
yield return new object[] { "Test case 2", new Func<string>(() => testCase2) };
}
}
[Theory]
[MemberData(nameof(TestCases))]
public void Can_do_the_expected_thing(
string ignoredTestCaseName, // Not used; useful as this shows up in your test runner as human-readable text
Func<string> testCase) // Your test runner will show this as "Func`1 { Method = System.String.... }"
{
Assert.NotNull(testCase);
// Do the rest of your test with "testCase" string.
}
}
}
In the OP's case, you could access the database in the XUnitDeferredMemberDataFixture constructor.
I have a very simple piece of code, but what it does is completely weird. It is a simple cache abstraction and goes like this:
public class CacheAbstraction
{
private MemoryCache _cache;
public CacheAbstraction()
{
_cache = new MemoryCache(new MemoryCacheOptions { });
}
public async Task<T> GetItemAsync<T>(TimeSpan duration, Func<Task<T>> factory,
[CallerMemberName] string identifier = null ) where T : class
{
return await _cache.GetOrCreateAsync<T>(identifier, async x =>
{
x.SetAbsoluteExpiration(DateTime.UtcNow.Add(duration));
return await factory();
});
}
}
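For reference, a call site for this abstraction might look like the following; because of [CallerMemberName], the cache key defaults to the calling method's name. The service and values here are made up:

```csharp
using System;
using System.Threading.Tasks;

public class ForecastService
{
    private readonly CacheAbstraction _cache = new CacheAbstraction();

    public Task<string> GetForecastAsync()
    {
        // First call runs the factory; later calls within the hour should hit
        // the cache under the key "GetForecastAsync".
        return _cache.GetItemAsync(TimeSpan.FromHours(1),
            () => Task.FromResult("sunny"));
    }
}
```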
Now the fun part: I'm passing expiration durations of 1h - 1d
If I run it in a test suite, everything is fine.
If I run it as a .net core app, the expiration is always set to "now" and the item expires on the next cache check. WTF!?
I know it's been two years, but I ran across this same problem (cache items seeming to expire instantly) recently and found a possible cause. Two essentially undocumented features in MemoryCache: linked cache entries and options propagation.
This allows a child cache entry object to passively propagate its options up to a parent cache entry when the child goes out of scope. This is done via IDisposable, which ICacheEntry implements and which MemoryCache uses internally in extension methods like Set() and GetOrCreate/Async(). What this means is that if you have "nested" cache operations, the inner ones will propagate their cache entry options to the outer ones, including cancellation tokens, expiration callbacks, and expiration times.
In my case, we were using GetOrCreateAsync() and a factory method that made use of a library which did its own caching using the same injected IMemoryCache. For example:
public async Task<Foo> GetFooAsync() {
return await _cache.GetOrCreateAsync("cacheKey", async c => {
c.AbsoluteExpirationRelativeToNow = TimeSpan.FromHours(1);
return await _library.DoSomething();
});
}
The library uses IMemoryCache internally (the same instance, injected via DI) to cache results for a few seconds, essentially doing this:
_cache.Set(queryKey, queryResult, TimeSpan.FromSeconds(5));
Because GetOrCreateAsync() is implemented by creating a CacheEntry inside a using block, the effect is that the 5 second expiration used by the library propagates up to the parent cache entry in GetFooAsync(), resulting in the Foo object always only being cached for 5 seconds instead of 1 hour, effectively expiring it immediately.
DotNet Fiddle showing this behavior: https://dotnetfiddle.net/fo14BT
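A condensed version of what the fiddle demonstrates (this assumes a runtime where linked-entry tracking is on, i.e. before the opt-out mentioned below existed):

```csharp
using System;
using Microsoft.Extensions.Caching.Memory;

var cache = new MemoryCache(new MemoryCacheOptions());

var value = cache.GetOrCreate("outer", outerEntry =>
{
    outerEntry.AbsoluteExpirationRelativeToNow = TimeSpan.FromHours(1);

    // Simulates the library's internal caching on the same IMemoryCache.
    // Because this runs while the "outer" entry is still open, the 5-second
    // expiration propagates up to "outer" when the inner entry is disposed.
    cache.Set("inner", 42, TimeSpan.FromSeconds(5));

    return "meant to live 1h, actually lives 5s";
});
```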
You can avoid this propagation behavior in a few ways:
(1) Use TryGetValue() and Set() instead of GetOrCreateAsync()
if (_cache.TryGetValue("cacheKey", out Foo result))
return result;
result = await _library.DoSomething();
return _cache.Set("cacheKey", result, TimeSpan.FromHours(1));
(2) Assign the cache entry options after invoking the other code that may also use the cache
return await _cache.GetOrCreateAsync("cacheKey", async c => {
var result = await _library.DoSomething();
// set expiration *after*
c.AbsoluteExpiration = DateTime.Now.AddHours(1);
return result;
});
(and since GetOrCreate/Async() does not prevent reentrancy, the two are effectively the same from a concurrency standpoint).
Warning: Even then it's easy to get wrong. If you try to use AbsoluteExpirationRelativeToNow in option (2) it won't work because setting that property doesn't remove the AbsoluteExpiration value if it exists resulting in both properties having a value in the CacheEntry, and AbsoluteExpiration is honored before the relative.
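One way to make the relative form work in option (2) is to clear any propagated absolute expiration first. A sketch, with _cache and _library as in the earlier snippets:

```csharp
return await _cache.GetOrCreateAsync("cacheKey", async c =>
{
    var result = await _library.DoSomething();

    // Clear whatever AbsoluteExpiration the inner cache call propagated up;
    // otherwise it wins over AbsoluteExpirationRelativeToNow.
    c.AbsoluteExpiration = null;
    c.AbsoluteExpirationRelativeToNow = TimeSpan.FromHours(1);

    return result;
});
```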
For the future, Microsoft has added a feature to control this behavior via a new property MemoryCacheOptions.TrackLinkedCacheEntries, but it won't be available until .NET 7. Without this future feature, I haven't been able to think of a way for libraries to prevent propagation, aside from using a different MemoryCache instance.
I'm having a bit of trouble with the time it takes EF to pull some entities. The entity in question has a boatload of props that live in 1 table, but it also has a handful of ICollection's that relate to other tables. I've abandoned the idea of loading the entire object graph as it's way too much data and instead will have my Silverlight client send out a new request to my WCF service as details are needed.
After slimming down to 1 table's worth of stuff, it's taking roughly 8 seconds to pull the data, then another 1 second to .ToList() it up (I expect this to be < 1 second). I'm using the stopwatch class to take measurements. When I run the SQL query in SQL management studio, it takes only a fraction of a second so I'm pretty sure the SQL statement itself isn't the problem.
Here is how I am trying to query my data:
public List<ComputerEntity> FindClientHardware(string client)
{
long time1 = 0;
long time2 = 0;
var stopwatch = System.Diagnostics.Stopwatch.StartNew();
// query construction always takes about 8 seconds, give or take a few ms.
var entities =
DbSet.Where(x => x.CompanyEntity.Name == client); // .AsNoTracking() has no impact on performance
//.Include(x => x.CompanyEntity)
//.Include(x => x.NetworkAdapterEntities) // <-- using these 4 includes has no impact on SQL performance, but faster to make lists without these
//.Include(x => x.PrinterEntities) // I've also abandoned the idea of using these as I don't want the entire object graph (although it would be nice)
//.Include(x => x.WSUSSoftwareEntities)
//var entities = Find(x => x.CompanyEntity.Name == client); // <-- another test, no impact on performance, same execution time
stopwatch.Stop();
time1 = stopwatch.ElapsedMilliseconds;
stopwatch.Restart();
var listify = entities.ToList(); // 1 second with the 1 table, over 5 seconds if I use all the includes.
stopwatch.Stop();
time2 = stopwatch.ElapsedMilliseconds;
var showmethesql = entities.ToString();
return listify;
}
I'm assuming that using the .Include means eager loading, although it isn't relevant in my current case as I just want the 1 table's worth of stuff. The SQL generated by this statement (which executes super fast in SSMS) is:
SELECT
[Extent1].[AssetID] AS [AssetID],
[Extent1].[ClientID] AS [ClientID],
[Extent1].[Hostname] AS [Hostname],
[Extent1].[ServiceTag] AS [ServiceTag],
[Extent1].[Manufacturer] AS [Manufacturer],
[Extent1].[Model] AS [Model],
[Extent1].[OperatingSystem] AS [OperatingSystem],
[Extent1].[OperatingSystemBits] AS [OperatingSystemBits],
[Extent1].[OperatingSystemServicePack] AS [OperatingSystemServicePack],
[Extent1].[CurrentUser] AS [CurrentUser],
[Extent1].[DomainRole] AS [DomainRole],
[Extent1].[Processor] AS [Processor],
[Extent1].[Memory] AS [Memory],
[Extent1].[Video] AS [Video],
[Extent1].[IsLaptop] AS [IsLaptop],
[Extent1].[SubnetMask] AS [SubnetMask],
[Extent1].[WINSserver] AS [WINSserver],
[Extent1].[MACaddress] AS [MACaddress],
[Extent1].[DNSservers] AS [DNSservers],
[Extent1].[FirstSeen] AS [FirstSeen],
[Extent1].[IPv4] AS [IPv4],
[Extent1].[IPv6] AS [IPv6],
[Extent1].[PrimaryUser] AS [PrimaryUser],
[Extent1].[Domain] AS [Domain],
[Extent1].[CheckinTime] AS [CheckinTime],
[Extent1].[ActiveComputer] AS [ActiveComputer],
[Extent1].[NetworkAdapterDescription] AS [NetworkAdapterDescription],
[Extent1].[DHCP] AS [DHCP]
FROM
[dbo].[Inventory_Base] AS [Extent1]
INNER JOIN [dbo].[Entity_Company] AS [Extent2]
ON [Extent1].[ClientID] = [Extent2].[ClientID]
WHERE
[Extent2].[CompanyName] = @p__linq__0
Which is basically a select all columns in this table, join a second table that has a company name, and filter with a where clause of companyname == input value to the method. The particular company I'm pulling only returns 75 records.
Disabling object tracking with .AsNoTracking() has zero impact on execution time.
I also gave the Find method a go, and it had the exact same execution time. The next thing I tried was to pregenerate the views in case the issue was there. I am using code first, so I used the EF power tools to do this.
This long period of time to run this query causes too long of a delay for my users. When I hand write the SQL code and don't touch EF, it is super quick. Any ideas as to what I'm missing?
Also, maybe related or not, but since I'm doing this in WCF which is stateless I assume absolutely nothing gets cached? The way I think about it is that every new call is a firing up this WCF service library for the first time, therefore there is no pre-existing cache. Is this an accurate assumption?
Update 1
So I ran this query twice within the same unit test to check out the cold/warm query thing. The first query is horrible as expected, but the 2nd one is lightning fast clocking in at 350ms for the whole thing. Since WCF is stateless, is every single call to my WCF service going to be treated as this first ugly-slow query? Still need to figure out how to get this first query to not suck.
Update 2
You know those pre-generated views I mentioned earlier? Well... I don't think they are being hit. I put a few breakpoints in the autogenerated-by-EF-powertools ReportingDbContext.Views.cs file, and they never get hit. This coupled with the cold/warm query performance I see, this sounds like it could be meaningful. Is there a particular way I need to pregenerate views with the EF power tools in a code first environment?
Got it! The core problem was the whole cold query thing. How do you get around the cold query issue? By making a query. This will "warm up" Entity Framework so that subsequent query compilation is much faster. My pre-generated views did nothing to help with the query I was compiling in this question, but they do seem to work if I want to dump an entire table to an array (a bad thing). Since I am using WCF, which is stateless, will I have to "warm up" EF for every single call? Nope! Since EF lives in the app domain and not the context, I just need to do my warm-up on the init of the service. For dev purposes I self-host, but in production it lives in IIS.
To do the query warm up, I made a service behavior that takes care of this for me. Create your behavior class as such:
using System;
using System.Collections.ObjectModel;
using System.ServiceModel;
using System.ServiceModel.Channels; // for those without resharper, here are the "usings"
using System.ServiceModel.Description;
public class InitializationBehavior : Attribute, IServiceBehavior
{
public InitializationBehavior()
{
}
public void Validate(ServiceDescription serviceDescription, ServiceHostBase serviceHostBase)
{
}
public void AddBindingParameters(ServiceDescription serviceDescription, ServiceHostBase serviceHostBase, Collection<ServiceEndpoint> endpoints,
BindingParameterCollection bindingParameters)
{
Bootstrapper.WarmUpEF();
}
public void ApplyDispatchBehavior(ServiceDescription serviceDescription, ServiceHostBase serviceHostBase)
{
}
}
I then used this to do the warmup:
public static class Bootstrapper
{
public static int initialized = 0;
public static void WarmUpEF()
{
using (var context = new ReportingDbContext())
{
context.Database.Initialize(false);
}
initialized = 9999; // I'll explain this
}
}
This SO question helped with the warmup code:
How do I initialize my Entity Framework queries to speed them up?
You then slap this behavior on your WCF service like so:
[InitializationBehavior]
public class InventoryService : IInventoryService
{
// implement your service
}
I launched my services project in debug mode, which in turn fired up the initialization behavior. After spamming the method that makes the query referenced in my question, my breakpoint in the behavior wasn't hit again (other than when I first self-hosted the service). I verified this by checking the static initialized variable. I then published this bad boy into IIS with my verification int, and it had the exact same behavior.
So, in short, if you are using Entity Framework 5 with a WCF service and don't want a crappy first query, warm it up with a service behavior. There are probably other/better ways of doing this, but this way works too!
edit:
If you are using NUnit and want to warm up EF for your unit tests, setup your test as such:
[TestFixture]
public class InventoryTests
{
[SetUp]
public void Init()
{
// warm up EF.
using (var context = new ReportingDbContext())
{
context.Database.Initialize(false);
}
// init other stuff
}
// tests go here
}
I am new to NHibernate and even newer to MOQ (or other similar frameworks). After searching online day and night (google + stackoverflow + others), I am turning here for help.
The scenario is (should be) simple. I am trying to unit test a call on a C# WCF service that uses NHibernate as the ORM layer. The method, after doing some initial work, finds a database to connect to, and then calls on the SessionProvider (a manager of session factories) to return a nhibernate session for a sharded DB. I am then trying to use ISession.Get<>() to retrieve an object from the database aand then do some work. The problem is that the GUID (the key for the entry that I am looking up in the db) is generated at the begining of the call and I have no way of knowing what it might be beforehand outside the scope of the WCF call. Hence, I cannot use sqllite or other techniques to pre-populate the necessary data to control the test. What I was hoping for was that I can somehow mock (inject a fake layer to?) the call to Session.Get to return an invalid object which should cause the WCF call to throw.
Here's the test code snippet:
var testRequest = ... (request DTO)
var dummyBadObject = ... (entity in DB)
var mock = new Mock<ISession>(MockBehavior.Strict);
mock.Setup(m => m.Get<SampleObject>(It.IsAny<Guid>())).Returns(dummyBadObject);
var exception = Assert.Throws<FaultException>(() => applicationService.SomeMethod(testRequest));
Assert.AreEqual(exception.Code.ToString(), SystemErrorFault.Code.ToString());
When I run this test, instead of interacting with the mock ISession object, the app service code calls the Get on the actual ISession object from the session factory, connects to the database and gets the right object. Seems like I am missing something very basic about mocks or injection. Any help will be appreciated.
Thanks,
Shawn
Based on our comments, the problem is that mocks are completely different from how you thought of them.
They don't magically intercept the creation of classes that implement an interface. They are just dynamic implementations of that interface.
Creating a Mock<ISession> is not much different from creating a class that implements ISession. You still have to inject it in the services that depend on it.
You'll probably have to review your whole stack, as the capability to do this depends on a good decoupled design.
Suggested read: Inversion of control
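Concretely for the OP's case: the service has to receive its ISession (or a session provider) from outside instead of resolving it internally, so the test can hand it the mock. The class shape below is illustrative, not the OP's actual code:

```csharp
using System;
using NHibernate;

public class ApplicationService
{
    private readonly ISession _session;

    // The session is injected, so a test can pass new Mock<ISession>().Object.
    public ApplicationService(ISession session)
    {
        _session = session;
    }

    public SampleObject SomeMethod(Guid id)
    {
        // This Get<> now hits whatever ISession was injected, real or mocked.
        return _session.Get<SampleObject>(id);
    }
}
```

With this shape, the Setup on Mock<ISession> in the question's snippet takes effect, because the service no longer reaches for the real session factory.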
I re-designed the components in my application to have a ServiceContext object which in turn holds all the other (what used to be static) components that the application uses: in this case, the session provider (or the ISessionFactory cache) and, similarly, a WCF channel factory cache. The difference is that the ServiceContext provides methods to override the default instances of the different components, allowing me to replace them with mock ones for testing and to restore the originals when testing is done. This has allowed me to create a test where I mock everything from the session cache down to ISession.Get/Save/Load etc.
var mockDatabaseSessionFactory = new Mock<DatabaseSessionManager>(MockBehavior.Strict);
var mockSession = new Mock<ISession>(MockBehavior.Strict);
var mockTransaction = new Mock<ITransaction>(MockBehavior.Strict);
mockDatabaseSessionFactory.Setup(x => x.GetIndividualMapDbSession()).Returns(mockSession.Object);
mockDatabaseSessionFactory.Setup(x => x.GetIndividualDbSession(It.IsAny<UInt32>())).Returns(mockSession.Object);
mockDatabaseSessionFactory.Setup(x => x.Dispose());
mockSession.Setup(x => x.BeginTransaction()).Returns(mockTransaction.Object);
mockSession.Setup(x => x.Dispose());
mockTransaction.Setup(x => x.Commit());
mockTransaction.Setup(x => x.Dispose());
// Setups to allow for the map insertion/deletion to pass
mockSession.Setup(x => x.Get<IndividualMap>(It.IsAny<string>())).Returns((IndividualMap)null);
mockSession.Setup(x => x.Load<IndividualMap>(It.IsAny<string>())).Returns((IndividualMap)null);
mockSession.Setup(x => x.Save(It.IsAny<IndividualMap>())).Returns(new object());
mockSession.Setup(x => x.Delete(It.IsAny<IndividualMap>()));
// Our test condition for this test: throw on attempt to save individual
mockSession.Setup(x => x.Save(It.IsAny<Individual>()))
.Throws(new FaultException(ForcedTestFault.Reason, ForcedTestFault.Code));
// Test it - but be sure to back up the previous database session factory
var originalDbSessionFactory = ServiceContext.DatabaseSessionManager;
ServiceContext.OverrideDatabaseSessionManager(mockDatabaseSessionFactory.Object);
try
{
var exception = Assert.Throws<FaultException>(() => applicationService.AddIndividual(addIndividualRequest));
Assert.IsTrue(ForcedTestFault.Code.Name.Equals(exception.Code.Name));
}
catch (Exception)
{
// Restore the original database session factory before rethrowing
ServiceContext.OverrideDatabaseSessionManager(originalDbSessionFactory);
throw;
}
ServiceContext.OverrideDatabaseSessionManager(originalDbSessionFactory);
ServiceContext.CommunicationManager.CloseChannel(applicationService);
Luckily the code design wasn't too bad o_O :) so I refactored this bit easily and now code coverage is at 100%! Thanks Diego for nudging me in the right direction.