I'm using Entity Framework Core 2.0-preview1 with InMemory 2.0-preview1.
Each unit test class inherits from a disposable base class that creates a new in-memory database for the inheriting classes to use.
public Constructor()
{
    // Build a dedicated service provider so each test class gets its own
    // DBContext backed by the "Test" in-memory database.
    var services = new ServiceCollection();
    services.AddEntityFrameworkInMemoryDatabase()
        .AddDbContext<DBContext>(o => o.UseInMemoryDatabase("Test"));
    var serviceProvider = services.BuildServiceProvider();
    Context = serviceProvider.GetRequiredService<DBContext>();
}
The issue with giving the database a name is that it cannot be shared across multiple tests, so each test creates a new context, and each unit test ends up taking a few seconds, which is unacceptable for my build server. I can't find much documentation on why this changed in 2.0 or how to get past it.
I've tried using the new .UseTransientInMemoryDatabase, but this appears to change nothing.
I used an xUnit fixture to provide all my test instances with the same database context. That way I avoid the context-creation overhead on each test, which speeds up the build server by a large margin.
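For reference, a minimal sketch of that fixture, reusing the constructor code from above (DatabaseFixture and MyTests are names I made up for illustration):

public class DatabaseFixture : IDisposable
{
    public DBContext Context { get; }

    public DatabaseFixture()
    {
        // Same setup as the constructor above, but built once and shared.
        var services = new ServiceCollection();
        services.AddEntityFrameworkInMemoryDatabase()
            .AddDbContext<DBContext>(o => o.UseInMemoryDatabase("Test"));
        Context = services.BuildServiceProvider().GetRequiredService<DBContext>();
    }

    public void Dispose() => Context.Dispose();
}

public class MyTests : IClassFixture<DatabaseFixture>
{
    private readonly DBContext _context;

    // xUnit creates one DatabaseFixture per test class and injects it here,
    // so every test in the class shares the same context.
    public MyTests(DatabaseFixture fixture) => _context = fixture.Context;
}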
Related
I'm currently writing xUnit tests for a .NET Core application. This is the way I am setting up my DbContext in the Startup:
services.AddDbContext<ApplicationDbContext>(options =>
    options.UseSqlServer(
        Configuration.GetConnectionString("DefaultConnection")));
Now I want the xUnit tests to be executed using SQLite, which would require my Startup class to detect whether the application is running normally or under test, and to set up the DbContext accordingly.
Is there a way to do so? I have been googling a lot, but I have not been able to find a good solution.
I'm trying to understand what you want to do. I am not sure I got it, but I can give some tips.
If you want to test the database in unit tests with EF you can do the following.
Create a context options builder object, using an in-memory database:
var options = new DbContextOptionsBuilder<YourContext>()
    .UseInMemoryDatabase(databaseName: "TestingDb")
    .Options;
Create a context using this options object
var context = new YourContext(options);
And create a repository using the above context.
_repository = new YourRepository(context);
Now use this new repository and you will have a proper in memory database for your tests. Using DbContextOptionsBuilder you can change the target database as you wish. I hope my answer helps you.
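Putting the three steps together, a test could look roughly like this (Customer, Customers and GetByName are placeholders for your own model and repository members):

[Fact]
public void Repository_returns_saved_customer()
{
    var options = new DbContextOptionsBuilder<YourContext>()
        .UseInMemoryDatabase(databaseName: "TestingDb")
        .Options;

    using (var context = new YourContext(options))
    {
        // Arrange: seed the in-memory database directly through the context.
        context.Customers.Add(new Customer { Name = "Alice" });
        context.SaveChanges();

        // Act / Assert: exercise the repository against the same context.
        var repository = new YourRepository(context);
        Assert.NotNull(repository.GetByName("Alice"));
    }
}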
Wanted Result: Run integration tests in parallel with entity framework configured to use some kind of database that enforces constraints.
Current Situation: I have a project with some integration tests, and by that I mean tests set up with WebApplicationFactory<Startup>. Currently I use it with EF Core set up to use the in-memory database:
protected override void ConfigureWebHost(IWebHostBuilder builder)
{
    ConfigureWebHostBuilder(builder);
    builder.ConfigureTestServices(services =>
    {
        var serviceProvider = new ServiceCollection()
            .AddEntityFrameworkInMemoryDatabase()
            .BuildServiceProvider();
        services.AddDbContext<AppDbContext>(options =>
        {
            options.UseInMemoryDatabase("testDb");
            options.UseInternalServiceProvider(serviceProvider);
            options.EnableDetailedErrors();
            options.EnableSensitiveDataLogging();
        });
    });
}
While this works, the problem is that the in-memory provider doesn't enforce relational constraints, and I have already run into multiple cases where the integration tests show that a feature works, but when I try to reproduce it with the actual project running and EF connected to a SQL Server database, the feature fails because of a violated database constraint.
Ideas:
At first I thought of using an SQLite in-memory database. That didn't work; I think it's some concurrency issue, because at the start of every test case I wipe the database with .EnsureDeleted(), .EnsureCreated() and .Migrate() (see the sketch after this list).
Then SQLite with randomly named files. That didn't work either, because I couldn't manage to wipe out the internal DbContext cache. I was sure the file was recreated and clean, but the DbContext simply had cached entities.
Finally, after some migrations, I realized that I probably wouldn't be able to use SQLite at all, because it doesn't support a lot of migration operations (removing a foreign key, altering a column, etc.).
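For reference, the per-test wipe from the first idea looked roughly like this (a sketch; AppDbContext is resolved from the test server's services):

// Runs before each test case.
private static void ResetDatabase(AppDbContext context)
{
    context.Database.EnsureDeleted();  // drop whatever the previous test left behind
    context.Database.EnsureCreated();  // recreate the schema from the current model
    // Note: with a relational provider you would call context.Database.Migrate()
    // instead; EnsureCreated() bypasses migrations, so combining the two conflicts.
}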
I'm trying to create some unit tests for my project. After much digging around I found Effort. The idea is great: it mocks the database instead of faking the DbContext, which by the way is really hard to get right when using a complex schema.
However, I'm trying to get the EMail of a user after I specifically added it to the in-memory database created by Effort. Here is the code:
MyContext contextx = new MyContext(Effort.DbConnectionFactory.CreateTransient());

var client = new Client
{
    ClientId = 2,
    PersonId = 3,
    Person = new Person
    {
        PersonId = 3,
        EMail = "xxxxx#gmail.com"
    }
};

contextx.Client.Add(client); // <-- client got added, I checked it and it's there

var email = contextx.Client.Select(c => c.Person.EMail).FirstOrDefault();
On the last line above I can't get it to return the email xxxxx#gmail.com; instead it always returns null.
Any ideas?
Answering Your Direct Question
For the specific question you asked, I would suggest two things:
Take a look at contextx.Client.ToArray() and see how many members you really have in that collection. It could be that the Client collection is actually empty, in which case you'll indeed get null. Or, it could be that the first element in the Client collection has a null value for EMail.
How does the behavior change if you call contextx.SaveChanges() before querying the Client collection on the DbContext? I'm curious to see if calling SaveChanges will cause the newly inserted value to exist in the collection. This really shouldn't be required, but there might be some strange interaction between Effort and the DbContext.
EDIT: SaveChanges() turns out to be the answer.
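In code, that one extra call is all it takes:

contextx.Client.Add(client);
contextx.SaveChanges(); // flush the pending insert so queries can see it

var email = contextx.Client.Select(c => c.Person.EMail).FirstOrDefault(); // now returns the email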
General Testing Suggestions
Since you tagged this question with "unit-testing", I'll offer some general unit testing advice based on my ten years spent as a unit testing practitioner and coach. Unit testing is about testing various small parts of your application in isolation. Typically this means that unit tests only interact with a few classes at once. It also means that unit tests should not depend on external libraries or dependencies (such as the database). Conversely, an integration test exercises more parts of the system at once and may have external dependencies on things like databases.
While this may seem like a quibble over terminology, the terms are important for conveying the actual intent of your tests to other members of your team.
In this case, either you really want to unit test some piece of functionality that happens to depend on DbContext, or you are attempting to test your data access layer. If you're trying to write an isolated unit test of something that depends on the DbContext directly, then you need to break the dependency on the DbContext. I'll explain this in Breaking the Dependency on DbContext below.

Otherwise, you're really trying to integration test your DbContext, including how your entities are mapped. In this case, I've always found it best to isolate these tests and use a real (local) database. You probably want to use a locally installed database of the same variety you're using in production; often, SqlExpress works just fine. Point your tests at an instance of the database that the tests can completely trash. Let your tests remove any existing data before running each test. Then they can set up whatever data they need without concern that existing data will conflict.
Breaking the Dependency on DbContext
So then, how do you write good unit tests when your business logic depends on accessing DbContext? You don't.
In my applications that use Entity Framework for data persistence, I make sure access to the DbContext is contained within a separate data access project. Typically, I will create classes that implement the Repository pattern and those classes are allowed to take a dependency on DbContext. So, in this case, I would create a ClientRepository that implements an IClientRepository interface. The interface would look something like this:
public interface IClientRepository {
    Client GetClientByEMail(string email);
}
Then, any classes that need access to the method can be unit tested using a basic stub / mock / whatever. Nothing has to worry about mocking out DbContext. Your data access layer is contained, and you can test it thoroughly using a real database. For some suggestions on how to test your data access layer, see above.
As an added benefit, the implementation of this interface defines what it means to find a Client by email address in a single, unified place. The IClientRepository interface allows you to quickly answer the question, "How do we query for Client entities in our system?"
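A concrete implementation might look something like this sketch (MyContext is the context from the question; the exact query depends on your mappings):

// Lives in the data access project, the only project that references Entity Framework.
public class ClientRepository : IClientRepository
{
    private readonly MyContext _context;

    public ClientRepository(MyContext context)
    {
        _context = context;
    }

    public Client GetClientByEMail(string email)
    {
        // The single, unified definition of "find a Client by email".
        return _context.Client.FirstOrDefault(c => c.Person.EMail == email);
    }
}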
Taking a dependency on DbContext is roughly the same scale of testing problem as allowing domain classes to take a dependency on the connection string and having ADO.NET code everywhere. It means that you have to create a real data store (even with a fake database) with real data in it. But if you contain your access to the DbContext within a specific data access assembly, you'll find that your unit tests are much easier to write.
As far as project organization, I typically only allow my data access project to take a reference to Entity Framework. I'll have a separate Core project in which I define the entities. I'll also define the data access interfaces in the Core project. Then, the concrete interface implementations get put into the data access project. Most of the projects in your solution can then simply take a dependency on the Core project, and only the top level executable or web project really needs to depend on the data access project.
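To make the stub suggestion from earlier concrete, here is a sketch of a consumer being unit tested without any database at all (WelcomeMailer is a made-up example class; the test uses xUnit, but any framework works):

// A made-up consumer that depends only on the interface.
public class WelcomeMailer
{
    private readonly IClientRepository _clients;
    public WelcomeMailer(IClientRepository clients) => _clients = clients;
    public bool CanSendTo(string email) => _clients.GetClientByEMail(email) != null;
}

// A hand-rolled stub; a mocking library would do just as well.
public class StubClientRepository : IClientRepository
{
    public Client GetClientByEMail(string email)
        => email == "known@example.com" ? new Client() : null;
}

[Fact]
public void Sends_only_to_known_clients()
{
    var mailer = new WelcomeMailer(new StubClientRepository());
    Assert.True(mailer.CanSendTo("known@example.com"));
    Assert.False(mailer.CanSendTo("unknown@example.com"));
}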
I'm using EF6 and I'm now setting up some tests for my aggregates. I've decided to use Effort.EF6 because I'd like to have those tests run without having to install an entire database engine.
My DbContext uses migrations and a seeding method that inserts some data. Can Effort.EF6 make use of that, or should I use Effort's own methods of seeding data?
The migrations take place automatically. I call the normal context seed method when I need populated data. Note that, depending on the scope of your context (per test, or per test assembly), you may be running lots and lots of queries to do your seeding. That has performance implications, and debugging implications too, since any seeding bugs will start showing up as bugs in your tests, and any logging that happens during seeding will be logged as part of each test.
// A transient Effort connection backs the context with a fresh in-memory database.
var connection = Effort.DbConnectionFactory.CreateTransient();
var context = new DbContext(connection);
context.Seed();
In my application I have multiple small entity framework dbcontexts which share the same database, for example:
public class Context1 : DbContext {
    public Context1()
        : base("DemoDb") {
    }
}

public class Context2 : DbContext {
    public Context2()
        : base("DemoDb") {
    }
}
All database updates are done via scripts and do not rely on migrations (nor will they going forward). The question is - how would you do integration testing against these contexts?
I believe there are three options here (there may be more that I just don't know about):
Option 1 - Super context - a context which contains all models and configurations required for setting up the database:
public class SuperContext : DbContext
{
    public SuperContext()
        : base("DemoDb") {
    }
    // DbSets and configurations for every entity in the system would go here.
}
In this option the test database would be setup against the super context and all subsequent testing would be done through the smaller contexts.
The reason I am not keen on this option is that I would be duplicating all the configurations and entity models that I have already built.
Option 2 - create a custom initialiser for integration tests that will run all the appropriate db initialisation scripts:
public class IntegrationTestInitializer : IDatabaseInitializer<DbContext> {
    public void InitializeDatabase(DbContext context) {
        /* run scripts to set up the database here */
    }
}
This option allows for testing against the true database structure, but it will also require updating every time new DB scripts are added.
Option 3 - just test the individual contexts:
In this option one would just let EF create the test database based upon each context, and all tests would operate within their own "sandbox".
The reason that I don't like this is that it doesn't feel like you would be testing against a true representation of the database.
I'm currently swaying towards option 2. What do you all think? Is there a better method out there?
I'm using integration testing a lot, because I still think it's the most reliable way of testing when data-dependent processes are involved. I also have a couple of different contexts, and DDL scripts for database upgrades, so our situations are very similar.
What I ended up with was Option 4: maintaining unit test database content through the regular user interface. Of course most integration tests temporarily modify the database content, as part of the "act" phase of the test (more on this "temporary" later), but the content is not set up when the test session starts.
Here's why.
At some stage we also generated database content at the start of the test session, either by code or by deserializing XML files. (We didn't have EF yet, but otherwise we would probably have had some Seed method in a database initializer.) Gradually I started to have misgivings about this approach. It was a hell of a job to maintain the code/XML when the data model or the business logic changed, especially when new use cases had to be devised. Sometimes I allowed myself a minor corruption of these test data, knowing that it would not affect the tests.
Also, the data had to make sense, in that it had to be as valid and coherent as data from the real application. One way to ensure that is to generate the data with the application itself, or else you will inevitably end up duplicating business logic in the seed method. Mocking real-world data is actually very hard. That's the most important thing I found out: testing data constellations that don't represent real use cases isn't only a waste of time, it's false security.
So I found myself creating the test data through the application's front end and then painstakingly serializing this content into XML, or writing code that would generate exactly the same data. Until one day it occurred to me that I had the data readily available in this database, so why not use it directly?
Now maybe you ask: how do you make tests independent?
Integration tests, just like unit tests, should be executable in isolation. They should not depend on other tests, nor should they be affected by them. I assume that the background of your question is that you create and seed a database for each integration test. This is one way to achieve independent tests.
But what if there is only one database, and no seed scripts? You could restore a backup for each test. We chose a different approach. Each integration test runs within a TransactionScope that's never committed. It is very easy to achieve this. Each test fixture inherits from a base class that has these methods (NUnit):
[SetUp]
public void InitTestEnvironment()
{
    SetupTeardown.PerTestSetup();
}

[TearDown]
public void CleanTestEnvironment()
{
    SetupTeardown.PerTestTearDown();
}
and in SetupTeardown:
public static void PerTestSetup()
{
    _transactionScope = new TransactionScope();
}

public static void PerTestTearDown()
{
    if (_transactionScope != null)
    {
        _transactionScope.Dispose(); // Roll back any changes made in a test.
        _transactionScope = null;
    }
}
where _transactionScope is a static member variable.
Option 2, or any variation thereof that runs the actual DB update scripts, would be the best. Otherwise you are not necessarily integration testing against the same database you have in production (with respect to the schema, at least).
To address your concern about requiring updates every time new DB scripts are added: if you keep all the scripts in a single folder, perhaps within the project with a build action of "Copy if newer", you can programmatically read each file and execute the script it contains. As long as the place you read the files from is your canonical repository for the update scripts, you will never need to go in and make any further changes.
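A sketch of that script runner, plugged into the Option 2 initializer (it assumes the scripts are copied to a Scripts folder in the output directory and contain no GO batch separators, which ExecuteSqlCommand cannot handle):

public class IntegrationTestInitializer : IDatabaseInitializer<DbContext> {
    public void InitializeDatabase(DbContext context) {
        // The output folder is the canonical source: scripts land there
        // via the "Copy if newer" build action mentioned above.
        var scriptFolder = Path.Combine(AppDomain.CurrentDomain.BaseDirectory, "Scripts");
        foreach (var file in Directory.GetFiles(scriptFolder, "*.sql").OrderBy(f => f)) {
            context.Database.ExecuteSqlCommand(File.ReadAllText(file));
        }
    }
}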