I am trying to learn how to mock my generic repository so I can unit test all my services.
I'm using Fluent NHibernate to handle data access and Ninject for dependency injection (I'm not interested in testing that).
My repository interface looks like:
public interface IRepository<TEntity> where TEntity : class
{
    IQueryable<TEntity> GetAll();
    TEntity Get(int key);
    void Insert(TEntity entity);
    void Update(TEntity entity);
    void Delete(int id);
}
And the actual repository looks like:
public class GenerRepository<TEntity> : IRepository<TEntity> where TEntity : Entity
{
    protected ISession Session { get { return NHibernateHelper.OpenSession(); } }

    public IQueryable<TEntity> GetAll() { return Session.Query<TEntity>(); }
    public TEntity Get(int key) { return Session.Get<TEntity>(key); }
    public void Insert(TEntity entity) { Session.Save(entity); }
    public void Update(TEntity entity) { Session.Update(entity); }
    public void Delete(int id) { Session.Delete(Session.Load<TEntity>(id)); }
}
All my services follow the same pattern: they take the created repository in and use it.
I've read so many articles on how to do this, but none are simple or well explained. So I'd appreciate any advice, whether on creating a test generic repository or on mocking it. I would also be interested in using an in-memory database, but how do I set up the Fluent NHibernate configuration in my test project without editing code in my real project?
Is it possible just to make the generic repository hit a list of TEntity rather than the database or an in-memory database?
Thanks for reading and look forward to the advice.
My answer should/could be a comment, maybe, because I would like to tell you: do not do it. Do not waste your time creating a fake of the data to be returned from persistence. And do not invest your time in taking the data from a client and putting it into some virtual DB in memory.
You need to be sure that your services (consuming the repository) can really serialize/render the real data, and deserialize/persist the changed data. And that really requires real data.
Rather, spend some time creating scripts which will populate the test data: the data you can expect in your tests when doing business validation, service data serialization...
Also take a look here: Ayende: NHibernate Unit Testing. An extract:
When using NHibernate we generally want to test only three things,
that properties are persisted, that cascade works as expected and that
queries return the correct result. In order to do all of those, we
generally have to talk to a real database, trying to fake any of those
at this level is futile and going to be very complicated.
A note: some time ago, we used to wrap all the tests in a Transaction Begin() and Rollback(), which looked good. But we realized that, because of the missing Flush() call, a lot of stuff (e.g. setting not-null) was not tested all the way down.
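The begin/rollback pattern with an explicit Flush() might look like the following sketch (NUnit-style; the Product entity is a hypothetical example, and NHibernateHelper is the helper from the question above):

```csharp
using NHibernate;
using NHibernate.Exceptions;
using NUnit.Framework;

[TestFixture]
public class ProductPersistenceTests
{
    [Test]
    public void Not_null_constraint_is_actually_enforced()
    {
        using (ISession session = NHibernateHelper.OpenSession())
        using (ITransaction tx = session.BeginTransaction())
        {
            // Product is a placeholder for one of your own mapped entities.
            session.Save(new Product { Name = null });

            // Without this Flush() the INSERT never reaches the database,
            // so a NOT NULL violation would go unnoticed.
            Assert.Throws<GenericADOException>(() => session.Flush());

            tx.Rollback(); // leave the database untouched
        }
    }
}
```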
I have to agree with Radim that unit testing NHibernate code by mocking the NHibernate functionality is, in most cases, not what you want to do.
If you want to test complex business logic which is based on data you retrieve via NHibernate, then mocking is perfectly fine.
But to test whether your mappings, data retrieval and persistence work, you have to test against a real database.
If you target MS SQL Server, I would not use another type of database. Instead, there is SQL Server Express, which has all the features of the real server.
SQL Server Express can optionally be installed with LocalDB. This allows you to attach .mdf files via the connection string, which more or less instantiates an instance of SQL Server on demand...
I used that for integration testing and it works really nicely.
1. Create a database file in your unit test project.
2. Depending on your model (code first/db first), let NHibernate create the schema; otherwise simply deploy the schema into that database file.
3. Add the file to the deployment items of your test settings so that the file gets copied to the test target directory.
4. Generate a connection string which uses the copied database file.
Example connection string: Data Source=(LocalDB)\v11.0;AttachDbFileName=[whateverthepathis]\DatabaseFileName.mdf;Initial Catalog=DatabaseName;Integrated Security=True;MultipleActiveResultSets=True
5. Run your tests.
This way your tests will run with an empty database every time, and you will have reproducible integration tests without the need for a real server where you would have to create a DB or reset it every time...
There are a couple of ways to achieve this:
Use a real database for testing, with scripts to set up and revert the database. With this approach it takes time and effort to create and maintain these scripts whenever there are changes to the database.
Use a real database, and use a transaction scope for testing (start the transaction, persist, do the test, and once all is done roll back the transaction). This is a really good approach and I use it for a large-scale project. One problem with it, however, is that it takes a lot of time to run the tests (I have around 3500 tests and it takes a total of 40 minutes to run them all).
Use fake repositories (holding an internal list of entities) for business logic tests, and use the actual repositories to verify the mappings. This approach requires additional effort to create and maintain the fake repositories, but the same tests executed on the actual repositories can be executed on the fakes to verify that the fakes are working. With this approach test execution is faster.
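A fake repository of the third kind can be as simple as a list-backed implementation of the IRepository<TEntity> interface from the question (a sketch; it assumes your base Entity class exposes an int Id property):

```csharp
using System.Collections.Generic;
using System.Linq;

// List-backed fake of the question's IRepository<TEntity>.
// Assumes TEntity exposes an int Id property via the base Entity class.
public class FakeRepository<TEntity> : IRepository<TEntity> where TEntity : Entity
{
    private readonly List<TEntity> _items = new List<TEntity>();

    public IQueryable<TEntity> GetAll() { return _items.AsQueryable(); }
    public TEntity Get(int key) { return _items.FirstOrDefault(e => e.Id == key); }
    public void Insert(TEntity entity) { _items.Add(entity); }
    public void Update(TEntity entity)
    {
        // Replace the stored instance with the updated one.
        _items.RemoveAll(e => e.Id == entity.Id);
        _items.Add(entity);
    }
    public void Delete(int id) { _items.RemoveAll(e => e.Id == id); }
}
```

A service under test can then be constructed with a FakeRepository<T> instead of the NHibernate-backed one, so no database is needed for the business logic tests.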
I'm trying to create some unit tests for my project. After much digging around I found Effort. The idea is great: it mocks the database instead of dealing with faking the DbContext, which by the way is really hard to get right when using a complex schema.
However, I'm trying to get the email of a user after I specifically added it to the in-memory database created by Effort. Here is the code:
MyContext contextx = new MyContext(Effort.DbConnectionFactory.CreateTransient());

var client = new Client
{
    ClientId = 2,
    PersonId = 3,
    Person = new Person
    {
        PersonId = 3,
        EMail = "xxxxx@gmail.com"
    }
};

contextx.Client.Add(client); // <-- client got added, I checked it and it's there

var email = contextx.Client.Select(c => c.Person.EMail).FirstOrDefault();
In the last line above I can't make it return the email xxxxx@gmail.com; instead it always returns null.
Any ideas?
Answering Your Direct Question
For the specific question you asked, I would suggest two things:
Take a look at contextx.Client.ToArray() and see how many members you really have in that collection. It could be that the Client collection is actually empty, in which case you'll indeed get null. Or, it could be that the first element in the Client collection has a null value for EMail.
How does the behavior change if you call contextx.SaveChanges() before querying the Client collection on the DbContext? I'm curious to see if calling SaveChanges will cause the newly inserted value to exist in the collection. This really shouldn't be required, but there might be some strange interaction between Effort and the DbContext.
EDIT: SaveChanges() turns out to be the answer.
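In code, the fix from the edit amounts to persisting the pending insert before querying (a sketch reusing the setup from the question above):

```csharp
// Same setup as in the question: an Effort-backed in-memory DbContext,
// with the client object built as shown above.
MyContext contextx = new MyContext(Effort.DbConnectionFactory.CreateTransient());

contextx.Client.Add(client);

// Until SaveChanges() runs, the Add() above is only a pending change
// tracked by the DbContext; a LINQ query against contextx.Client goes
// to the (still empty) Effort store and therefore returns null.
contextx.SaveChanges();

var email = contextx.Client.Select(c => c.Person.EMail).FirstOrDefault();
```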
General Testing Suggestions
Since you tagged this question with the "unit-testing" tag, I'll offer some general unit testing advice based on my ten years spent as a unit testing practitioner and coach. Unit testing is about testing various small parts of your application in isolation. Typically this means that unit tests only interact with a few classes at once. This also means that unit tests should not depend on external libraries or dependencies (such as the database). Conversely, an integration test exercises more parts of the system at once and may have external dependencies on things like databases.
While this may seem like a quibble over terminology, the terms are important for conveying the actual intent of your tests to other members of your team.
In this case, either you really want to unit test some piece of functionality that happens to depend on DbContext, or you are attempting to test your data access layer. If you're trying to write an isolated unit test of something that depends on the DbContext directly, then you need to break the dependency on the DbContext. I'll explain this in Breaking the Dependency on DbContext below. Otherwise, you're really trying to integration test your DbContext, including how your entities are mapped. In this case, I've always found it best to isolate these tests and use a real (local) database. You probably want to use a locally installed database of the same variety you're using in production; often, SqlExpress works just fine. Point your tests at an instance of the database that the tests can completely trash. Let your tests remove any existing data before running each test. Then, they can set up whatever data they need without concern that existing data will conflict.
Breaking the Dependency on DbContext
So then, how do you write good unit tests when your business logic depends on accessing DbContext? You don't.
In my applications that use Entity Framework for data persistence, I make sure access to the DbContext is contained within a separate data access project. Typically, I will create classes that implement the Repository pattern and those classes are allowed to take a dependency on DbContext. So, in this case, I would create a ClientRepository that implements an IClientRepository interface. The interface would look something like this:
public interface IClientRepository {
    Client GetClientByEMail(string email);
}
Then, any classes that need access to the method can be unit tested using a basic stub / mock / whatever. Nothing has to worry about mocking out DbContext. Your data access layer is contained, and you can test it thoroughly using a real database. For some suggestions on how to test your data access layer, see above.
As an added benefit, the implementation of this interface defines what it means to find a Client by email address in a single, unified place. The IClientRepository interface allows you to quickly answer the question, "How do we query for Client entities in our system?"
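As a sketch, a class that depends only on IClientRepository can then be unit tested with a hand-rolled stub; WelcomeMailService and its method are hypothetical names standing in for your own code:

```csharp
// Hand-rolled stub of the interface above; no DbContext involved.
public class StubClientRepository : IClientRepository
{
    private readonly Client _client;
    public StubClientRepository(Client client) { _client = client; }
    public Client GetClientByEMail(string email) { return _client; }
}

// A hypothetical service that depends only on the interface.
public class WelcomeMailService
{
    private readonly IClientRepository _repository;
    public WelcomeMailService(IClientRepository repository) { _repository = repository; }

    public bool IsKnownClient(string email)
    {
        return _repository.GetClientByEMail(email) != null;
    }
}

// In a test, the stub stands in for the whole data access layer:
//   var service = new WelcomeMailService(new StubClientRepository(null));
//   service.IsKnownClient("x@y.com") returns false.
```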
Taking a dependency on DbContext is roughly the same scale of a testing problem as allowing domain classes to take a dependency on the connection string and having ADO.Net code everywhere. It means that you have to create a real data store (even with a fake db) with real data in it. But, if you contain your access to the DbContext within a specific data access assembly, you'll find that your unit tests are much easier to write.
As far as project organization, I typically only allow my data access project to take a reference to Entity Framework. I'll have a separate Core project in which I define the entities. I'll also define the data access interfaces in the Core project. Then, the concrete interface implementations get put into the data access project. Most of the projects in your solution can then simply take a dependency on the Core project, and only the top level executable or web project really needs to depend on the data access project.
I'm working on a fairly complex multi-tiered application and I'd like to mock the data source for one of the layers as it's very difficult for it to get to the database much of the time. (Some of it doesn't even exist yet.)
What I'd like to be able to do is set a flag in one of the web services to have it use the mocked data source instead of the database connection. I'm just going to put data in xml files. I've successfully used moq in unit tests but it seems I can't make that mental leap to where I can replace the injected database with the mock at run time.
The Stack:
VS 2013
.Net 4.5.1
Ninject
Entity Framework 5?
SQL Server 2012
Several attached databases that are called via stored procedures in SQL Server
Moq 2.x
WCF
Web API 2
Rather than set a flag, why not use an interface and pass that in? E.g. IDataSource.
Your web service, for example, takes an IDataSource as part of its construction.
Then, your mock can implement the interface and you can pass that in, rather than a real implementation of IDataSource. Similarly, your real database would implement the interface, too...
public class MoqDataSource : IDataSource
{
    ...
}

public class RealDatabase : IDataSource
{
    ...
}
As for replacing the real data source, at run time, you could use some kind of factory class that returns an IDataSource, and then use any number of methods to decide what the factory returns.
E.g. the factory reads some config file, and depending on what you've set there, it either returns a real data source, or the moq...
public class DataSourceFactory
{
    public static IDataSource CreateDataSource()
    {
        if (/* are we using the real data source? */)
        {
            return new RealDatabase();
        }
        else
        {
            return new MoqDataSource();
        }
    }
}
It doesn't matter whether you call it a factory or something else... it's just one way of encapsulating the creation of an IDataSource. Only the factory class needs to be concerned with what type of IDataSource you want to create, the rest of the application doesn't have to worry about it.
IMHO, the easiest way might be to use a file into which you can put the dependency information. This way, you can switch between the mock and the actual data source with a simple XML file change (you could even drive that using different build targets, etc., but that's going beyond the scope of your original question).
In order to integrate an xml file for Ninject to consume, you will need an extension:
https://github.com/ninject/ninject.extensions.xml
HTH...
I am trying to write a unit test for a method that saves multiple records into a database. The method is passed a collection argument and loops through the collection. For every object in the collection, the user id associated with the object is set and the record is updated in the database. Any ideas on how to create a unit test besides having it write to the database?
Thanks,
As mentioned in the comments, you have the option of abstracting the database operations behind an interface. If you use an ORM, you can implement generic repositories; if you use plain ADO.NET, you can implement something like a transaction script.
Another option would be to use an SQLite in-memory database. It is not clear which DB interface you are using, but SQLite is supported by the majority of database access methods in .NET, including Entity Framework. This would not exactly be a unit test, but it does the job.
As has been suggested in the comments, you have two choices:
Create an abstraction for the actual writing to the database, and verify that your method's interactions with that abstraction are as you would expect. This will give you fast unit tests, but you will still have to write integration tests for the implementation of the abstraction which actually puts data in the database. To verify the interactions you can either use a mocking library or create an implementation of the interface just for testing.
Create an integration test that writes to the database and verify that the data is inserted as you would expect. These tests will be slower, but will give you confidence that the data will actually be placed in the database.
My preference is for the second option, as this tests that the data will actually be persisted correctly, something you are going to have to do eventually, but not everyone likes to do this.
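For the first option, a Moq-based interaction test could look like the following sketch (IRecordRepository, Record and RecordSaver are hypothetical names standing in for your own types):

```csharp
using System.Collections.Generic;
using Moq;

// Hypothetical abstraction over the database writes.
public interface IRecordRepository
{
    void Update(Record record);
}

public class Record
{
    public int UserId { get; set; }
}

// The method under test: sets the user id and updates every record.
public class RecordSaver
{
    private readonly IRecordRepository _repository;
    public RecordSaver(IRecordRepository repository) { _repository = repository; }

    public void SaveAll(IEnumerable<Record> records, int userId)
    {
        foreach (var record in records)
        {
            record.UserId = userId;
            _repository.Update(record);
        }
    }
}

// In the test, no database is touched:
//   var repo = new Mock<IRecordRepository>();
//   new RecordSaver(repo.Object).SaveAll(new[] { new Record(), new Record() }, userId: 7);
//   repo.Verify(r => r.Update(It.Is<Record>(x => x.UserId == 7)), Times.Exactly(2));
```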
In my application I have multiple small entity framework dbcontexts which share the same database, for example:
public class Context1 : DbContext {
    public Context1()
        : base("DemoDb") {
    }
}

public class Context2 : DbContext {
    public Context2()
        : base("DemoDb") {
    }
}
All database updates are done via scripts and do not rely on migrations (nor will they going forward). The question is - how would you do integration testing against these contexts?
I believe there are three options here (there may be more that I just don't know about).
Option 1 - Super context - a context which contains all models and configurations required for setting up the database:
public class SuperContext : DbContext
{
    public SuperContext()
        : base("DemoDb") {
    }
}
In this option the test database would be setup against the super context and all subsequent testing would be done through the smaller contexts.
The reason I am not keen on this option is that I would be duplicating all the configurations and entity models that I have already built.
Option 2 - create a custom initialiser for integration tests that will run all the appropriate db initialisation scripts:
public class IntegrationTestInitializer : IDatabaseInitializer<DbContext> {
    public void InitializeDatabase(DbContext context) {
        /* run scripts to set up database here */
    }
}
This option allows for testing against the true database structure, but will also require updating every time new DB scripts are added.
Option 3 - just test the individual contexts:
In this option one would just let EF create the test database based upon the context, and all tests would operate within their own "sandbox".
The reason that I don't like this is that it doesn't feel like you would be testing against a true representation of the database.
I'm currently swaying towards options 2. What do you all think? Is there a better method out there?
I'm using integration testing a lot, because I still think it's the most reliable way of testing when data-dependent processes are involved. I also have a couple of different contexts, and DDL scripts for database upgrades, so our situations are very similar.
What I ended up with was Option 4: maintaining unit test database content through the regular user interface. Of course most integration tests temporarily modify the database content, as part of the "act" phase of the test (more on this "temporary" later), but the content is not set up when the test session starts.
Here's why.
At some stage we also generated database content at the start of the test session, either by code or by deserializing XML files. (We didn't have EF yet, but otherwise we would probably have had some Seed method in a database initializer.) Gradually I started to have misgivings about this approach. It was a hell of a job to maintain the code/XML when the data model or the business logic changed, especially when new use cases had to be devised. Sometimes I allowed myself a minor corruption of these test data, knowing that it would not affect the tests.
Also, the data had to make sense, in that they had to be as valid and coherent as data from the real application. One way to ensure that is to generate the data through the application itself, or else you will inevitably end up duplicating business logic in the seed method. Mocking real-world data is actually very hard. That's the most important thing I found out. Testing data constellations that don't represent real use cases isn't only a waste of time, it's false security.
So I found myself creating the test data through the application's front end and then painstakingly serializing this content into XML, or writing code that would generate exactly the same. Until one day it occurred to me that I had the data readily available in this database, so why not use it directly?
Now maybe you ask: how to make the tests independent?
Integration tests, just as unit tests, should be executable in isolation. They should not depend on other tests, nor should they be affected by them. I assume that the background of your question is that you create and seed a database for each integration test. This is one way to achieve independent tests.
But what if there is only one database, and no seed scripts? You could restore a backup for each test. We chose a different approach. Each integration test runs within a TransactionScope that's never committed. It is very easy to achieve this. Each test fixture inherits from a base class that has these methods (NUnit):
[SetUp]
public void InitTestEnvironment()
{
    SetupTeardown.PerTestSetup();
}

[TearDown]
public void CleanTestEnvironment()
{
    SetupTeardown.PerTestTearDown();
}

and in SetupTeardown:

public static void PerTestSetup()
{
    _tranactionScope = new TransactionScope();
}

public static void PerTestTearDown()
{
    if (_tranactionScope != null)
    {
        _tranactionScope.Dispose(); // Rollback any changes made in a test.
        _tranactionScope = null;
    }
}
where _tranactionScope is a static member variable.
Option 2, or any variation thereof that runs the actual DB update scripts, would be the best. Otherwise, you are not necessarily integration testing against the same database you have in production (with respect to the schema, at least).
In order to address your concern about requiring updating every time new DB scripts are added, if you were to keep all the scripts in a single folder, perhaps within the project with a build action of "copy if newer", you could programmatically read each file and execute the script therein. As long as the place you're reading the files from is your canonical repository for the update scripts, you will never need to go in and make any further changes.
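A minimal sketch of that script-runner idea (the "Scripts" folder name is an assumption, and scripts containing GO batch separators would need to be split first; ExecuteSqlCommand with TransactionalBehavior is EF6):

```csharp
using System;
using System.Data.Entity;
using System.IO;
using System.Linq;

public static class TestDatabaseSetup
{
    // Reads every .sql file from a "Scripts" folder next to the test
    // binaries (build action "Copy if newer") and executes them in
    // file-name order against the test database.
    public static void RunUpdateScripts(DbContext context)
    {
        var scriptsDir = Path.Combine(AppDomain.CurrentDomain.BaseDirectory, "Scripts");
        foreach (var file in Directory.GetFiles(scriptsDir, "*.sql").OrderBy(f => f))
        {
            var sql = File.ReadAllText(file);
            // DoNotEnsureTransaction avoids wrapping DDL statements in an
            // implicit transaction, which some of them do not allow.
            context.Database.ExecuteSqlCommand(
                TransactionalBehavior.DoNotEnsureTransaction, sql);
        }
    }
}
```

The custom IDatabaseInitializer from Option 2 could simply delegate to a method like this.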
I have an EAV system that stores entities in a SQL database, fetches them out and stores them in the cache. The application is written using the repository pattern because at some point in the future we will probably switch to using a NOSQL database for serving some or all of the data. I use Ninject to fetch the correct repository at runtime.
A large part of the system's functionality is around storing, retrieving and querying data in an efficient and timely manner. There is not a huge amount of functionality that doesn't fall into the realm of data access or user interface.
I've read up on unit testing - I understand the theory but haven't put it into practice yet for a few reasons:
An entity consists of fieldsets, fields, values, each of which has many properties. Creating any large number of these in code in order to test would require a lot of effort.
Some of the most crucial parts of my code are in the repositories. For instance all of the data access goes through a single highly optimised method that fetches entities from the database or cache.
Using a test database feels like I'm breaking one of the key tenets of unit testing - no external dependencies.
In addition to this, the way the repositories are built feels tied to how the data is stored in SQL. Entities go in one table, fields in another, values in another, etc., so I have a repository for each. It is my understanding, though, that in a document store database the Entity, its fields and values would all exist as a single object, removing the need for multiple repositories. I've considered making my data access more granular in order to move sections of code outside of the repository, but this would compound the problem by forcing me to write the repository interfaces in a way that is designed for retrieving data from SQL.
Question: Based on the above, should I accept that I cannot write unit tests for large parts of my code and just test the things I can?
should I accept that I cannot write unit tests for large parts of my code?
No, you shouldn't accept that. In fact, this is never the case - with enough effort, you can unit test pretty much anything.
Your problem boils down to this: your code relies upon a database, but you cannot use it, because it is an external dependency. You can address this problem by using mock objects - special objects constructed inside your unit test code that present themselves as implementations of database interfaces, and feed your program the data that is required to complete a particular unit test. When your program sends requests to these objects, your unit test code can verify that the requests are correct. When your program expects a particular response, your unit tests give it the response as required by your unit test scenario.
Mocking may be non-trivial, especially in situations when requests and responses are complex. Several libraries exist to help you out with this in .NET, making the task of coding your mock objects almost independent of the structure of the real object. However, the real complexity is often in the behavior of the system that you are mocking - in your case, that's the database. The effort of coding up this complexity is entirely on you, and it does consume a very considerable portion of your coding time.
It appears that when you say a "unit test", you really mean an "integration test", because in a unit-test world there is no database. If you expect to get or insert some data into an external resource, you just fake it (using mocks, stubs, fakes, spies, etc.).
should I accept that I cannot write unit tests for large parts of my code and just test the things I can?
Hard to tell without seeing your code, but it sounds like you can easily unit test it. This is based on your use of the interfaces and the repository pattern. As long as a unit test is independent from other tests, tests only a single piece of functionality, small, simple, does not depend on any external resources - you are good to go.
Do not confuse this with integration and other types of testing. Those may involve real data and may be a bit trickier to write.
If you're using the proper Repository pattern, testing is easy, because:
The business layer knows ONLY about the repository interface, which deals ONLY with objects known by the layer and doesn't expose ANYTHING related to the actual DB (like EF). Here's where you use fakes implementing the repository interface.
Testing the DB access means you're testing the Repo implementation: you put test objects in and get them out. It's natural for the Repository to be coupled to the DB.
A test for the repo should be something like this:

public void add_get_object()
{
    var entity = CreateSomeTestEntity();
    _repo.Save(entity);

    var loaded = _repo.Get(entity.Id);

    // assert those 2 entities are identical, but NOT by reference:
    // either make the entity implement IEquatable<Entity> or compare each property manually
}
These repo tests can be reused with ANY repository implementation (in-memory, EF, RavenDB, etc.), because the implementation shouldn't matter; what matters is that the repo does what it's required to do (saving/loading business objects).