Does anything described below exist?
Hi, I'm a C# and JavaScript programmer. When creating tests, the pain point for me is creating the test dependencies, especially when I am making assertions against values that I expect to find in the database.
I know that writing tests that call the database is considered bad practice, since many database calls can slow down the entire test suite. The alternative is that we as developers must hand-build large, sometimes complicated mock objects containing the values the database would otherwise return.
Instead, I would like to write my tests against an actual database, and have my test runner or testing framework record the object returned from the database. The framework would then replace the dependency on the database with an automatically created stub object for all subsequent runs of that test.
Essentially, the database would only be hit the very first time a test is run; from that point forward, the data retrieved on the first pass would be used as the stub or mock object.
This would entirely eliminate the need to ever manually create an object purely for testing.
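As far as I know there is no off-the-shelf framework that does exactly this, but the record-and-replay idea can be sketched by hand. Below is a minimal, purely illustrative decorator; IOrderRepository, Order and the JSON snapshot file are all invented for the example:

using System.IO;
using System.Text.Json;

// Invented types for the example.
public class Order { public int Id { get; set; } public decimal Total { get; set; } }

public interface IOrderRepository
{
    Order GetById(int id);
}

// Hypothetical record-and-replay decorator: hits the real database-backed
// repository the first time, then replays the serialized result on later runs.
public class RecordingOrderRepository : IOrderRepository
{
    private readonly IOrderRepository _real;
    private readonly string _snapshotPath;

    public RecordingOrderRepository(IOrderRepository real, string snapshotPath)
    {
        _real = real;
        _snapshotPath = snapshotPath;
    }

    public Order GetById(int id)
    {
        if (File.Exists(_snapshotPath))
        {
            // Replay: reuse the data captured on the first pass.
            return JsonSerializer.Deserialize<Order>(File.ReadAllText(_snapshotPath));
        }

        // Record: hit the real database once and persist the result for next time.
        var order = _real.GetById(id);
        File.WriteAllText(_snapshotPath, JsonSerializer.Serialize(order));
        return order;
    }
}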
You could use AutoFixture to create the data.
It does have some support for data annotations, but you'd probably still need to tweak it extensively to fit your particular database schema.
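For instance, typical AutoFixture usage looks something like this (Customer and Order are placeholders for whatever your schema maps to):

using AutoFixture;

// AutoFixture fills in every property automatically, so you don't build test data by hand.
var fixture = new Fixture();

// Fully populated anonymous object (Customer is a placeholder for one of your entities).
var customer = fixture.Create<Customer>();

// Override only the properties the test actually cares about.
var order = fixture.Build<Order>()
                   .With(o => o.Status, "Pending")
                   .Create();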
I'm trying to write unit tests for a query. Inside this query I call a database function, which is implemented as a CTE (Common Table Expression). For testing purposes I want to mock this CTE.
My question is: how do I mock this database function in entity-framework-core?
Long story short - you can't. You can only mock the C# side of the function, i.e. the data it is supposed to return.
Testing code that uses EF Core
It is very important to understand that EF Core is not designed to abstract every aspect of the underlying database system. Instead, EF Core is a common set of patterns and concepts that can be used with any database system.
BUT - Depending on the type of tests you are doing:
1. Running queries and updates against the same database system used in production.
2. Running queries and updates against some other, easier to manage database system.
3. Using test doubles or some other mechanism to avoid using a database at all.
You could set up the data (for options 1 or 2 - for the 3rd it's irrelevant) so that your function returns the proper data. You can always do that for the 1st choice, but not always for the 2nd (as the easier-to-manage database will not always support the same functions).
Maybe with some code and a bit more context, we can help more.
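If the third option (avoiding the database entirely) is acceptable, one common workaround is to hide the query that calls the database function behind your own interface and mock that interface instead. A rough sketch, with invented names such as ICustomerReportQueries:

using System.Collections.Generic;

// Invented names: the production implementation of this interface runs the real
// EF Core query that invokes the database function/CTE; the unit test mocks it.
public class CustomerReportRow { }

public interface ICustomerReportQueries
{
    List<CustomerReportRow> GetReport(int customerId);
}

public class ReportService
{
    private readonly ICustomerReportQueries _queries;

    public ReportService(ICustomerReportQueries queries) => _queries = queries;

    public int CountRows(int customerId) => _queries.GetReport(customerId).Count;
}

// In a unit test (Moq shown as one possibility):
// var queries = new Mock<ICustomerReportQueries>();
// queries.Setup(q => q.GetReport(42)).Returns(new List<CustomerReportRow> { new CustomerReportRow() });
// Assert.AreEqual(1, new ReportService(queries.Object).CountRows(42));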
I am trying to write a unit test for a method that saves multiple records to a database. The method is passed a collection, loops through it, sets the user id associated with each object, and updates the record in the database. This is done for every object in the collection. Any idea how to create a unit test for this other than having it write to the database?
Thanks,
As mentioned in the comments, you have the option to abstract the database operations behind an interface. If you use an ORM, you can implement generic repositories; if you use plain ADO.NET, you can implement something like a transaction script.
Another option would be to use an SQLite in-memory database. It is not clear what database interface you are using, but SQLite is supported by the majority of data access methods in .NET, including Entity Framework. This would not exactly be a unit test, but it does the job.
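Assuming you are on EF Core, wiring up an in-memory SQLite database typically looks something like this (AppDbContext stands in for your own context type):

using Microsoft.Data.Sqlite;
using Microsoft.EntityFrameworkCore;
using NUnit.Framework;

[Test]
public void Saves_records_using_in_memory_sqlite()
{
    // The in-memory database lives only as long as this connection stays open.
    using var connection = new SqliteConnection("DataSource=:memory:");
    connection.Open();

    var options = new DbContextOptionsBuilder<AppDbContext>()
        .UseSqlite(connection)
        .Options;

    using var context = new AppDbContext(options);
    context.Database.EnsureCreated();   // build the schema from the model

    // ...run the method under test against this context, then assert on the saved data.
}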
As has been suggested in the comments, you have two choices:
1. Create an abstraction for the actual writing to the database and verify that the interactions with that abstraction are what you would expect for your method. This gives you fast unit tests, but you will still have to write integration tests for the implementation of your abstraction, which actually puts data in the database. To verify the interactions you can either use a mocking library or create an implementation of the interface just for testing with (see the sketch after this answer).
2. Create an integration test that writes to the database and verify that the data is inserted as you would expect. These tests will be slower, but will give you confidence that the data will actually be placed in the database.
My preference is for the second option, as this tests that the data will actually be persisted correctly, something you are going to have to do eventually, but not everyone likes to do this.
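For the first option, a test with a mocking library such as Moq might look roughly like the sketch below. Everything here (Record, IRecordRepository, RecordService, SaveAll) is an invented stand-in for the code the question describes:

using System.Collections.Generic;
using Moq;
using NUnit.Framework;

// Invented stand-ins for the types the question describes.
public class Record { public int UserId { get; set; } }

public interface IRecordRepository
{
    void Update(Record record);
}

public class RecordService
{
    private readonly IRecordRepository _repository;
    public RecordService(IRecordRepository repository) => _repository = repository;

    // The kind of method under test: set the user id on each record and update it.
    public void SaveAll(IEnumerable<Record> records, int userId)
    {
        foreach (var record in records)
        {
            record.UserId = userId;
            _repository.Update(record);
        }
    }
}

[TestFixture]
public class RecordServiceTests
{
    [Test]
    public void SaveAll_updates_every_record_with_the_user_id()
    {
        var repository = new Mock<IRecordRepository>();
        var service = new RecordService(repository.Object);
        var records = new List<Record> { new Record(), new Record() };

        service.SaveAll(records, userId: 7);

        // Verify the interaction instead of hitting a real database.
        repository.Verify(r => r.Update(It.Is<Record>(x => x.UserId == 7)),
                          Times.Exactly(records.Count));
    }
}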
I have an EAV system that stores entities in a SQL database, fetches them out, and stores them in a cache. The application is written using the repository pattern because at some point in the future we will probably switch to a NoSQL database for serving some or all of the data. I use Ninject to resolve the correct repository at runtime.
A large part of the system's functionality is around storing, retrieving and querying data in an efficient and timely manner. There is not a huge amount of functionality that doesn't fall into the realm of data access or user interface.
I've read up on unit testing - I understand the theory but haven't put it into practice yet for a few reasons:
An entity consists of fieldsets, fields, values, each of which has many properties. Creating any large number of these in code in order to test would require a lot of effort.
Some of the most crucial parts of my code are in the repositories. For instance all of the data access goes through a single highly optimised method that fetches entities from the database or cache.
Using a test database feels like I'm breaking one of the key tenets of unit testing - no external dependencies.
In addition, the way the repositories are built feels tied to how the data is stored in SQL: entities go in one table, fields in another, values in another, and so on, so I have a repository for each. It is my understanding, though, that in a document-store database the entity, its fields, and its values would all exist as a single object, removing the need for multiple repositories. I've considered making my data access more granular in order to move sections of code outside of the repository, but this would compound the problem by forcing me to write the repository interfaces in a way that is designed around retrieving data from SQL.
Question: Based on the above, should I accept that I cannot write unit tests for large parts of my code and just test the things I can?
should I accept that I cannot write unit tests for large parts of my code?
No, you shouldn't accept that. In fact, this is never the case - with enough effort, you can unit test pretty much anything.
Your problem boils down to this: your code relies upon a database, but you cannot use it, because it is an external dependency. You can address this problem by using mock objects - special objects constructed inside your unit test code that present themselves as implementations of database interfaces, and feed your program the data that is required to complete a particular unit test. When your program sends requests to these objects, your unit test code can verify that the requests are correct. When your program expects a particular response, your unit tests give it the response as required by your unit test scenario.
Mocking may be non-trivial, especially in situations when requests and responses are complex. Several libraries exist to help you out with this in .NET, making the task of coding your mock objects almost independent of the structure of the real object. However, the real complexity is often in the behavior of the system that you are mocking - in your case, that's the database. The effort of coding up this complexity is entirely on you, and it does consume a very considerable portion of your coding time.
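As a concrete illustration of the hand-rolled variant, a fake repository backed by a dictionary can stand in for the SQL implementation. Entity and IEntityRepository below are assumed, minimal versions of whatever your real interfaces look like:

using System.Collections.Generic;

// Invented minimal entity/interface; the real ones would mirror this question's repositories.
public class Entity { public int Id { get; set; } }

public interface IEntityRepository
{
    void Save(Entity entity);
    Entity GetById(int id);
}

// A hand-rolled test double: same interface as the SQL-backed repository,
// but everything stays in memory so unit tests never touch the database.
public class FakeEntityRepository : IEntityRepository
{
    private readonly Dictionary<int, Entity> _store = new Dictionary<int, Entity>();

    public void Save(Entity entity) => _store[entity.Id] = entity;

    public Entity GetById(int id) =>
        _store.TryGetValue(id, out var entity) ? entity : null;
}

// In a test, inject FakeEntityRepository (directly or through the Ninject bindings)
// wherever production code would normally receive the SQL-backed repository.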
It appears that when you say "unit test", you really mean "integration test", because in a unit-test world there is no database. If you expect to get data from or insert data into an external resource, you just fake it (using mocks, stubs, fakes, spies, etc.).
should I accept that I cannot write unit tests for large parts of my code and just test the things I can?
Hard to tell without seeing your code, but it sounds like you can easily unit test it, based on your use of interfaces and the repository pattern. As long as a unit test is independent of other tests, tests only a single piece of functionality, is small and simple, and does not depend on any external resources - you are good to go.
Do not confuse this with integration and other types of testing. Those may involve real data and may be a bit trickier to write.
If you're using the Repository pattern properly, testing is easy, because:
The business layer knows ONLY about the repository interface, which deals ONLY with objects known by that layer and doesn't expose ANYTHING related to the actual db (like EF). Here is where you use fakes implementing the repository interface.
Testing the db access means you're testing the repository implementation: you put test objects in and get them out. It's natural for the repository to be coupled to the db.
A test for the repo should look something like this:
[Test]
public void add_get_object()
{
    var entity = CreateSomeTestEntity();
    _repo.Save(entity);

    var loaded = _repo.Get(entity.Id);

    // assert those 2 entities are identical, but NOT using reference equality:
    // either make the entity implement IEquatable<Entity> or test each property manually
    Assert.AreEqual(entity, loaded);
}
These repo tests can be reused with ANY repository implementation (in-memory, EF, RavenDB, etc.), because the implementation shouldn't matter; what matters is that the repo does what it's required to do (saving/loading business objects).
I've been writing some integration tests recently against ASP.Net MVC controller actions and have been frustrated by the difficulty in setting up the test data that needs to be present in order to run the test.
For example, I want to test the "add", "edit" and "delete" actions of a controller. I can write the "add" test fine, but then find that to write the "edit" test I am either going to have to call the code of the "add" test to create a record so that I can edit it, or do a lot of setup in the test class, neither of which is particularly appealing.
Ideally I want to use or develop an integration test framework that makes it easier to add seed data in a reusable way for integration tests, so that the arrange aspect of an arrange/act/assert test can focus on arranging what I specifically need for my test, rather than concerning itself with arranging a load of reference data only indirectly related to the code under test.
I happen to be using NHibernate, but I believe any data seeding functionality should be oblivious to that and be able to manipulate the database directly; the ORM may change, but I will always be using a SQL database.
I'm using NUnit, so I envisage hooking into the test/test-fixture setup/teardown (but I think a good solution would potentially be transferable to other test frameworks).
I'm using FluentMigrator in my main project to manage schema and reference-data seeding, so it would be nice, but not essential, to be able to use FluentMigrator for a consistent approach across the solution.
So my question is, "How do you seed your database data for integration testing in C#?" Do you execute the SQL directly? Do you use a framework?
You can run your integration tests against SQL Server Compact: you will have an .sdf file, and you can connect to it by giving the file's path as the connection string. That is faster and easier to set up and work with.
Your integration tests will probably not need millions of rows of data. You can insert your test data into the database and save it as TestDbOriginal.sdf.
When you run your tests, just make a copy of this 'TestDbOriginal.sdf' and work on that copy, which is already seeded with data. If you want to test a specific scenario, you can prepare the data further by calling methods like add, remove, or edit.
When you move to production or performance testing, switch back to your original server version, be it SQL Server 2008 or whatever.
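For example, an NUnit fixture could restore a fresh copy of the seeded file before each test; the class name, file names and connection string below are just placeholders:

using System.IO;
using NUnit.Framework;

[TestFixture]
public class ControllerIntegrationTests
{
    private string _connectionString;

    [SetUp]
    public void CopySeededDatabase()
    {
        // Start every test from the same seeded snapshot.
        File.Copy("TestDbOriginal.sdf", "TestDb.sdf", overwrite: true);
        _connectionString = "Data Source=TestDb.sdf";
    }
}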
I don't know if it's necessarily the 'right' thing to do, but I've always seeded using my add/create method(s).
I am unit testing some CRUD operations. My questions are:
1) I need to test the Add, Get, and Delete methods, and the persistence layer is a database. Since I need a test object to Get and Delete, should I combine all three of these into one [TestMethod], or separate them into three methods and re-add the object before the Get and Delete tests?
Ideally you should have individual tests for each case.
You should use some sort of mocking - either via a framework or by setting up the database yourself - to set the initial conditions for each test.
So to test add you would start with a blank database and then add some new data, try to add the same data again (it should fail), add incomplete data etc.
Then to test get and delete you would start with a pre-populated database and perform the various tests you need.
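A rough skeleton of those separate NUnit tests might look like this; the test names are made up and the database wiring is left as comments because it depends on your data access code:

using NUnit.Framework;

[TestFixture]
public class ProductCrudTests
{
    [SetUp]
    public void ResetDatabase()
    {
        // Put the database back into known initial conditions before every test:
        // blank for the Add tests, pre-populated for the Get and Delete tests.
    }

    [Test]
    public void Add_inserts_a_new_record() { /* start blank, add, assert it exists */ }

    [Test]
    public void Add_rejects_a_duplicate_record() { /* add the same data twice, expect failure */ }

    [Test]
    public void Get_returns_an_existing_record() { /* start pre-populated, get, assert values */ }

    [Test]
    public void Delete_removes_an_existing_record() { /* start pre-populated, delete, assert gone */ }
}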
I'd generally make a separate test. If I'm testing a "get"-type method, the test setup would insert the object I expect to get (generally by way of some mock framework); it just wouldn't be asserted against in the same way the actual get would.
This does mean that if the add implementation breaks, both the tests for the get and the add will fail in one way or another, assuming proper coverage. But that's kind of what you want anyhow, right?
If you are writing your own ORM to handle CRUD, I suggest you put each action in a separate test. Don't create big tests that have many points of failure and many reasons to change, because they will make your test project hard to maintain. Test each feature separately.
Now, if you are using some third-party ORM to deal with CRUD, you should not test the tool at all, unless you don't trust it. But in that case, you should find a better alternative. :)
You can write some acceptance tests to check that everything is working; at that point you really will hit the database.
Whatever makes testing easier for you :) As long as you get a result stating which method passed/failed, it should be good.
Hello :) I'd separate them and test individually, setting up cleanly before and cleaning up after each.