Integration testing in .NET with NHibernate - C#

I've been doing some honest-to-God TDD for my latest project and am loving it. Having previously done unit testing but not serious TDD, I'm finding it very helpful.
Some background on my project:
ASP.NET front end
NHibernate for database interaction with SQL Server 2008 Express
C# 4.0 DLLs for domain logic, and unit tests which are done in NUnit and run through ReSharper
TeamCity CI server running a NAnt build script.
I'm in a sort of 'alpha' release now and am finding that all the bugs are integration bugs, mainly because my integration testing has been manual use of the site plus some minor automated stuff (which I've stopped running). This is pretty poor given how strictly I've nurtured my test suite, and I want to rectify it.
My question is: what is the best way to do integration tests, or are there any articles I can read? I understand that testing the UI is going to be a pain in ASP.NET WebForms (I will move to a more testable framework in the future, but one step at a time), but I want to make sure my interactions with NHibernate are tested correctly.
So I need some tips on integration testing in relation to:
NHibernate (caching, sessions, etc.)
Test data: I've found NDbUnit; is that what I should be looking at using to get my data into a good state? Is it compatible with NHibernate?
Should I swap the database out for SQLite or something? Or just set up another SQL Server database which holds test data only?
How can I make these tests maintainable? I had a few integration tests, but they caused me hassle and I found myself avoiding them. I think this was mainly due to me not setting up a consistent state each time.
Just some general advice would be great too. I've read TDD by Example by Kent Beck and The Art of Unit Testing by Roy Osherove, and they were great for unit testing/TDD, but I would love to read a little more about integration tests and strategies for writing them (what you should test, etc.).

Concerning the Database part:
- You may use it directly along the lines of this article: Unit testing with built-in NHibernate support.
- Using SQLite to speed up the tests can also be very useful. Just be aware that in that case you're no longer targeting the real production configuration; SQLite does not support all the features that SQL Server does. This article shows a code sample for the necessary test setup to switch to SQLite - it's quite easy and straightforward.
- NDbUnit as a mechanism for preparing test data is also a good choice - as long as you don't expect frequent schema changes in your DB (in that case it becomes quite a lot of work to maintain all the related XML files...).
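The SQLite switch mentioned above might look roughly like this. This is only a sketch, assuming NHibernate's loquacious configuration API and the System.Data.SQLite provider; the class name and mapping assembly are illustrative, so adjust them to your project:

```csharp
using NHibernate;
using NHibernate.Cfg;
using NHibernate.Dialect;
using NHibernate.Driver;
using NHibernate.Tool.hbm2ddl;

public static class TestDatabase
{
    // Builds a session factory against an in-memory SQLite database
    // instead of the production SQL Server instance.
    public static ISessionFactory BuildSessionFactory(out Configuration cfg)
    {
        var configuration = new Configuration();
        configuration.DataBaseIntegration(db =>
        {
            db.Dialect<SQLiteDialect>();
            db.Driver<SQLite20Driver>();
            db.ConnectionString = "Data Source=:memory:;Version=3;New=True;";
        });
        // Illustrative: load your own hbm.xml mappings here.
        configuration.AddAssembly(typeof(TestDatabase).Assembly);

        cfg = configuration;
        return configuration.BuildSessionFactory();
    }

    // The in-memory database only lives as long as its connection, so the
    // schema has to be exported into each test session's own connection.
    public static void CreateSchema(Configuration cfg, ISession session)
    {
        // (stdout: no, execute: yes, just drop: no, against this connection)
        new SchemaExport(cfg).Execute(false, true, false, session.Connection, null);
    }
}
```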
Concerning the ASP.NET part:
- You may find a tool like Selenium helpful for UI-driven tests.
HTH!
Thomas

A bit late to the party, but this is what I've been doing for a while.
I am writing REST API's, which are to be consumed by our mobile apps. The mobile apps are also written in C#, so it makes sense for me to write an API wrapper (SDK).
When integration testing, I set up test cases that test all endpoints of the API, using the SDK. When running the tests, the API is running on my local IIS, in development mode. Every time the server is started, my dev database is wiped, recreated, and seeded with data for all tables, giving me a somewhat realistic scenario. I also don't have to worry about testing updates/deletes, because all it takes is a rebuild of the server project, and NHibernate will drop, create, and seed my database. This could be changed to happen on every request if desired.
When testing my repositories, it's important for me to know whether my queries are translatable by NHibernate, so all my repository tests use LocalDB, which is recreated for every single test case. That means every test case can set up the required seed data for its query tests to succeed.
Another thing: when seeding your database with realistic data, you are also getting your foreign-key setup tested for free. Also, the seeder uses your domain classes, so it looks good, too!
An example of a seeder:
public void Seed(ISession s)
{
    using (var tx = s.BeginTransaction())
    {
        var account1 = new Account { FirstName = "Bob", LastName = "Smith" };
        var account2 = new Account { FirstName = "John", LastName = "Doe" };
        account1.AddFriend(account2); // manipulates a friends collection
        s.Save(account1);
        tx.Commit(); // without the commit, the seed data is never written
    }
}
You should call the seeder when creating your session factory.
Important: setting this up is done with an IoC container.
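Calling the seeder at session-factory creation might be sketched like this. It assumes NHibernate's SchemaExport for the drop/create step; the DatabaseBootstrapper class and BuildAndSeed method are made-up names for illustration:

```csharp
using NHibernate;
using NHibernate.Cfg;
using NHibernate.Tool.hbm2ddl;

public class DatabaseBootstrapper
{
    // Drop/recreate the schema, build the factory, then seed.
    // Called once when the application (or test host) starts.
    public ISessionFactory BuildAndSeed(Configuration cfg)
    {
        // Drops and recreates all mapped tables.
        // (script to stdout: no, execute against the DB: yes)
        new SchemaExport(cfg).Create(false, true);

        var factory = cfg.BuildSessionFactory();
        using (var session = factory.OpenSession())
        {
            Seed(session); // the seeder shown above
        }
        return factory;
    }

    private void Seed(ISession s)
    {
        // see the seeder example above
    }
}
```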

The short answer is: don't do integration tests.
We have the following setup:
Our unit tests test as little as possible. We only test one function of the business code (not directly a method, but more a logical piece of functionality);
Write a unit test for every function of your business code;
Write a unit test for interaction between different functions of your business code;
Make sure these tests cover everything.
This will give you a set of unit tests that cover everything.
We do have integration tests, but these are written up in Word documents and are often just the original specs. A QA person runs them when the code is delivered, and most of the time it just works, because we've already tested every little piece.
On InfoQ, there is a terrific presentation which explains this way of working very well: Integration Tests Are a Scam.
One thing about testing NHibernate: we have applied the repository pattern. The effect is that our NHibernate queries become very simple, like:
public class NhRepository<T> : IRepository<T>
{
    public T Get(int id)
    {
        return GetSession().Get<T>(id);
    }
}

public interface IUserRepository : IRepository<User>
{
    IEnumerable<User> GetActiveUsers();
}

public class UserRepository : NhRepository<User>, IUserRepository
{
    public IEnumerable<User> GetActiveUsers()
    {
        return from user in GetSession().Query<User>()
               where user.IsActive
               select user;
    }
}
This in combination with the Windsor IoC container provides our data access. The thing with this setup is that:
The queries are incredibly simple (98% of them, anyway) and we don't test them thoroughly. That may sound strange, but we tend to test these more through peer review than any other mechanism;
These repositories can be easily mocked. This means that for the above repository, we have a mock that does something like this:
var repositoryMock = mocks.StrictMock<IUserRepository>();
var activeUsers = new List<User>();
for (int i = 0; i < 10; i++)
{
var user = UserMocks.CreateUser();
user.IsActive = true;
activeUsers.Add(user);
}
Expect.Call(repositoryMock.GetActiveUsers()).Return(activeUsers);
The actual code is a lot more concise, but you get the idea.


Fail the unit tests when the database schema, and entity are changed, but unit tests are not changed

Following is the exact scenario in my application:
There are several C# methods in the codebase which are using Entity
framework to talk with SQL database.
Unit tests are written against all methods and cover all possible permutations and combinations based on method signatures, input requirements, and return values.
The unit tests are working fine and fail when they should (i.e., in cases where some validation or an expected return value is changed but the unit tests were not updated to match).
There are cases where a developer performs a change in the SQL schema and updates the entity in the C# code. In this case the unit tests keep passing, which is absolutely fine, because it's just the underlying logic that changed, not the inputs, validations, or return values.
However, I want some unit tests to fail when the database schema and entities are changed but the unit tests are not. That is, I want developers to fix the unit tests when they change the database schema and entities.
Can anyone please suggest how to achieve the same?
Any help on this would be much appreciated.
Thanks
I moved up this last paragraph because it appears to be the X of your XY problem. I do suggest reading all the other paragraphs, because you seem to be misunderstanding what a unit test is and what purpose it serves.
Unit tests do not test multiple codebases!
I just noticed this in the comment you wrote:
So if a developer of one project changes the database schema, we want to alert them that this schema change should be reflected in other project as well.
Unit tests should not be used as a way to synchronize two otherwise completely unrelated codebases.
This is a simple application of DRY vs WET: if these applications share the same database schema, the database schema (and related EF entities) should be in a single source which both applications depend on.
Don't develop the same thing twice because you end up with constantly having to sync one when the other changes (which is what you're dealing with now).
For example, you can host the "shared data" project on a NuGet server and have both applications reference that self-hosted NuGet package. When configured correctly, whenever an application is built it will fetch the latest version of the "shared data" project, thus ensuring that both applications always work with the latest version of the database schema.
A second way to do this (if you don't want to create a shared dependency) is to have your version control system watch certain files (e.g. the entity class folder) and alert you if any change is made in those files. When you receive an alert, you'll know to check whether the change impacts the other application's codebase.
And if I understand you correctly, you're really just trying to get an alert, right?
There are other ways to solve this, and the decision depends on your infrastructure and what technologies your team uses. But it should definitely not be done by manually scripting/faking unit test failures.
Don't forget the "unit" in "unit test"
I want some unit tests to be failed when the database schema and entity are changed
Unit tests test one thing (hence "unit"). It seems like you're writing tests that test two things: the business logic and the database. That's an integration test. Just as a rule of thumb:
Unit tests test one component.
Integration tests test the interaction between multiple components.
Don't test external dependencies
If you were to write a unit test for EF specifically (which you shouldn't, but more on that later), you would not involve an actual database in the test. At best, you'd assert the SQL that is generated by EF, without running that query against the database.
Repeating my point, unit tests test one component. Entity Framework and the database server are two different things.
When using external components (such as a database server), you are again delving into the realm of integration tests.
Only test your own code
As mentioned before, you shouldn't be testing Entity Framework. Unit testing a library is basically doing the library developer's work for them. You shouldn't occupy yourself with how a library works internally. If anything, you use a library specifically because you don't want to know how it works internally.
Consider this test:
public void TestForAddition()
{
    var expectedValue = 2;
    var testValue = 1 + 1;
    Assert.AreEqual(expectedValue, testValue);
}
What you're doing here is testing the + operator, which is provided by C# and the .NET framework. That is not part of your code, and therefore you shouldn't be testing it. The idea is the same as the EF argument: it's not your code, and not up to you to test. You have to trust the dependencies you use.
However, if you have overloaded the + operator, you should test that (because it's your code)
public static Person operator+ (Person a, Person b)
{
return new Person() { Name = a.Name + " " + b.Name };
}
public void TestForPersonAddition()
{
    var expectedValue = "Billie Jean";
    var billie = new Person() { Name = "Billie" };
    var jean = new Person() { Name = "Jean" };
    Assert.AreEqual(expectedValue, (billie + jean).Name);
}
What this means for your EF-centric example is that you should not be unit testing Entity Framework. You should only have unit tests for your code, not that of EF.
You can, however, write integration tests for this. And this can achieve what you want: you run the integration tests against an existing database. If the integration tests pass, you'll know that the new codebase is compatible with the existing database. If they don't, then you know an issue has occurred that warrants a developer's attention.
Unit tests test behavior!
However, I want some unit tests to be failed when the database schema and entity are changed, but unit tests are not changed. That means, I want developers to fix the unit tests when they change database schema and entity.
So you want your developers to fix unit tests that were never broken in the first place?
Your intention doesn't make sense to me. You've changed some code and the unit tests are not failing. That is a good thing. But you want to make them fail, so your developers are then forced to "fix" the unit tests (that weren't broken in the first place).
If the unit tests pass and yet you still claim that there is something wrong in the behavior of the application, then your unit tests are not doing the job they're supposed to do.
You're misusing unit tests. The issue isn't trying to figure out how to force unit tests to fail; the issue is what you're expecting a unit test's outcome to tell you.
Unit tests should not be used to test whether the internal system has changed. Unit tests test whether the application still behaves the same way (even if some of the code has changed).
If I rewrite your codebase from scratch, then run the same (untouched) unit tests on my new codebase, and all unit tests pass, that means my code behaves the same way as yours and our codebases are functional equivalents of each other (at least with regard to the behavior you've written tests for).

Unit Testing Entity Framework Repository

I am working with TDD and all is going well. When I get to my actual Repository though I don't know how to test this.
Consider the code below - this is what I know I want to write, but how do I tackle this in a test-first way without doing integration testing?
public class UserDb : IUserDb
{
public void Add(User user)
{
using (var context = new EfContext())
{
context.Users.Add(user);
context.SaveChanges();
}
}
}
The link below by forsvarir is what I want to put as an answer. How can I do that?
http://romiller.com/2012/02/14/testing-with-a-fake-dbcontext/
The usual answer to these kinds of questions is:
Isolate the hard to test dependencies (the EF context)
Provide a fake implementation in your tests to verify the correct behaviour of the SUT
This all makes sense when you have interesting logic to test in your system under test. For your specific case, the repository looks like a very thin abstraction between your clean domain and EF-aware logic. Good, keep 'em that way. This also means that it's not really doing much work. I suggest that you don't bother writing isolated unit tests (wrapping EF DbContext seems like extra work that might not pull its weight).
Note that I'm not saying you shouldn't test this code: I often tend to test these thin repositories using a real database, i.e. through integrated tests. For all the code that uses these repositories however, I'd stick to isolated unit testing by providing fake repositories to the system under test. That way I have coverage on my repository code and test that EF is actually talking to my database in the correct way and all other tests that indirectly use these repositories are nice, isolated and lightning-fast.
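The split described above, hand-rolling a fake repository for the code that consumes it, can be sketched without any mocking library. All of the type names below are invented for illustration:

```csharp
using System.Collections.Generic;
using System.Linq;

public class User
{
    public string Name;
    public bool IsActive;
}

public interface IUserRepository
{
    IEnumerable<User> GetActiveUsers();
}

// In-memory fake used by the isolated unit tests of code
// that depends on the repository.
public class FakeUserRepository : IUserRepository
{
    private readonly List<User> _users;

    public FakeUserRepository(IEnumerable<User> users)
    {
        _users = users.ToList();
    }

    public IEnumerable<User> GetActiveUsers()
    {
        return _users.Where(u => u.IsActive);
    }
}

// System under test: gets its repository injected, so it never
// touches NHibernate or a database in a unit test.
public class ActiveUserReport
{
    private readonly IUserRepository _repository;

    public ActiveUserReport(IUserRepository repository)
    {
        _repository = repository;
    }

    public int CountActive()
    {
        return _repository.GetActiveUsers().Count();
    }
}
```

The real `UserRepository` backed by NHibernate is then covered by the integrated tests against a real database, while everything above it only ever sees the fake.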
What are you hoping to achieve by testing the 3rd-party tool? You could mock out the context and then assert that an attempt was made to write to the database:
var fakeContext = A.Fake<IDbContext>();
A.CallTo(() => fakeContext.SaveChanges()).MustHaveHappened();
The above example uses FakeItEasy mocking library.

Writing Integration tests to test database, web service calls

We are just starting to write integration tests to test the database, data access layer, web service calls, etc.
Currently I have some ideas for writing integration tests, like:
1) Always recreate tables in the initialize function.
2) Always clear the data inside a function if you are saving new data inside the same function.
But I would like to know some more good practices.
As with all testing, it is imperative to start from a known state, and upon test completion, clear down to a clean state.
Also, test code often gets overlooked as "not real code" and hence not maintained properly... it is more important than code. At least as much design should go into the architecture of your tests. Plan reasonable levels of abstraction; if you are testing a web app, consider having tiers like so: an abstraction of your browser interaction, an abstraction of your components on pages, an abstraction of pages, and your tests. Tests interact with pages and components, pages interact with components, components interact with the browser interaction tier, and the browser interaction tier interacts with your (possibly third-party) browser automation library.
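Those tiers might be sketched like this, with each layer only talking to the one below it. Every name here is illustrative; the bottom tier would wrap your actual browser automation library (e.g. Selenium):

```csharp
// Tier 1: browser interaction - the only tier that knows about
// the (possibly third-party) automation library.
public interface IBrowser
{
    void Click(string selector);
    void Type(string selector, string text);
    string ReadText(string selector);
}

// Tier 2: a reusable component that appears on several pages.
public class LoginForm
{
    private readonly IBrowser _browser;

    public LoginForm(IBrowser browser)
    {
        _browser = browser;
    }

    public void Submit(string user, string password)
    {
        _browser.Type("#username", user);
        _browser.Type("#password", password);
        _browser.Click("#login");
    }
}

// Tier 3: a page, composed of components.
public class HomePage
{
    private readonly IBrowser _browser;

    public HomePage(IBrowser browser)
    {
        _browser = browser;
        Login = new LoginForm(browser);
    }

    public LoginForm Login { get; private set; }

    public string WelcomeMessage
    {
        get { return _browser.ReadText("#welcome"); }
    }
}

// Tier 4: tests talk only to pages and components, e.g.:
// new HomePage(browser).Login.Submit("bob", "secret");
```

Because the tests never touch the automation library directly, swapping the browser driver (or faking it entirely) only affects the bottom tier.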
If your test code is not maintained properly or well thought out, it becomes more of a hindrance than an aid to writing new code.
If your team is new to testing, there are many coding katas out there that aim to teach the importance of good tests (and out of this comes good code). They generally focus on the unit testing level, but many of the principles are the same.
In general, I would suggest you look into mocking your database access layer and web service call classes in order to make it more testable. There are lots of books on this subject, like Osherove's The Art of Unit Testing.
That having been said, integration tests should be repeatable. Thus, I would choose option 1: write a script that can recreate the database from scratch with test data. Option 2 is more difficult, because it can be hard to be sure that the cleaning function does not leave unwanted data residue.
When unit testing the DAL, I do it like this:
[TestFixtureSetUp]
public void TestFixtureSetUp()
{
//this grabs the data from the database using an XSD file to map the schema
//and saves it as xml (backup.xml)
DatabaseFixtureSetUp();
}
[SetUp]
public void SetUp()
{
//this inserts test data into the database from xml (testdata.xml)
//it clears the tables first so you have fresh data before every test.
DatabaseSetUp();
}
[TestFixtureTearDown]
public void TestFixtureTearDown()
{
//this clears the table and inserts the backup data into the database
//to return it to the state it was before running the tests.
DatabaseFixtureTearDown();
}

Unit Testing Database Methods

I am currently working on a C# project which makes use of an SQLite database. For the project I am required to do unit testing, but I was told that unit testing shouldn't involve external files, like database files; instead, the test should emulate the database.
If I have a function that tests whether something exists in a database, how could this sort of method be tested with unit testing?
In general, it makes life easier if external files are avoided and everything is done in code. There is no rule that says "shouldn't", and sometimes it just makes more sense to have the external dependency - but only after you have considered how not to have it and understood the tradeoffs.
Secondly, what Bryan said is a good option and the one I've used before.
In an application that uses a database, there will be at least one component whose responsibility is to communicate with that database. The unit test for that component could involve a mocked database, but it is perfectly valid (and often desirable) to test the component using a real database. After all, the component is supposed to encapsulate and broker communication with that database -- the unit test should test that. There are numerous strategies to perform such unit tests conveniently -- see the list of related SO questions in the sidebar for examples.
The general prescription to avoid accessing databases in unit tests applies to non-database components. Since non-database components typically outnumber database-related components by a wide margin, the vast majority of unit tests should not involve a database. Indeed, if such non-database components required a database to be tested effectively, there is likely a design problem present -- probably improper separation of concerns.
Thus, the principle that unit tests should avoid databases is generally true, but it is not an absolute rule. It is just a (strong) guideline that aids in structuring complex systems. Following the rule too rigidly makes it very difficult to adequately test "boundary" components that encapsulate external systems -- places in which bugs find it very easy to hide! So, the question that one should really be asking oneself when a unit test demands a database is this: is the component under test legitimately accessing the database directly or should it instead collaborate with another that has that responsibility?
This same reasoning applies to the use of external files and other resources in unit tests as well.
With SQLite, you could use an in-memory database. You can stage your database by inserting data and then run your tests against it.
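A minimal sketch of that staging approach, using the System.Data.SQLite ADO.NET provider (the schema and seed data here are invented for illustration):

```csharp
using System.Data.SQLite;

public static class InMemoryDbDemo
{
    public static int CountUsers()
    {
        // "Data Source=:memory:" creates a database that lives only
        // as long as this connection stays open.
        using (var connection = new SQLiteConnection(
            "Data Source=:memory:;Version=3;New=True;"))
        {
            connection.Open();

            // Stage the database: create the schema and insert seed data.
            using (var cmd = connection.CreateCommand())
            {
                cmd.CommandText =
                    "CREATE TABLE Users (Id INTEGER PRIMARY KEY, Name TEXT);" +
                    "INSERT INTO Users (Name) VALUES ('Bob'), ('Alice');";
                cmd.ExecuteNonQuery();
            }

            // Run the code under test against the staged database.
            using (var cmd = connection.CreateCommand())
            {
                cmd.CommandText = "SELECT COUNT(*) FROM Users";
                return System.Convert.ToInt32(cmd.ExecuteScalar());
            }
        }
    }
}
```

When the connection is disposed at the end of the test, the whole database disappears with it, so no cleanup step is needed.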
Once databases get involved it always blurs the line between unit testing and integration testing. Having said that, it is always a nice and very useful test to be able to put something in a database (Setup), Do your test and remove it at the end (Cleanup). This lets you test end to end one part of your application.
Personally I like to do this in an attribute-driven fashion, by specifying the SQL scripts to run for each test as attributes, like so:
[PreSqlExecute("SetupTestUserDetails.sql")]
[PostSqlExecute("CleanupTestUserDetails.sql")]
public void UpdateUserNameTest()
{
    // ...
}
The connection strings come from the app.config as usual, and it can even be a symbolic link to the app.config in your main project.
Unfortunately this isn't a standard feature of the MS test runner that ships with Visual Studio. If you are using PostSharp as your AOP framework, this is easy to do. If not, you can still get the same functionality with the standard MS test runner by using a feature of .NET called "Context Bound Objects". This lets you inject custom code into an object-creation chain to do AOP-like stuff, as long as your objects inherit from ContextBoundObject.
I did a blog post with more details and a small, complete code sample here.
http://www.chaitanyaonline.net/2011/09/25/improving-integration-tests-in-net-by-using-attributes-to-execute-sql-scripts/
I think it is a really bad idea to have unit tests that depend on database information.
I also think it is a bad idea to use SQLite for unit tests.
You need to test your objects' protocol, so if you need something in your tests you should create it somewhere in the tests (usually in setUp).
Since it is difficult to remove persistence entirely, the popular way to do it is using SQLite, but always create what you need in the unit tests themselves.
Check this link, Unit Tests And Databases; I think it will be more helpful.
It's best to use a mocking framework to mimic a database; for C# there is, for example, the Entity Framework tooling around fake contexts. Even the use of SQLite is an outside dependency for your code.

Unit testing rules

My question may appear really stupid to some of you, but I have to ask. Sorry.
I don't really understand the principles of unit testing. How can you test your business classes or data access layer without modifying your database?
Let me explain: I have a functionality that updates a field in a database. Nothing so amazing.
The business layer class is instantiated, and the method BLL.Update() performs some checks and finally instantiates a DAL class which launches a stored procedure in the database with the correct parameters.
It works, but my question is:
To write unit tests that test the DAL class, I have to impact the database in the tests! To test, for example, that the value 5 is correctly passed to the database, I have to do exactly that, and the database's field will be 5 after the test!
I know that normally the system should not be impacted by tests, so I don't understand how you can do tests without executing the methods.
Thanks for any answers, and excuse my poor English.
I will divide your question into several sub questions because it is hard to answer them together.
Unit testing x Integration testing
When you write a unit test you are testing a single unit. That means you are testing a single execution path in the tested method. You need to avoid testing its dependencies, like the mentioned database. You usually write a simple unit test for each execution path so that you have good code coverage from your tests.
When you write an integration test you are testing all layers to see whether the integration and configuration work. You usually don't write an integration test for each execution path, because there are too many combinations across all layers.
Testing business classes - unit testing
You need to test your business classes without any dependency on the DAL and DB. To do that you have to design your BL class so that those dependencies are injected from outside. First you define an abstract class or interface for the DAL and pass that DAL interface as a parameter to the constructor (another way is to expose a settable property on the BL class). When you test your BL class, you use another implementation of the DAL interface which is not dependent on the DB. There are well-known test patterns - Mock, Stub, and Fake - which define how to create and use those dummy implementations. Mocking is also supported by many testing frameworks.
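A sketch of that injection, with a hand-written test double standing in for the real DAL (all type names here are invented for illustration):

```csharp
// The abstraction the business class depends on.
public interface IUserDal
{
    void UpdateField(int userId, int value);
}

// Business class: receives its DAL from outside instead of
// constructing it, so tests can substitute a stub.
public class UserBll
{
    private readonly IUserDal _dal;

    public UserBll(IUserDal dal)
    {
        _dal = dal;
    }

    public bool Update(int userId, int value)
    {
        if (value < 0)
        {
            return false; // an example business-level check
        }
        _dal.UpdateField(userId, value);
        return true;
    }
}

// Test double: records the call instead of touching a database,
// so the test can verify that the value 5 was passed through
// without any database field actually becoming 5.
public class SpyUserDal : IUserDal
{
    public int? LastValue;

    public void UpdateField(int userId, int value)
    {
        LastValue = value;
    }
}
```

A test then does `new UserBll(new SpyUserDal()).Update(1, 5)` and asserts on the spy's `LastValue`; the real database is never involved.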
Testing data access layer - integration testing
You need to test your DAL against a real DB. You prepare a testing DB with a known set of data and write your tests to work with that data. Each test runs in its own transaction, which is rolled back at the end so it does not modify the initial data set.
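That per-test transaction can be done with TransactionScope: open a scope in SetUp and dispose it without completing it in TearDown, so every change the test made is rolled back. A sketch assuming NUnit and a connection that enlists in ambient transactions (the test body is illustrative):

```csharp
using System.Transactions;
using NUnit.Framework;

[TestFixture]
public class UserDalIntegrationTests
{
    private TransactionScope _scope;

    [SetUp]
    public void SetUp()
    {
        // Every database operation in the test enlists in this
        // ambient transaction.
        _scope = new TransactionScope();
    }

    [TearDown]
    public void TearDown()
    {
        // Disposing without calling Complete() rolls everything back,
        // leaving the initial data set untouched for the next test.
        _scope.Dispose();
    }

    [Test]
    public void UpdatingAField_WritesTheExpectedValue()
    {
        // Illustrative: run the DAL against the real test database here;
        // the write is rolled back in TearDown.
    }
}
```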
Best regards, Ladislav
For the scenario you describe in terms of db interaction, mocking is useful. If you have a chance, take a look at Rhino Mocks
You use Inversion of Control together with a mocking framework, e.g. Rhino Mocks as someone already mentioned
If you are not resorting to mocking and use an actual DB in testing, then in layman's terms it is integration testing, and it's not a unit test anymore. I worked on a project where a dedicated SQL .mdf file was kept in source control; it was attached to the database server using NUnit in the [Setup] part of the [SetupFixture] and similarly detached in [TearDown]. This was done every time the NUnit tests were run and could be very time-consuming depending on the SQL code you have, and the size of the data can make it worse.
Now the catch is the maintenance overhead: you may change the DB schema during your sprint cycle, and at release the DB change script has to be run on all databases used in your development and testing, including the one used for integration testing as mentioned above. Not just that - new test data (as someone mentioned above) has to be populated for newly created tables/columns, and likewise the existing data may need to be cleansed as well, owing to requirements changes or bug fixes.
This seems to be a task in itself, and someone competent on the team can take ownership of it; or, if time permits, you can integrate the execution of change scripts into Continuous Integration if you have already implemented it. Still, adding and cleansing the test data needs to be taken care of manually.
