TDD on a configuration tool touching a database - C#

I am working on writing a tool which
- sets up a connection to SQL and runs a series of stored procedures
- hits the file system to verify and also delete files
- talks to other subsystems through exposed APIs
I am new to the concept of TDD but have been doing a lot of reading on it. I wanted to apply TDD to this development, but I am stuck. There are a lot of interactions with external systems which need to be mocked/stubbed or faked. What I am finding difficult is the proper approach to take in doing this with TDD. Here is a sample of what I would like to accomplish.
public class MyConfigurator
{
    public static void Start()
    {
        CheckSystemIsLicenced();          // will throw if it's not licenced; calls a library owned by the company
        CleanUpFiles();                   // clean up several directories
        CheckConnectionToSql();           // ensure a connection to SQL can be made
        ConfigureSystemToolsOnDatabase(); // runs a set of stored procedures; a range of checks is also implemented and will throw if something goes wrong
    }
}
After this I have another class which cleans up the system if things have gone wrong. For the purpose of this question it's not that relevant, but it essentially just clears certain tables and fixes up the database so that the tool can run again from scratch to do its configuration tasks.
It almost appears here that when using TDD the only tests I end up with are things like (assuming I am using FakeItEasy):
A.CallTo(() => fakeLicenceChecker.CheckSystemIsLicenced("lickey")).MustHaveHappened();
It just ends up being a whole lot of tests which are nothing but "MustHaveHappened" assertions. Am I doing something wrong? Is there a different way to start this project using TDD? Or is this a particular scenario where TDD is not really recommended? Any guidance would be greatly appreciated.

In your example, if the arrangement of the unit test shows lickey as the input, then it is reasonable to assert that the endpoint has been called with the proper value. In more complex scenarios, the input-to-assert flow covers more subsystems so that the test itself doesn't seem as trivial. You might set up an ID value as input and test that down the line you are outputting a value for an object that is deterministically related to the input ID.
One aspect of TDD is that the code changes while the tests do not - except for functionally equivalent refactoring. So your first tests would naturally arrange and assert data at the outermost endpoints. You would start with a test that writes a real file to the filesystem, calls your code, and then checks to see that the file is deleted as expected. Of course, the file system is a messy workspace for portable testing, so you might decide early on to abstract the file system by one step. Ditto with the database by using EF and mocking your DbContext or by using a mocked repository pattern. These abstractions can be pre-TDD application architecture decisions.
Something I do frequently is to use utility code that starts with an IFileSystem interface that declares methods mimicking a lot of what is available in System.IO.File. In production I use an implementation of IFileSystem that just passes through to the File.XXX() methods. Then you can mock and verify the interface instead of trying to set up and clean up real files.
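For illustration, here is a minimal sketch of that kind of wrapper; the interface, the pass-through implementation, and the FileCleaner used in the test comments are placeholder names invented for the example, not an existing API:

public interface IFileSystem
{
    bool FileExists(string path);
    void DeleteFile(string path);
}

// Production implementation: a thin pass-through to System.IO.File.
public class PhysicalFileSystem : IFileSystem
{
    public bool FileExists(string path) => System.IO.File.Exists(path);
    public void DeleteFile(string path) => System.IO.File.Delete(path);
}

// In a unit test the interface is faked (here with FakeItEasy), so the disk is never touched:
// var fileSystem = A.Fake<IFileSystem>();
// A.CallTo(() => fileSystem.FileExists(@"c:\temp\old.log")).Returns(true);
// new FileCleaner(fileSystem).CleanUpFiles();
// A.CallTo(() => fileSystem.DeleteFile(@"c:\temp\old.log")).MustHaveHappened();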

In this particular method the only thing you can test is that the methods were called, so it's OK to do what you are doing and assert against the mocks. It's up to you to determine whether this particular test is valuable or not. TDD assumes tests for everything, but I find it more practical to focus your testing on scenarios where it adds value. It's hard for others to make that determination, but you should trust yourself to make the call in each specific scenario.

I think integration tests would add the most bang for the buck. Use the real DB and file system.
If you have complex logic in the tool, then you may want to restructure the tool design to abstract out the DB and file system and write unit tests with mocks. From the code snippet you posted, it looks like a simple script to me.

Related

Unit Test principle

Hi guys, I'm starting to learn unit testing and TDD.
I have some experience with C#, so I started with the following link:
https://msdn.microsoft.com/en-us/library/dd264975.aspx
I followed the sample, and so far so good.
But when I started writing my first method by myself,
I had some questions.
I wrote a method named "Log"
that creates a .txt file with a timestamp.
For example, after I call Log("some thing error");
it creates a file named 201510210030.txt
whose content is "[2015-10-21 00:30:47] some thing error".
How do I test it?
Should I read the log file?
But the file name changes every time,
and I might change the folder location in another situation.
Or maybe I will switch the log to a DB or some logger server (with IoC).
How do I test it then? Connect to the DB or logger server? Read the file?
There are too many ways the method could fail (DB shut down, file permissions failing, and so on).
It doesn't feel very "unit" any more, so... how do I test a method like this?
Or is there some unit testing concept I don't understand?
Thanks a lot.
With unit testing (as with all things), you need to be pragmatic. It's always great to aim for the 100% code coverage benchmark, but that's often unrealistic. The moment you start introducing actual databases or file systems into your tests, you've left the realm of unit tests and entered integration testing. There's absolutely nothing wrong with integration testing, but it's important not to confuse the two.
What I would recommend is to make sure you have all the logging logic in its own separate class that is provided to the class you're testing via dependency injection (you mentioned IoC, so I assume you're familiar). Once you do that, you can pass a mock "Logger" into your class that doesn't touch the file system at all. This will ensure that the class you're testing works with anything that implements that "Logger" interface.
If you're looking to actually unit test the logger itself, then I'm afraid that's not really possible. The logger is very tightly coupled to the file system (or database), and so you can't really unit test it. You can always extract all the persistence logic to another class and mock that out, but you've only pushed the problem back. There's always a wall you'll come up against between your application and infrastructure, where the two are tightly coupled. The important thing is to keep the class that handles that interaction relatively simple, and make sure it's tested with integration tests.
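To make that concrete, here is a rough sketch of the separation being described; the ILogger, FileLogger and OrderProcessor names are placeholders invented for the example, not types from the question:

using System;

public interface ILogger
{
    void Log(string message);
}

// Production implementation: tightly coupled to the file system,
// kept thin and covered by integration tests rather than unit tests.
public class FileLogger : ILogger
{
    public void Log(string message)
    {
        var fileName = DateTime.Now.ToString("yyyyMMddHHmm") + ".txt";
        var line = $"[{DateTime.Now:yyyy-MM-dd HH:mm:ss}] {message}";
        System.IO.File.AppendAllText(fileName, line + Environment.NewLine);
    }
}

// The class under test depends only on ILogger, so a unit test can
// hand it a mock logger and never touch the disk.
public class OrderProcessor
{
    private readonly ILogger _logger;

    public OrderProcessor(ILogger logger) { _logger = logger; }

    public void Process()
    {
        // ... real work ...
        _logger.Log("some thing error");
    }
}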

Unit testing C# database dependent code

I would like to create unit tests for data dependent code. For example:
A user class that has the usual create, update & delete.
If I wanted to create a test for the "user already exists" scenario, or an update or delete test, I would need to know that a specific user already exists in my database.
In such cases, what would be the best approach to have stand alone tests for these operations that can run in any order?
When you have dependencies like this, think about whether you want to be integration testing as opposed to unit testing. If you do want to do unit tests, take a look at using mock data.
Integration Testing:
Tests how your code integrates with different parts of a system. This can be making sure your code connects to a database properly or has created a file on the filesystem. These tests are usually very straightforward and do not have the same constraint of "being able to run in any order." However, they require a specific configuration in order to pass, which means they do not travel well from developer to developer.
Unit Testing: Tests your code's ability to perform a function. For example, "Does my function AddTwoNumbers(int one, int two) actually add two numbers?" Unit tests are there to ensure any changes in code do not affect the expected results.
When getting into areas like "Does my code call the database and enter the result correctly?" you need to consider that unit tests are not meant to interact with the system. This is where we get into using "mock data." Mock classes and mock data take the place of the actual system just to ensure that your code "called out in the way we were expecting." The difficult part is that, while it can be done, most of the .NET Framework classes do not provide the interfaces needed to do it easily.
See the MSDN page on testing for more info. Also, consider the MSDN article on mock data.
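For example, one way to cover the "user already exists" scenario without touching a real database is to put the data access behind a hand-written interface and fake it. The names below are purely illustrative, and FakeItEasy is used only because it appears earlier on this page; any mocking framework would do:

using System;
using FakeItEasy;

public interface IUserRepository
{
    bool Exists(string userName);
    void Create(string userName);
}

public class UserService
{
    private readonly IUserRepository _repository;

    public UserService(IUserRepository repository) { _repository = repository; }

    public void Create(string userName)
    {
        if (_repository.Exists(userName))
            throw new InvalidOperationException("User already exists");
        _repository.Create(userName);
    }
}

// Unit test sketch: no database is needed, and the test can run in any order.
// var repository = A.Fake<IUserRepository>();
// A.CallTo(() => repository.Exists("bob")).Returns(true);
// var service = new UserService(repository);
// // service.Create("bob") should now throw InvalidOperationException
// // without a single row ever existing anywhere.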

TDD: .NET following TDD principles, Mock / Not to Mock?

I am trying to follow TDD and I have come across a small issue. I wrote a test to insert a new user into a database. The insert is called on the MyService class, so I went ahead and created my test. It failed, and I started to implement my CreateUser method on my MyService class.
The problem I am coming across is that MyService will call a repository (another class) to do the database insertion.
So I figured I would use a mocking framework to mock out this Repository class, but is this the correct way to go?
This would mean I would have to change my test to actually create a mock for my User Repository. But is this recommended? I wrote my test initially and made it fail and now I realize I need a repository and need to mock it out, so I am having to change my test to cater for the mocked object. Smells a bit?
I would love some feedback here.
If this is the way to go then when would I create the actual User Repository? Would this need its own test?
Or should I just forget about mocking anything? But then this would be classed as an integration test rather than a unit test, as I would be testing the MyService and User Repository together as one unit.
I'm a little lost; I want to start out the correct way.
So I figured I would use a mocking framework to mock out this
Repository class, but is this the correct way to go?
Yes, this is a completely correct way to go, because you should test your classes in isolation, i.e. by mocking all dependencies. Otherwise you can't tell whether your class is failing or one of its dependencies is.
I wrote my test initially and made it fail and now I realize I need a
repository and need to mock it out, so I am having to change my test
to cater for the mocked object. Smells a bit?
Extracting classes, reorganizing methods, etc. is refactoring. And tests are there to help you with refactoring, to remove the fear of change. It's completely normal to change your tests if the implementation changes. Surely you didn't think you could create perfect code on your first try and never change it again?
If this is the way to go then when would I create the actual User
Repository? Would this need its own test?
You will create a real repository in your application. And you can write tests for this repository (i.e. check if it correctly calls the underlying data access provider, which should be mocked). But such tests usually are very time-consuming and brittle. So, it's better to write some acceptance tests, which exercise the whole application with real repositories.
Or should I just forget about mocking anything?
Just the opposite - you should use mocks to test classes in isolation. If mocking requires lots of work (data access, ui) then don't mock such resources and use real objects in integration or acceptance tests.
You would most certainly mock out the dependency to the database, and then assert on your service calling the expected method on your mock. I commend you for trying to follow best practices, and encourage you to stay on this path.
As you have now realized, as you go along you will start adding new dependencies to the classes you write.
I would strongly advise you to satisfy these dependencies externally, as in create an interface IUserRepository, so you can mock it out, and pass an IUserRepository into the constructor of your service.
You would then store this in an instance variable and call the methods (i.e. _userRepository.StoreUser(user)) you need on it.
The advantage of that is that it is very easy to satisfy these dependencies from your test classes, and that you can treat instantiation of your objects and lifecycle management as a separate concern.
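A rough sketch of that shape, using FakeItEasy (mentioned elsewhere on this page; any mocking framework would do) and a placeholder User type:

using FakeItEasy;

public class User
{
    public string Name { get; }
    public User(string name) { Name = name; }
}

public interface IUserRepository
{
    void StoreUser(User user);
}

public class MyService
{
    private readonly IUserRepository _userRepository;

    // The dependency is satisfied externally (constructor injection).
    public MyService(IUserRepository userRepository)
    {
        _userRepository = userRepository;
    }

    public void CreateUser(User user)
    {
        // validation etc. would live here
        _userRepository.StoreUser(user);
    }
}

// In the unit test the repository is a mock, so the database is never touched:
// var userRepository = A.Fake<IUserRepository>();
// var service = new MyService(userRepository);
// service.CreateUser(new User("alice"));
// A.CallTo(() => userRepository.StoreUser(A<User>.Ignored)).MustHaveHappened();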
tl;dr: create a mock!
I have two sets of test libraries. One is for unit tests, where I mock stuff; I only test units there. So if I had an AddUser method in the service, I would create all the mocks I need to be able to test the code in that specific method.
This gives me the possibility to test some code paths that I would not be able to verify otherwise.
The other test library is for integration tests, or functional tests, or whatever you want to call them. This one makes sure that a specific use case, e.g. creating a tag from the web page, does what I expect it to do. For this I use the SQL Server that ships with Visual Studio 2012, and after every test I delete the database and start over.
In my case I would say that the integration tests are much more important than the unit tests. This is because my application does not have much logic; instead, it displays data from the database in different ways.
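A simplified sketch of that per-test reset using MSTest and the VS2012 LocalDB instance; it clears the relevant tables rather than deleting the whole database, and the connection string, table names, and test name are all placeholders:

using Microsoft.VisualStudio.TestTools.UnitTesting;
using System.Data.SqlClient;

[TestClass]
public class TagIntegrationTests
{
    // Placeholder connection string for the LocalDB instance that ships with Visual Studio 2012.
    private const string ConnectionString =
        @"Server=(localdb)\v11.0;Database=MyAppTests;Integrated Security=true";

    [TestInitialize]
    public void SetUp()
    {
        // Stage exactly the data this test run needs.
        Execute("DELETE FROM Tags");
        Execute("INSERT INTO Tags (Name) VALUES ('existing-tag')");
    }

    [TestCleanup]
    public void TearDown()
    {
        // Start over so the next test sees a clean database.
        Execute("DELETE FROM Tags");
    }

    [TestMethod]
    public void CreatingATag_StoresItInTheDatabase()
    {
        // ... exercise the real application code against the real database here ...
    }

    private static void Execute(string sql)
    {
        using (var connection = new SqlConnection(ConnectionString))
        using (var command = new SqlCommand(sql, connection))
        {
            connection.Open();
            command.ExecuteNonQuery();
        }
    }
}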
Your initial test was incomplete, that's all. The final test is always going to have to deal with the fact the new user gets persisted.
TDD does not prescribe the kind of test you should create. You have to choose beforehand if it's going to be a unit test or some kind of integration test. If it's a unit test, then the use of mocking is practically inevitable (except when the tested unit has no dependencies to isolate from). If it's an integration test, then actual database access (in this case) would have to be taken into account in the test.
Either kind of test is correct. Common wisdom is that a larger unit test suite is created, testing units in isolation, while a separate but smaller test suite exercises whole use case scenarios.
Summary
I am a huge fan of Eiffel, but while the tools of Eiffel like Design-by-Contract can help significantly with the Mock-or-not-to-Mock question, the answer to the question has a huge management-decision component to it.
Detail
So—this is me thinking out loud as I ponder a common question. When contemplating TDD, there is a lot of twisting and turning on the matter of mock objects.
To Mock or Not to Mock
Is that the only binary question? Is it not more nuanced than that? Can mocks be approached with a strategy?
If your routine call on an object under test needs only base-types (i.e. STRING, BOOLEAN, REAL, INTEGER, etcetera) then you don't need a mock object anyhow. So, don't be worried.
If your routine call on an object under test either has arguments or attributes that require mock objects to be created before testing can begin then—that is where the trouble begins, right?
What sources do we have for constructing mocks?
- Simple creation with:
  - make or default create
  - make with hand-coded base-type arguments
- Complex creation with:
  - make with database-supplied arguments
  - make with other mock objects (start this process again)
- Object factories:
  - Production-code-based factories
  - Test-code-based factories
- Data-repo-based data (vs hand-coded)
- Gleaned:
  - Objects from prior bugs/errors
THE CHALLENGE:
Keeping the non-production test-code bloat to a bare minimum. I think this means asking hard but relevant questions before willy-nilly code writing begins.
Our optimal goal is:
- No mocks needed. Strive for this above all.
- Simple mock creation with no arguments.
- Simple mock creation with base-type arguments.
- Simple mock creation with DB-repo-sourced base-type arguments.
- Complex mock creation using production-code object factories.
- Complex mock creation using test-code object factories.
- Objects with captured states from prior bugs/errors.
Each of these presents a challenge. As stated—one of the primary goals is to always keep the test code as small as possible and reuse production code as much as possible.
Moreover—perhaps there is a good rule of thumb: Do not write a test when you can write a contract. You might be able to side-step the need to write a mock if you just write good solid contract coverage!
EXAMPLE:
At the following link you will find both an object class and a related test class:
Class: https://github.com/ljr1981/stack_overflow_answers/blob/main/src/so_17302338/so_17302338.e
Test: https://github.com/ljr1981/stack_overflow_answers/blob/main/testing/so_17302338/so_17302338_test_set.e
If you start by looking at the test code, the first thing to note is how simple the tests are. All I am really doing is spinning up an instance of the class as an object. There are no "test assertions" because all of the "testing" is handled by DbC contracts in the class code. Pay special attention to the class invariant. The class invariant is either impossible with common TDD facilities, or nearly impossible. This includes the "implies" Boolean keyword as well.
Now—look at the class code. Notice first that Eiffel has the capacity to define multiple creation procedures (i.e. "init") without the need for a traffic-cop switch or pattern-recognition on creation arguments. The names of the creation procedures tell the appropriate story of what each creation procedure does.
Each creation procedure also contains its own preconditions and post-conditions to help cement code-correctness without resorting to "writing-the-bloody-test-first" nonsense.
Conclusion
Mock code that is test-code and not production-code is what will get you into trouble if you get too much of it. The facility of Design-by-Contract allows you to greatly minimize the need for mocks and test code. Yes—in Eiffel you will still write test code, but because of how the language-spec, compiler, IDE, and test facilities work, you will end up writing less of it—if you use it thoughtfully and with some smarts!

Unit Testing Database Methods

I am currently working on a C# project which makes use of an SQLite database. For the project I am required to do unit testing, but I was told that unit testing shouldn't involve external files, such as database files, and that the tests should instead emulate the database.
If I have a function that checks whether something exists in the database, how could this sort of method be covered by a unit test?
In general it makes life easier if external files are avoided and everything is done in code. There is no rule that says "shouldn't", and sometimes it just makes more sense to have the external dependency, but only after you have considered how not to have it and understood the trade-offs.
Secondly, what Bryan said is a good option and the one I've used before.
In an application that uses a database, there will be at least one component whose responsibility is to communicate with that database. The unit test for that component could involve a mocked database, but it is perfectly valid (and often desirable) to test the component using a real database. After all, the component is supposed to encapsulate and broker communication with that database -- the unit test should test that. There are numerous strategies to perform such unit tests conveniently -- see the list of related SO questions in the sidebar for examples.
The general prescription to avoid accessing databases in unit tests applies to non-database components. Since non-database components typically outnumber database-related components by a wide margin, the vast majority of unit tests should not involve a database. Indeed, if such non-database components required a database to be tested effectively, there is likely a design problem present -- probably improper separation of concerns.
Thus, the principle that unit tests should avoid databases is generally true, but it is not an absolute rule. It is just a (strong) guideline that aids in structuring complex systems. Following the rule too rigidly makes it very difficult to adequately test "boundary" components that encapsulate external systems -- places in which bugs find it very easy to hide! So, the question that one should really be asking oneself when a unit test demands a database is this: is the component under test legitimately accessing the database directly or should it instead collaborate with another that has that responsibility?
This same reasoning applies to the use of external files and other resources in unit tests as well.
With SQLite, you could use an in-memory database. You can stage your database by inserting data and then run your tests against it.
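As a sketch, assuming the Microsoft.Data.Sqlite ADO.NET provider (the table, column, and method names here are invented for the example), staging and querying an in-memory database entirely inside the test process could look like this:

using Microsoft.Data.Sqlite;

public static class InMemoryDatabaseExample
{
    public static bool UserExists(SqliteConnection connection, string name)
    {
        using (var command = connection.CreateCommand())
        {
            command.CommandText = "SELECT COUNT(*) FROM Users WHERE Name = $name";
            command.Parameters.AddWithValue("$name", name);
            return (long)command.ExecuteScalar() > 0;
        }
    }

    public static void Demo()
    {
        // "Data Source=:memory:" keeps the database in RAM; it disappears when the connection closes.
        using (var connection = new SqliteConnection("Data Source=:memory:"))
        {
            connection.Open();

            // Stage the schema and the rows the test needs.
            using (var setup = connection.CreateCommand())
            {
                setup.CommandText =
                    "CREATE TABLE Users (Name TEXT); INSERT INTO Users (Name) VALUES ('alice');";
                setup.ExecuteNonQuery();
            }

            // Act / assert against the staged data.
            bool exists = UserExists(connection, "alice"); // true
        }
    }
}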
Once databases get involved, it always blurs the line between unit testing and integration testing. Having said that, it is always a nice and very useful test to be able to put something in a database (setup), do your test, and remove it at the end (cleanup). This lets you test one part of your application end to end.
Personally I like to do this in an attribute-driven fashion, by specifying the SQL scripts to run for each test as attributes, like so:
[PreSqlExecute("SetupTestUserDetails.sql")]
[PostSqlExecute("CleanupTestUserDetails.sql")]
public void UpdateUserNameTest()
{
The connection strings come from the app.config as usual, which can even be a symbolic link to the app.config in your main project.
Unfortunately this isn't a standard feature of the MSTest runner that ships with Visual Studio. If you are using PostSharp as your AOP framework, this is easy to do. If not, you can still get the same functionality with the standard MSTest runner by using a .NET feature called "context-bound objects". This lets you inject custom code into the object creation chain to do AOP-like things, as long as your objects inherit from ContextBoundObject.
I did a blog post with more details and a small, complete code sample here.
http://www.chaitanyaonline.net/2011/09/25/improving-integration-tests-in-net-by-using-attributes-to-execute-sql-scripts/
I think it is a really bad idea to have unit tests that depend on database information.
I also think it is a bad idea to use SQLite for unit tests.
You need to test the objects' protocol, so if you need something in your tests, you should create it somewhere in the tests (usually in setUp).
Since it is difficult to remove persistence, the popular way to do it is to use SQLite, but always create what you need inside the unit tests.
Check this link: Unit Tests And Databases. I think it will be more helpful.
It's best to use a mocking framework to mimic the database. For C# there is Entity Framework, whose context can be mocked. Even the use of SQLite is an outside dependency for your code.

Refactoring strategy for the class which generates specific text file

I am a TDD noob and I don't know how to solve the following problem.
I have a pretty large class which generates a text file in a specific format, for import into an external system. I am going to refactor this class, and I want to write unit tests before I do.
What should these tests look like? The main goal is to not break the structure of the file. But does that just mean I should compare the contents of the file before and after?
I think you would benefit from a test that I would hesitate to call a "unit test" - although arguably it tests the current text-file-producing "unit". This would simply run the current code and do a diff between its output and a "golden master" file (which you could generate by running the test once and copying to its designated location). If there is much conditional behavior in the code, you may want to run this with several examples, each a different test case. With the existing code, by definition, all the tests should pass.
Now start to refactor. Extract a method - or better, write a test for a method that you can envision extracting, a true unit test - extract the method, and ensure that all tests, for the new small method and for the bigger system, still pass. Lather, rinse, repeat. The system tests give you a safety net that lets you go forward in the refactoring with confidence; the unit tests drive the design of the new code.
There are libraries available to make this kind of testing easier (although it's pretty easy even without them). See http://approvaltests.sourceforge.net/.
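A bare-bones version of such a golden-master test, written without any helper library (the generator class and file path are placeholders for the real ones), might look like this:

using Microsoft.VisualStudio.TestTools.UnitTesting;
using System.IO;

// Stand-in for the real, much larger class being refactored.
public class ExportFileGenerator
{
    public string Generate(string input) => "HEADER\r\n" + input + "\r\nFOOTER\r\n";
}

[TestClass]
public class ExportFileGoldenMasterTests
{
    [TestMethod]
    public void GeneratedOutput_MatchesApprovedMaster()
    {
        var generator = new ExportFileGenerator();
        string actual = generator.Generate("sample input");

        // The golden master was produced once by the current code,
        // inspected by hand, and checked into source control.
        string expected = File.ReadAllText("ApprovedOutput/standard-case.txt");

        Assert.AreEqual(expected, actual);
    }
}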
In such a case I use the following strategy:
- Write a test for each method (just covering its default behavior, without any error handling etc.).
- Run a code coverage tool and find the blocks not covered by the tests. Write tests covering those blocks.
- Do this until you get code coverage of over 80%.
- Start refactoring the class (mostly extracting smaller classes following the separation-of-concerns principle).
- Use test-driven development for writing the new classes.
Actually, that's a pretty good place to start (comparing a well-known output against what is being generated by the current class).
If the single generator class can produce different results, then create one for each case.
This will ensure that you are not breaking your current generator class.
One thing that might help you is if you have the specification document for the current class. You can use that as the base of your refactoring effort.
If you haven't yet, pick up a copy of Michael Feathers' book "Working Effectively with Legacy Code". It's all about how to add tests to existing code, which is exactly what you're looking for.
But until you finish reading the book, I'd suggest starting with a regression test: create the class, have it write the file to disk, and then compare that file to a "known good" file that you've stashed in your source repository somewhere. If they don't match, fail the test.
Then start looking at the interesting decisions that your class makes. See how you can get them under test. Maybe you extract some complicated if-conditions into public functions that return bool, and you write a battery of tests to prove that, given the right inputs, that function returns the right value. Maybe generation of a particular string has some interesting logic; start testing it.
Along the way, you may find objects that want to get out. For example, you may find that the code (or the tests!) would be simpler if there was a separate class that generates a single line of output. Go with it. You've got your regression test to catch you if you screw anything up.
Work relentlessly to remove dependencies (but make sure you've got a higher-level test, like a regression test, to catch you if you make mistakes). If your class creates its own FileStream and writes to the filesystem, change it to take a TextWriter in its constructor instead, so you can write tests that pass in a StringWriter and never touch the file system. Once that's done, you can get rid of the old test that writes a file to disk (but only if you didn't break it while trying to write the new test!) If your class needs a database connection, refactor until you can write a test that passes in fake data. Etc.
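As an illustration of that last dependency-breaking move (the class and method names here are invented for the sketch, not taken from the answer), the change might look like this:

using System.IO;

public class ReportGenerator
{
    private readonly TextWriter _writer;

    // Before: the class created its own FileStream internally.
    // After: any TextWriter can be passed in, so tests never need the disk.
    public ReportGenerator(TextWriter writer)
    {
        _writer = writer;
    }

    public void WriteHeader(string title)
    {
        _writer.WriteLine("== " + title + " ==");
    }
}

// Production:
//   using (var writer = new StreamWriter("report.txt"))
//       new ReportGenerator(writer).WriteHeader("Monthly export");
//
// Test (no file system involved):
//   var output = new StringWriter();
//   new ReportGenerator(output).WriteHeader("Monthly export");
//   // output.ToString() now contains "== Monthly export ==" plus a newline,
//   // which can be asserted directly.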
