My first question on Stack Overflow, and I'm hoping it's not a stupid one. :)
Basically, all my data-driven tests look like this one:
[TestMethod]
[DataSource("TestsDataSource")]
public void Test_With_Some_Fancy_Name()
{
if (myIsThisTestIgnored) return;
...
DataSource attribute + app.config connection string + Excel ODBC driver = Excel data-driven tests.
myIsThisTestIgnored is set in the [TestInitialize] method (for every data row there is a separate test run with a separate [TestInitialize]) via TestContext.DataRow["Ignore"], so I can "ignore" tests via a true/false value in an Excel sheet.
Now my problem is: just returning from a test method lets the test pass. How can I keep the ignored tests from "passing" in the MSTest test runner (and in CI via MSBuild), or better still, not show them at all?
On our CI build, every ignored (a.k.a. passed) test gets summed up, since the total test count is data rows x number of test methods, giving a faulty impression of how many tests actually ran. But so far I have found no way to programmatically set the test result to "ignored" (or any state other than "passed"/"failed").
Besides, getting rid of this annoying ignore line in every test, and enhancing the assert messages without wrapping every assert in some function, would be nice too.
So the best (or only) approach, I thought, would be AOP-style programming using custom attributes. I found this old post about enhancing MSTest code with custom attributes, a good read: http://callumhibbert.blogspot.com/2008/01/extending-mstest.html
Specifically, the last comment on the linked article got me thinking: the commenter not only made my day by showing how to expose all test state data to the attribute (the MarshalObject stuff), he somehow got it working, quoting him: "In this manner, I can determine whether or not to run a test based on a matrix of data driven information from the text context (using the DataSource attribute), and the new custom attributes I created." Yeah... but how to do that?
I took some of the code and implemented it, ending up with something like this:
public class MyCustomAspect : TestAspect<MyCustomAttribute>, IMessageSink, ITestAspect
{
    // ...

    [SecurityPermission(SecurityAction.LinkDemand, Flags = SecurityPermissionFlag.Infrastructure)]
    public IMessage SyncProcessMessage(IMessage msg)
    {
        // ...

        // see the last comment in the linked article for how to get this MarshalObject
        MyTestsClass testClass = (MyTestsClass)MarshalObject;
        if (testClass.myTestInitializeRan && testClass.myIsThisTestIgnored)
        {
            // What to do here? I would like to skip the SyncProcessMessage invocation, but...
            // - returning msg or null -> Exception "The method was called with a Message of an unexpected type."
            // - constructing an IMessage in an easy way (specifically CallMessage) seems not possible, and I
            //   have no clue how to construct a valid CallMessage that says "ignore test!"
        }
        else
        {
            IMessage message = _nextSink.SyncProcessMessage(msg);
            return message;
        }
    }
}
Again: I want to suppress the test run for this data row, or somehow set the test result to "ignored"/"unknown".
Doing anything with the returned message or msg seems pointless: the IMessage interface only exposes a Properties dictionary, which contains little more than a MethodName and similar entries, nothing interesting it seems.
Casting IMessage to concrete implementations doesn't work either: they are all internal to System.Runtime. One can cast to ResultMessage, though, and it returns something != null when an exception was raised (so you can take the exception message and adjust the error message via a new throw).
Any Ideas? And many thanks for reading so far. ;)
How to not let the ignored tests "pass" in the MSTEST testrunner (and in the CI via msbuild), or even better, do not show them at all?
You could use Assert.Fail or Assert.Inconclusive instead of simply returning from the test if you do not want skipped tests to be counted as passed.
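A hedged sketch of that, reusing the names from the question (myIsThisTestIgnored is the flag set in [TestInitialize]):

```csharp
// Sketch only: attribute values and the flag name are taken from the question.
[TestMethod]
[DataSource("TestsDataSource")]
public void Test_With_Some_Fancy_Name()
{
    if (myIsThisTestIgnored)
    {
        // Throws AssertInconclusiveException: the row is reported as
        // "Inconclusive" instead of "Passed" in the test results.
        Assert.Inconclusive("Row is flagged as ignored in the Excel sheet.");
    }

    // ... the actual test body ...
}
```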
You may also find "Data driven tests with test attributes in MSTest" useful. The author shows how to use PostSharp to create parameterised row tests, so that instead of driving the data from Excel you supply the parameters in an attribute above the test method. When you want to remove or ignore a test case, you simply comment out that data row and it is not shown at all.
Related
What is the best way to unit test a method that doesn't return anything? Specifically in C#.
What I am really trying to test is a method that takes a log file and parses it for specific strings. The strings are then inserted into a database. Nothing that hasn't been done before, but being VERY new to TDD I am wondering whether it is possible to test this, or whether it is something that doesn't really get tested.
If a method doesn't return anything, it's one of the following:
imperative - you're asking the object to do something to itself, e.g. change state (without expecting any confirmation; it's assumed that it will be done)
informational - just notifying someone that something happened (without expecting action or response).
Imperative methods - you can verify whether the task was actually performed, i.e. whether the state change actually took place. E.g.
void DeductFromBalance(decimal dAmount)
can be tested by verifying that the balance after this call is indeed less than the initial value by dAmount.
Informational methods are rare as members of an object's public interface, and hence not normally unit tested. However, if you must, you can verify that the handling triggered by a notification takes place. E.g.
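An illustrative sketch of that state-based check; the Account class and its members are invented here, not part of the original answer:

```csharp
// Assumed Account class with a Balance property and the method under test.
[TestMethod]
public void DeductFromBalance_ReducesBalanceByAmount()
{
    var account = new Account(initialBalance: 100m);

    account.DeductFromBalance(30m);

    // The void method is verified via the state change it causes.
    Assert.AreEqual(70m, account.Balance);
}
```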
void OnAccountDebit( dAmount ) // emails account holder with info
can be tested by verifying if the email is being sent
Post more details about your actual method and people will be able to answer better.
Update: Your method is doing 2 things. I'd actually split it into two methods that can now be independently tested.
string[] ExamineLogFileForX( string sFileName );
void InsertStringsIntoDatabase( string[] strings );
The string[] can easily be verified by providing the first method with a dummy file and the expected strings. The second one is slightly trickier: you can either use a mock (google or search Stack Overflow for mocking frameworks) to mimic the DB, or hit the actual DB and verify that the strings were inserted in the right location. Check this thread for some good books; I'd recommend Pragmatic Unit Testing if you're in a crunch.
In the code it would be used like
InsertStringsIntoDatabase( ExamineLogFileForX( @"c:\OMG.log" ) );
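A test for the first half might then be sketched like this; the dummy file name and expected strings are assumptions made for illustration:

```csharp
// Sketch: the log file content and expected matches are made up.
[TestMethod]
public void ExamineLogFileForX_ReturnsExpectedStrings()
{
    // A small dummy file checked in next to the test project.
    string[] result = ExamineLogFileForX(@"TestData\dummy.log");

    CollectionAssert.AreEqual(
        new[] { "ERROR: disk full", "ERROR: timeout" },
        result);
}
```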
Test its side-effects. This includes:
Does it throw any exceptions? (If it should, check that it does. If it shouldn't, try some corner cases which might throw if you're not careful - null arguments being the most obvious thing.)
Does it play nicely with its parameters? (If they're mutable, does it mutate them when it shouldn't and vice versa?)
Does it have the right effect on the state of the object/type you're calling it on?
Of course, there's a limit to how much you can test. You generally can't test with every possible input, for example. Test pragmatically - enough to give you confidence that your code is designed appropriately and implemented correctly, and enough to act as supplemental documentation for what a caller might expect.
As always: test what the method is supposed to do!
Should it change global state (uuh, code smell!) somewhere?
Should it call into an interface?
Should it throw an exception when called with the wrong parameters?
Should it throw no exception when called with the right parameters?
Should it ...?
Try this:
[TestMethod]
public void TestSomething()
{
    try
    {
        YourMethodCall();
        Assert.IsTrue(true);
    }
    catch
    {
        Assert.IsTrue(false);
    }
}
Void return types / subroutines are old news. I haven't written a void return type (unless I was being extremely lazy) in about 8 years (from the time of this answer, so just a bit before this question was asked).
Instead of a method like:
public void SendEmailToCustomer()
Make a method that follows Microsoft's int.TryParse() paradigm:
public bool TrySendEmailToCustomer()
Maybe there isn't any information your method needs to return for long-term usage, but returning the state of the method after it performs its job is of huge use to the caller.
Also, bool isn't the only state type. There are a number of cases where a previously void subroutine could actually return three or more different states (Good, Normal, Bad, etc.). In those cases, you'd just use
public StateEnum TrySendEmailToCustomer()
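For illustration, a three-state version might be sketched like this; the enum name and its values are invented, not prescribed by the answer:

```csharp
// Illustrative only: names are assumptions.
public enum SendState
{
    Good,    // sent without issues
    Normal,  // sent, but with warnings (e.g. queued for retry)
    Bad      // could not be sent
}

public SendState TrySendEmailToCustomer()
{
    // ... attempt the send and map the outcome onto a state ...
    return SendState.Good;
}
```

The caller can then branch on the returned state instead of getting nothing back from a void method.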
However, while the Try-paradigm somewhat answers the question of how to test a void return, there are other considerations too. For example, during/after a "TDD" cycle you would be refactoring and might notice you are doing two things with your method, thus breaking the Single Responsibility Principle - so that should be taken care of first. Second, you might have identified a dependency: you're touching "persistent" data.
If you are doing the data access inside the method in question, you need to refactor towards an n-tier or n-layer architecture. But we can assume that when you say "The strings are then inserted into a database", you actually mean you're calling a business logic layer or something. Ya, we'll assume that.
When your object is instantiated, you now understand that your object has dependencies. This is when you need to decide if you are going to do Dependency Injection on the Object, or on the Method. That means your Constructor or the method-in-question needs a new Parameter:
public <Constructor/MethodName> (IBusinessDataEtc otherLayerOrTierObject, string[] stuffToInsert)
Now that you can accept an interface of your business/data tier object, you can mock it out during Unit Tests and have no dependencies or fear of "Accidental" integration testing.
So in your live code, you pass in a REAL IBusinessDataEtc object. But in your Unit Testing, you pass in a MOCK IBusinessDataEtc object. In that Mock, you can include Non-Interface Properties like int XMethodWasCalledCount or something whose state(s) are updated when the interface methods are called.
So your Unit Test will go through your Method(s)-In-Question, perform whatever logic they have, and call one or two, or a selected set of methods in your IBusinessDataEtc object. When you do your Assertions at the end of your Unit Test you have a couple of things to test now.
The State of the "Subroutine" which is now a Try-Paradigm method.
The State of your Mock IBusinessDataEtc object.
For more information on dependency injection at the construction level as it pertains to unit testing, look into the Builder design pattern. It adds one more interface and class for each current interface/class you have, but they are very small and provide huge gains for unit testing.
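The mock-counting idea described above might be sketched as follows; the shape of the IBusinessDataEtc interface and the LogParser names in the usage comments are assumptions made for illustration:

```csharp
// Assumed shape of the business/data-tier interface.
public interface IBusinessDataEtc
{
    void InsertStrings(string[] stuffToInsert);
}

// Hand-rolled mock with a non-interface counter, as described above.
public class MockBusinessData : IBusinessDataEtc
{
    public int XMethodWasCalledCount { get; private set; }

    public void InsertStrings(string[] stuffToInsert)
    {
        XMethodWasCalledCount++;
    }
}

// In a unit test:
//   var mock = new MockBusinessData();
//   var sut = new LogParser(mock);                  // constructor injection
//   bool ok = sut.TryParseAndInsert("dummy.log");   // Try-paradigm method
//   Assert.IsTrue(ok);                              // 1) state of the method
//   Assert.AreEqual(1, mock.XMethodWasCalledCount); // 2) state of the mock
```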
You can even try it this way:
[TestMethod]
public void ReadFiles()
{
    try
    {
        Read();
        return; // indicates success
    }
    catch (Exception ex)
    {
        Assert.Fail(ex.Message);
    }
}
It will have some effect on an object... query for the result of that effect. If it has no visible effect, it's not worth unit testing!
Presumably the method does something, and doesn't simply return?
Assuming this is the case, then:
If it modifies the state of its owner object, then you should test that the state changed correctly.
If it takes in some object as a parameter and modifies that object, then you should test that the object is correctly modified.
If it throws exceptions in certain cases, test that those exceptions are correctly thrown.
If its behaviour varies based on the state of its own object, or some other object, preset that state and test that the method has the correct behaviour (through one of the three approaches above).
If you let us know what the method does, I could be more specific.
Use Rhino Mocks to set what calls, actions and exceptions are expected, assuming you can mock or stub out parts of your method. It's hard to know without some specifics about the method, or even its context.
Depends on what it's doing. If it has parameters, pass in mocks that you could ask later on if they have been called with the right set of parameters.
Whatever instance you are using to call the void method, you can just use Verify.
For example, in my case _log is the instance and LogMessage is the method to be tested:
try
{
    this._log.Verify(
        x => x.LogMessage(Logger.WillisLogLevel.Info, Logger.WillisLogger.Usage, "Created the Student with name as"),
        "Failure");
}
catch (Exception ex)
{
    Assert.IsFalse(ex is Moq.MockException);
}
If the Verify throws an exception because the method was not called as expected, the test will fail.
The question is whether it is bad practice to have a method in your application project whose only purpose is to generate data for your test project.
I have a unit test that I am using to do a cursory exam to ensure that all valid inputs will run through the primary method of my application without any errors at all. I essentially run a method to pull every valid input out of the database and then run each of those through the primary method of the application. If it fails, a bool is set to false.
A sample of the code I am using is below. The question is whether there is a better way to do this that won't require adding anything to the application code. The method below requires a method (TestMethod) in the application project that pulls all valid parameters, so they can be run through the primary method (CheckAvailability) from the test project.
public void SomeUnitTest()
{
    Availability availability = new Availability();
    List<TestParam> paramList = new List<TestParam>();
    bool success = true;
    bool expected = true;

    // This method pulls every valid param from my database.
    paramList = availability.TestMethod();

    // This foreach loop runs each of those valid params through another method.
    // If there is an error, success is set to false; otherwise it remains true.
    foreach (TestParam s in paramList)
    {
        try
        {
            InputWrapper wrapper = new InputWrapper();
            wrapper.ApplicationName = s.APPname;
            wrapper.Location = s.APPLocation;
            availability.CheckAvailability(wrapper);
        }
        catch (Exception)
        {
            success = false;
        }

        // I then assert that success remains true. If it is false, the method failed.
        Assert.AreEqual(expected, success);
    }
}
You seem to be under the impression that a datasource for tests is a questionable thing.
Let me start off by saying that this is not the case at all. But (there's always a but) you should keep in mind that a good unit test is very easy to write (and, maybe more importantly, to read), so ideally you should have as few layers in your unit tests as possible.
This brings you to a predicament: do I make sure that anyone who reads my test can just look at the test method and knows everything that will be put in motion, or do I add some layers to keep the tests clean but also add complexity?
As is the case with many things: you will have to compromise. I would argue that you could go up to 2 or 3 layers of complexity in your unit tests, but it definitely shouldn't be more than that.
In your example this would mean that we can add a layer of complexity by extracting our testing data to keep it separated from the actual tests.
This will not be that big a burden on understanding the tests, but it will make them clearer to write and maintain.
Another aspect of your question that raised a few eyebrows: you're talking about having the test data in a database.
This is not your production database, is it? If it is: stop testing against live data. You need absolute control over your tests to ensure that no test data is changed and that the environment doesn't change without you knowing (aside from the potential interference with the actual production data).
There is no need for the boolean variable either: a thrown exception will automatically fail the test.
I go into more detail on these things here and here, please read through them and feel free to ask any follow up questions.
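Applied to the question's code, a sketch without the flag might look like this; TestData.ValidParams() stands in for an assumed test-only data source, separated from the live database:

```csharp
[TestMethod]
public void CheckAvailability_AcceptsAllValidParams()
{
    var availability = new Availability();

    // Assumed test-only source instead of pulling from the live database.
    foreach (TestParam s in TestData.ValidParams())
    {
        var wrapper = new InputWrapper
        {
            ApplicationName = s.APPname,
            Location = s.APPLocation
        };

        // No try/catch, no bool: an unhandled exception fails the test by itself.
        availability.CheckAvailability(wrapper);
    }
}
```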
Personally, I don't believe it is. The purpose of unit testing is to evaluate whether your units of functionality are fit for their purpose. To that end, it's not important where your data is coming from unless that is the responsibility of the method you are trying to test.
That being said, I would lean more towards including a set of sample data in the project to read from, since it allows you to modify the test data you're using more readily.
I have a class whose sole purpose is to run a method on other classes through an interface.
Testing the interface of the classes is no problem, but the runner doesn't actually DO anything, and (as it stands) the only parameter passed into the constructor is kept private.
In my case, the classes are importing text files into a database.
internal class DataImporter
{
    private List<IFileImporter> _importers;

    public DataImporter(List<IFileImporter> importers)
    {
        _importers = importers;
    }

    public bool RunImporters()
    {
        // Foreach importer, call its "Run" method - each one then does whatever it needs to do.
        // However, this need not call a specific "Run" method on IFileImporter;
        // I have another app that uses IFileImporter to check for the presence of a file first,
        // then lets the user choose whether to import or not.
    }
}
It seems to me there is nothing to test here. I can't test the value of _importers, and I don't want to make it public JUST for the sake of testing. DataImporter is specific to this instance, so creating an interface seems to add no benefit.
I've re-used IFileImporter elsewhere, but this is the only "bulk" importer; the others are called manually from a WinForms app, and still others are not in this project at all.
So, do I need to test this...what CAN I test about this?
In a nutshell, yes. I can think of a number of tests right off the top of my head.
A test that ensures all importers are called, by mocking out the IFileImporters you pass in the constructor. At a minimum it asserts that what you pass in the constructor is actually used by the method.
A test to ensure that if any importer raises an exception, your class behaves as you expect.
A test asserting the behaviour when the list is empty (default to returning true?).
A test asserting the behaviour you expect if one or more importers fail (are you and-ing or or-ing the importer results into your RunImporters result?).
Does the method run every importer regardless of whether a previous one fails, or does it return false on the first failure?
There should also be a test on your constructor, or an assertion, for when it is given a null list.
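The first of those tests might be sketched with Moq (any mocking library works); note that a Run() method on IFileImporter is an assumption here, since the question deliberately leaves the interface's shape open:

```csharp
[TestMethod]
public void RunImporters_CallsEveryImporter()
{
    // Assumption: IFileImporter exposes a Run() method.
    var first = new Mock<IFileImporter>();
    var second = new Mock<IFileImporter>();

    var sut = new DataImporter(new List<IFileImporter> { first.Object, second.Object });

    sut.RunImporters();

    // Asserts that what was passed to the constructor is actually used.
    first.Verify(i => i.Run(), Times.Once());
    second.Verify(i => i.Run(), Times.Once());
}
```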
You can assert that the methods have been called. You can refer to these questions to see how that's done:
How to assert if a method was called within another method in RhinoMocks?
Rhino Mocks - How to assert a mocked method was called n-times?
You can also assert that no exception was encountered within the DataImporter class.
If you have absolute control over the interface used it might be helpful to have each one of them return a boolean indicating whether the specific operation was successful or not.
Your method returns bool, and that's where you start. For example:
What happens when there are no importers? Example: Run_ReturnsFalse_WhenThereIsNoImporters
What happens when one of them breaks (or throws)? Example: Run_ReturnsFalse_WhenAtLeastOneImporterFails
What is indicating success (returning true, I suppose)? Example: Run_ReturnsTrue_WhenAllImportersSucceed
You set up your mocks to simulate each of those scenarios and test against that. When there are truly no observable effects (like a returned value), you'll do it by verifying that calls were made on the mocked objects.
I have a small web application which does some simple data IO via a GUI front end.
So when the user hits 'save', the data is saved. On a runtime exception, I catch the exception, log it, and display a 'sorry etc.' label.
As this code normally does not fail - and especially not when I want it to - there is no way for me to see whether my 'sorry' label shows up and the 'successfully saved' label is hidden.
Is there a nice solution for this?
The only way I could think of is to create a stub which throws an exception, and via IoC load that stub when I want to check the 'fail' scenarios.
This creates some extra work, and does not help me when I first want to test the failure of step 1, and then the success of step 1 and the failure of step 2.
I now do it manually by inserting (and deleting) some throw-exception statements, but that of course isn't very reliable.
What you should do is decouple the logic you are executing from the front end as much as possible. Then you can write a test which checks that the logic is executed properly, independently of the UI.
You could start by writing a class which takes an object representing the data to be saved and an interface representing the dependency which will do the saving (and perhaps an interface representing the logging dependency). Then you can write tests which pass in a mock dependency configured to throw exceptions at the correct points, plus invalid data whose validation you can check. You could also verify that the logging was performed as you expected. This class could return an object which represents the success or otherwise of the operation, and you could show the message or not based on that returned object. If you are using MVC, you could instead verify that the correct view is returned in the case of an error; if not, you are a bit stuck with the UI logic as far as testing goes, and your only real option is to make the logic there as simple as possible.
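A minimal sketch of that decoupling; the interface and class names here are invented for illustration:

```csharp
// Assumed saving dependency, to be mocked or stubbed in tests.
public interface IDataSaver
{
    void Save(string data);
}

public class SaveHandler
{
    private readonly IDataSaver _saver;

    public SaveHandler(IDataSaver saver)
    {
        _saver = saver;
    }

    // Returns true on success, false on failure; the UI only has to map
    // this value onto the "successfully saved" or "sorry" label.
    public bool TrySave(string data)
    {
        try
        {
            _saver.Save(data);
            return true;
        }
        catch (Exception)
        {
            // log the exception here ...
            return false;
        }
    }
}
```

In the failure test you pass in a stub whose Save throws and assert that TrySave returns false, with no UI involved at all.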
It is indeed possible to write a test which verifies that a specific error was thrown. Look at http://www.nunit.org/index.php?p=exception&r=2.4 - the MSTest equivalent is ExpectedExceptionAttribute, if that's your testing framework :o)
Use RhinoMocks and code something like this:
[Test]
public void Test()
{
    // Mocking is initialized
    MockRepository mockRepository = new MockRepository();

    // Stub of the interface is created
    ISomeClass someClass = mockRepository.Stub<ISomeClass>();

    // Set up the method call to result in an exception being thrown
    someClass.Stub(x => x.DoSomeThing()).Throw(new Exception());

    // Mock is "activated"
    mockRepository.ReplayAll();

    // Here the method is called and it does indeed throw an exception
    Assert.Throws<Exception>(someClass.DoSomeThing);
}
How might I go about writing a unit test which will test the ability of a method to add a record to a database? Currently, I have the following:
[TestMethod]
public void AddUserTest()
{
Boolean expected = true;
Boolean result = UserManager.AddUser(test);
Assert.AreEqual(expected, result);
}
This works fine if I'm only testing the ability to add a record to the database (without worrying whether the record already exists). However, I'm not sure how to write the test so that it still passes if the submission fails due to a pre-existing record.
If it makes a difference, I'm using LINQ to SQL for my database transactions. From what I could gather in the MSDN Documentation, DataContext.SubmitChanges() has no return value, so I'm also unsure how to determine if a particular transaction was successful.
I'll keep looking through the documentation. Perhaps DataContext.SubmitChanges() throws an exception upon record conflict or other failure that I could be catching in the unit test?
As soon as an external agent is involved (such as the file system, a database, etc.) it's really an integration test.
Your AddUserTest above is flawed: the AddUser method could return true without adding anything, or could add it incorrectly and still return true! In that case, you have not accurately tested anything.
Write an integration test that adds data to the database, then retrieves it and compares the two sets of values for equality.
I agree with Mitch (this is really an integration test).
To handle your scenario you can add an ExpectedException attribute to the test. This means that if you receive the expected exception, the test still passes.
[TestMethod()]
[ExpectedException(typeof(System.Data.Linq.DuplicateKeyException))]
public void MyTest()
{
....
}
If you want a custom failure message for when the expected exception is not thrown, there is a useful overload that allows you to specify it (see http://msdn.microsoft.com/en-US/library/ms243315.aspx)
public ExpectedExceptionAttribute(
Type exceptionType,
string noExceptionMessage
)
which would be used like this:
[ExpectedException(typeof(DuplicateKeyException), "Something went wrong")]
If you start a transaction before the test runs, and abort it after the test runs, you will never have pre-existing data rows.
Use [TestInitialize] and [TestCleanup] for before and after each run, respectively.
See http://msdn.microsoft.com/en-us/library/system.transactions.transactionscope.aspx
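A sketch of that pattern, assuming a reference to System.Transactions; LINQ to SQL connections enlist in the ambient transaction automatically:

```csharp
// Requires: using System.Transactions;

private TransactionScope _scope;

[TestInitialize]
public void StartTransaction()
{
    // Everything the test inserts happens inside this ambient transaction.
    _scope = new TransactionScope();
}

[TestCleanup]
public void RollbackTransaction()
{
    // Disposing without calling Complete() rolls the transaction back,
    // so no test rows survive into the next run.
    _scope.Dispose();
}
```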
You'll get a primary key violation exception thrown at you if the record already exists. You could avoid the exception by changing UserManager.AddUser to check for this case explicitly, by selecting the record from the database before the insert (see LINQ's Any() or SingleOrDefault()). If a record comes back, explicitly return false.
Or wrap all the code in AddUser in a try/catch block. Return false in the catch.
Heck, do both.
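A sketch of the "do both" version; the data context, table, and property names are assumptions about the question's model, not something the question specifies:

```csharp
public bool AddUser(User user)
{
    try
    {
        using (var db = new MyDataContext())   // assumed LINQ to SQL context
        {
            // Explicit pre-check: bail out if the record already exists.
            if (db.Users.Any(u => u.UserName == user.UserName))
                return false;

            db.Users.InsertOnSubmit(user);
            db.SubmitChanges();
            return true;
        }
    }
    catch (Exception)
    {
        // Belt and braces: a key violation (or anything else) also returns false.
        return false;
    }
}
```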
Also, why not check the test database to see whether the record was actually inserted? I wouldn't take the method under test's word for success/failure - shouldn't you verify this?
Unit testing with a real database is not encouraged by unit testing purists. We actually unit test most of our DB layer's LINQ to SQL code with a mock database. Google "IRepository linq to sql unit test" for a bunch of results. I took a bunch of ideas from the slew of blog posts that search returns and built a system that has worked out surprisingly well (though not without some challenges).