I'm attempting to learn TDD but having a hard time getting my head around what / how to test with a little app I need to write.
The (simplified somewhat) spec for the app is as follows:
It needs to take from the user the location of a csv file, the location of a Word document mail-merge template, and an output location.
The app will then read the csv file and for each row, merge the data with the word template and output to the folder specified.
Just to be clear, I'm not asking how I would go about coding such an app, as I'm confident that I know how to do it if I just went ahead and started. But if I wanted to do it using TDD, some guidance on the tests to write would be appreciated, as I'm guessing I don't want to be testing reading a real csv file, or testing the 3rd-party component that does the merge or converts to PDF.
I think just some general TDD guidance would be a great help!
I'd start out by thinking of scenarios for each step of your program, starting with failure cases and their expected behavior:
User provides a null csv file location (throws an ArgumentNullException).
User provides an empty csv file location (throws an ArgumentException).
The csv file specified by the user doesn't exist (whatever you think is appropriate).
Next, write a test for each of those scenarios and make sure it fails. Next, write just enough code to make the test pass. That's pretty easy for some of these conditions, because the code that makes your test pass is often the final code:
public class Merger {
public void Merge(string csvPath, string templatePath, string outputPath) {
if (csvPath == null) { throw new ArgumentNullException("csvPath"); }
}
}
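For reference, the test that drives that first bit of code could look something like this (a minimal NUnit sketch; Assert.Throws is available in NUnit 2.5 and later):
[Test]
public void Merge_Throws_When_CsvPath_Is_Null() {
    var merger = new Merger();
    Assert.Throws<ArgumentNullException>(
        () => merger.Merge(null, "templatePath", "outputPath"));
}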
After that, move into standard scenarios:
The specified csv file has one line (merge should be called once, output written to the expected location).
The specified csv file has two lines (merge should be called twice, output written to the expected location).
The output file's name conforms to your expectations (whatever those are).
And so on. Once you get to this second phase, you'll start to identify behavior you want to stub and mock. For example, checking whether a file exists or not - .NET doesn't make it easy to stub this, so you'll probably need to create an adapter interface and class that will let you isolate your program from the actual file system (to say nothing of actual CSV files and mail-merge templates). There are other techniques available, but this method is fairly standard:
public interface IFileFinder { bool FileExists(string path); }
// Concrete implementation to use in production
public class FileFinder: IFileFinder {
public bool FileExists(string path) { return File.Exists(path); }
}
public class Merger {
IFileFinder finder;
public Merger(IFileFinder finder) { this.finder = finder; }
}
In tests, you'll pass in a stub implementation:
[Test]
[ExpectedException(typeof(FileNotFoundException))]
public void Fails_When_Csv_File_Does_Not_Exist() {
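    // "mockery" is assumed to be an NMock2 Mockery instance created in the fixture's setup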
IFileFinder finder = mockery.NewMock<IFileFinder>();
Merger merger = new Merger(finder);
Stub.On(finder).Method("FileExists").Will(Return.Value(false));
merger.Merge("csvPath", "templatePath", "outputPath");
}
Simple general guidance:
You write unit tests first. At the beginning they all fail.
Then you go into the class under test and write code until the tests related to each method pass.
Do this for every public method of your types.
By writing unit tests you actually specify the requirements, just in another form: easy-to-read code.
Looking at it from another angle: when you receive a new black-boxed class and the unit tests for it, you can read the unit tests to see what the class does and how it behaves.
To read more about unit testing I recommend a very good book: The Art of Unit Testing.
Here are a couple links to articles on StackOverflow regarding TDD for more details and examples:
Link1
Link2
To be able to unit test you need to decouple the class from any dependencies so you can effectively just test the class itself.
To do this you'll need to inject any dependencies into the class. You would typically do this by passing in an object that implements the dependency interface, into your class in the constructor.
Mocking frameworks are used to create a mock instance of your dependency that your class can call during the test. You define the mock to behave in the same way as your dependency would and then verify its state at the end of the test.
I would recommend having a play with Rhino Mocks and going through the examples in the documentation to get a feel for how this works.
http://ayende.com/projects/rhino-mocks.aspx
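For example, stubbing the IFileFinder from the answer above with Rhino Mocks might look roughly like this (just a sketch, using the AAA syntax from Rhino Mocks 3.5+):
// Arrange: generate a stub and tell it how to behave
var finder = MockRepository.GenerateStub<IFileFinder>();
finder.Stub(f => f.FileExists(Arg<string>.Is.Anything)).Return(false);

// Act / Assert: the class under test never touches the real file system
var merger = new Merger(finder);
Assert.Throws<FileNotFoundException>(
    () => merger.Merge("data.csv", "template.docx", @"C:\output"));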
Related
I have written code for uploading a file to cloud storage. The code consists of a POST method which takes a file in multipart/form-data format and uploads it to the cloud. How can I write a test case for this? To be specific, how do I mock that posted file?
The parameter type used is IFormFile.
The code is in ASP.NET Core 3.1 using C#.
I suggest using an abstraction here. Create an interface like the one below:
public interface IFileSystem
{
    bool UploadFile(Stream fileContent);
}
Then write your cloud-specific implementation of that interface, and have your business logic depend on the interface rather than the concrete class, so that when you write tests for the business logic you can mock IFileSystem.
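As a rough sketch of the test side (assuming Moq and xUnit-style asserts, plus the IFileSystem interface above; UploadService is just an illustrative name for your business-logic class), the posted file itself can be faked with FormFile, the framework's built-in IFormFile implementation:
// Fake the uploaded file around an in-memory stream (no real HTTP request needed)
var content = new MemoryStream(Encoding.UTF8.GetBytes("hello"));
IFormFile file = new FormFile(content, 0, content.Length, "file", "test.txt");

// Stub the cloud abstraction so no real upload happens
var fileSystem = new Mock<IFileSystem>();
fileSystem.Setup(fs => fs.UploadFile(It.IsAny<Stream>())).Returns(true);

// Exercise the business logic (UploadService is hypothetical)
var service = new UploadService(fileSystem.Object);
var result = service.Upload(file);

Assert.True(result);
fileSystem.Verify(fs => fs.UploadFile(It.IsAny<Stream>()), Times.Once);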
In the companies where I've worked, mocking has never really been seen in a good light.
Usually writing a test implementation works better:
public interface IUploader
{
IResult Upload(string filePath);
}
public sealed class FileUploader : IUploader
{
    public IResult Upload(string filePath) { ... } // method is implemented as expected
}
public sealed class TestUploader : IUploader // this is placed in a specific folder for test implementations
{
    public IResult ExpectedResult { get; set; } = Result.Success();
    public IResult Upload(string filePath) => ExpectedResult;
}
To connect it with your class check the Humble Object Design Pattern. It explains how to extract testable logic from apparently untestable classes.
void UploadToCloud(string filePath)
{
...
Uploader.Upload(filePath); // real implementation in production code, test implementation during unit testing.
...
}
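A test using the test implementation could then look roughly like this (a sketch; MyService and Result.Failure are illustrative names standing in for your own types):
[Test]
public void UploadToCloud_Handles_A_Failed_Upload()
{
    // Inject the test implementation instead of the real FileUploader
    var uploader = new TestUploader { ExpectedResult = Result.Failure("network down") };
    var service = new MyService(uploader);

    service.UploadToCloud(@"C:\temp\file.txt");

    // Assert on whatever MyService is supposed to do when the upload fails
}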
I'd prefer going this way for multiple reasons:
A change in the production code forces you to change the test implementation as well. With a mock you risk not noticing that your tests have become obsolete.
A test implementation gives you more freedom (i.e. you can pass the test result to the test implementation in the initialization part of the test, to check edge cases).
It is cleaner than mocks and other developers can understand better what is going on. The tidier the code, the easier it is to keep it clean.
You still have some constraints given by the interface. Sometimes mock classes just become huge monsters that no longer look like the original class.
NOTE: If you want to test the receiving part, you can use the same strategy. Your test implementation will contain the expected content of the file, to test the cloud response.
I have a website built in ASP.NET MVC that uses an XML file as the backbone for its menu system. The nodes in that XML file list the menu items, their display text, and the controller name and action method that is called. I came up with an idea that if I could call a controller method with the controller name and action method name I could programmatically test all of the methods to see if they throw errors. I don't want to display the page in the browser; just run the controller method and any errors will be logged in my database. I would like to do it with something like this code from a Test model class, but I can't get this to compile yet. The RenderAction is a method that is working in my View pages, but the model says that method doesn't exist. Can somebody guide me how to get this to work?
HtmlHelper oHtmlHelper = new HtmlHelper(oViewContext, oViewDataContainer);
oHtmlHelper.RenderAction(sController, sAction);
The whole point of testing is to drive you to better, more maintainable code. Part of that is unit testing, which by nature tests discrete units of functionality. What you have here is a situation that is absolutely begging to be refactored. When things are difficult to test, that's a sure sign that your design is flawed.
Instead of having all this logic in an action, and then attempting to test the whole action-rendering logic to determine whether it's working or not, break the logic out into a helper class. Then you should be able to easily test whether a method in that class returns a good result.
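As an illustration of that refactoring (all names here are invented, and System.Xml.Linq is assumed), the menu-loading logic could live in a small class that knows nothing about MVC, which makes it trivial to test against an in-memory XML string:
public class MenuItem
{
    public string Text { get; set; }
    public string Controller { get; set; }
    public string Action { get; set; }
}

public class XmlMenuReader
{
    // Parses the menu XML into plain objects; no controllers or views involved
    public IEnumerable<MenuItem> Read(XDocument doc)
    {
        return doc.Descendants("item").Select(x => new MenuItem
        {
            Text = (string)x.Attribute("text"),
            Controller = (string)x.Attribute("controller"),
            Action = (string)x.Attribute("action")
        });
    }
}

[TestMethod]
public void Reads_Controller_And_Action_From_Menu_Xml()
{
    var doc = XDocument.Parse(
        "<menu><item text='Home' controller='Home' action='Index' /></menu>");

    var items = new XmlMenuReader().Read(doc).ToList();

    Assert.AreEqual("Home", items[0].Controller);
    Assert.AreEqual("Index", items[0].Action);
}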
I'm not sure what you mean by "complex unit testing structure", as unit tests are inherently simple and pretty much everything you need is baked into .NET (although better options often exist as third-party libraries). Regardless, all you need to get started unit testing is to create a unit test project, add a reference to the project you want to test, and then add a simple class like:
[TestClass]
public class MyAwesomeTests
{
[TestMethod]
public void TestSomething()
{
...
}
}
The classical solution if you have a concrete implementation of a dependency in a class you want to test is: add a layer of indirection where you have full control.
So it comes to adding more and more of indirection layers (interfaces that can be stubs/mocks in unit tests).
But: somewhere in my tests I must have the "real" implementation of my dependency. So what about this? Test? Don't test?
Take this example:
I had some dependencies on paths that I need in my application. So I extracted an interface, IPathProvider (that I can fake in my tests). Here is the implementation:
public interface IPathProvider
{
string AppInstallationFolder { get; }
string SystemCommonApplicationDataFolder { get; }
}
The concrete implementation PathProvider looks like this:
public class PathProvider : IPathProvider
{
private string _appInstallationFolder;
private string _systemCommonApplicationDataFolder;
public string AppInstallationFolder
{
get
{
if (_appInstallationFolder == null)
{
try
{
_appInstallationFolder =
Path.GetDirectoryName(Assembly.GetEntryAssembly().Location);
}
catch (Exception ex)
{
throw new MyException("Error reading Application-Installation-Path.", ex);
}
}
return _appInstallationFolder;
}
}
public string SystemCommonApplicationDataFolder
{
get
{
if (_systemCommonApplicationDataFolder == null)
{
try
{
_systemCommonApplicationDataFolder =
Environment.GetFolderPath(Environment.SpecialFolder.CommonApplicationData);
}
catch (Exception ex)
{
throw new MyException("Error reading System Common Application Data Path.", ex);
}
}
return _systemCommonApplicationDataFolder;
}
}
}
Do you test such code, and if so, how?
That PathProvider class seems a lot like a database repository class - it integrates with an external system (e.g., filesystem, database, etc).
These classes live at the boundaries of your system - other classes depend on them, and they depend solely on an external system. They connect your system to the external world.
As such, they're subject to integration testing - not unit testing.
I usually just tag my tests (e.g. with xUnit's Trait attribute) to keep them separate from regular tests, and so I can disable them when running in isolation (e.g. without a database). An alternative would be to separate them into a whole new project in the same solution, but I personally think that's overkill.
[Fact, Trait("Category", "Integration")]
public void ReturnsCorrectAppInstallationFolder()
{
// Arrange
var assemblyFilename = Path.GetFileName(typeof(SomeType).Assembly.Location);
var provider = new PathProvider();
// Act
var location = provider.AppInstallationFolder;
// Assert
Assert.NotEmpty(Directory.GetFiles(location, assemblyFilename));
}
The answer depends on your definition of 'unit'. Where a class is used as the unit, the answer is yes. However, testing individual classes has very limited business value unless you're developing a class library of some kind.
Another alternative is to define a unit as a user-level (or system) requirement. A test for this kind of unit will be along the lines of: given user input X, test for system output Y. Units defined in this way have definite, measurable business value that can be traced to user requirements.
For example, when using user stories you may have a requirements such as:
As an administrator, I want to install the package to a specific folder.
This may have a number of associated scenarios to cover different permissible and prohibited folders. Each of these will have a unit test that provides the user input and verifies the output.
Given: The user has specified C:\Program Files\MyApp as the install folder.
When: The installer has run.
Then: The folder C:\Program Files\MyApp exists on the drive and all of the application files are present in that location.
When unit testing at a class interface level as you appear to be doing, you can easily end up with your software design tightly coupled to your unit test framework and implementation, rather than to your users' requirements.
The two answers have a lot of value, but I'll add one more thing, which I believe to be of big importance.
I have a feeling that people often get tied up in discussions about what should be tested and what should not. Whether something should be treated as a unit or not. Whether it's an integration test or not.
Generally, the goal of unit tests is to make sure our code works as intended. IMO we should test everything (whatever we are able to) that can get broken. Organizing tests into some formal type and naming them is needed, but it is of secondary importance to the test itself. My advice would be that if you have any doubts, always ask yourself the question: "Am I afraid that it can be broken?". So... "where to stop"? Where you are not afraid of the functionality getting broken.
And back to your specific case. As #dcastro noticed, this does look a lot like a case of integration testing rather than classic logic unit testing. Regardless of how we name it, though: can it be broken? Yes it can. Should it be tested?
It MAY be tested, although I'd be far from calling it crucial. There is little logic of its own here and the complexity is low. Would I test it? Yes, if I had the time to do so. How would I do it? #dcastro's example code is the way I'd approach the problem.
I have a method that takes 5 parameters. This method is used to take a bunch of gathered information and send it to my server.
I am writing a unit test for this method, but I am hitting a bit of a snag. Several of the parameters are Lists<> of classes that take some doing to set up correctly. I have methods that set them up correctly in other units (production code units). But if I call those then I am kind of breaking the whole idea of a unit test (to only hit one "unit").
So.... what do I do? Do I duplicate the code that sets up these objects in my Test Project (in a helper method), or do I start calling production code to set up these objects?
Here is hypothetical example to try and make this clearer:
File: UserDemographics.cs
class UserDemographics
{
// A bunch of user demographic here
// and values that get set as a user gets added to a group.
}
File: UserGroups.cs
class UserGroups
{
// A bunch of variables that change based on
// the demographics of the users put into them.
public void AddUserDemographicsToGroup(UserDemographics userDemographics)
{}
}
File: UserSetupEvent.cs
class UserSetupEvent
{
// An event to record the registering of a user
// Is highly dependant on UserDemographics and semi dependant on UserGroups
public void SetupUserEvent(List<UserDemographics> userDemographics,
List<UserGroup> userGroups)
{}
}
File: Communications.cs
class Communications
{
public void SendUserInfoToServer(SendingEvent sendingEvent,
List<UserDemographics> userDemographics,
List<UserGroup> userGroups,
List<UserSetupEvent> userSetupEvents)
{}
}
So the question is: to unit test SendUserInfoToServer, should I duplicate SetupUserEvent and AddUserDemographicsToGroup in my test project, or should I just call them to help me set up some "real" parameters?
You need test doubles.
You're correct that unit tests should not call out to other methods, so you need to "fake" the dependencies. This can be done in one of two ways:
Manually written test doubles
Mocking
Test doubles allow you to isolate your method under test from its dependencies.
I use Moq for mocking. Your unit test should send in "dummy" parameter values, or statically defined values you can use to test control flow:
public class MyTestObject
{
public static IEnumerable<Thingie> GetThingies()
{
yield return new Thingie() {id = 1};
yield return new Thingie() {id = 2};
yield return new Thingie() {id = 3};
}
}
If the method calls out to any other classes/methods, use mocks (aka "fakes"). Mocks are dynamically-generated objects based on virtual methods or interfaces:
Mock<IRepository> repMock = new Mock<IRepository>();
MyPage obj = new MyPage(); // let's pretend this is ASP.NET
obj.IRepository = repMock.Object;
repMock.Setup(r => r.FindById(1)).Returns(MyTestObject.GetThingies().First());
var thingie = obj.GetThingie(1);
The Mock object above uses the Setup method to return the same result for the call defined in the r => r.FindById(1) lambda. This is called an expectation. This allows you to test only the code in your method, without actually calling out to any dependent classes.
Once you've set up your test this way, you can use Moq's features to confirm that everything happened the way it was supposed to:
//did we get the instance we expected?
Assert.AreEqual(thingie.Id, MyTestObject.GetThingies().First().Id);
//was a method called?
repMock.Verify(r => r.FindById(1));
The Verify method allows you to test whether a method was called. Together, these facilities allow you focus your unit tests on a single method at a time.
Sounds like your units are too tightly coupled (at least from a quick view of your problem). What makes me curious, for instance, is the fact that your UserGroups takes a UserDemographics and your UserSetupEvent takes a list of UserGroup plus a list of UserDemographics (again). Shouldn't the List<UserGroup> already include the UserDemographics passed in its constructor, or am I misunderstanding it?
Somehow it seems like a design problem of your class model which in turn makes it difficult to unit test. Difficult setup procedures are a code smell indicating high coupling :)
Bringing in interfaces is what I would prefer. Then you can mock the used classes and you don't have to duplicate code (which violates the Don't Repeat Yourself principle) and you don't have to use the original implementations in the unit tests for the Communications class.
You should use mock objects, basically your unit test should probably just generate some fake data that looks like real data instead of calling into the real code, this way you can isolate the test and have predictable test results.
You can make use of a tool called NBuilder to generate test data. It has a very good fluent interface and is very easy to use. If your tests need to build lists this works even better. You can read more about it here.
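As a rough sketch of what that looks like (the Builder API is from FizzWare.NBuilder; the property names here are invented):
// Build a list of 10 users without hand-writing loops or constructors
var users = Builder<UserDemographics>.CreateListOfSize(10)
    .All()
        .With(u => u.Age = 30)
    .TheFirst(2)
        .With(u => u.Country = "DE")
    .Build();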
In bigger projects my unit tests usually require some "dummy" (sample) data to run with. Some default customers, users, etc. I was wondering how your setup looks like.
How do you organize/maintain this data?
How do you apply it to your unit tests (any automation tool)?
Do you actually require test data or do you think it's useless?
My current solution:
I differentiate between Master data and Sample data where the former will be available when the system goes into production (installed for the first time) and the latter are typical use cases I require for my tests to run (and to play during development).
I store all this in an Excel file (because it's so damn easy to maintain) where each worksheet contains a specific entity (e.g. users, customers, etc.) and is flagged either master or sample.
I have 2 test cases which I (mis)use to import the necessary data:
InitForDevelopment (Create Schema, Import Master data, Import Sample data)
InitForProduction (Create Schema, Import Master data)
I use the repository pattern and have a dummy repository that's instantiated by the unit tests in question; it provides a known set of data that encompasses examples that are both within and out of range for various fields.
This means that I can test my code unchanged by supplying the instantiated repository from the test unit for testing or the production repository at runtime (via a dependency injection (Castle)).
I don't know of a good web reference for this but I learnt much from Steven Sanderson's Professional ASP.NET MVC 1.0 book published by Apress. The MVC approach naturally provides the separation of concern that's necessary to allow your testing to operate with fewer dependencies.
The basic elements are that your repository implements an interface for data access, and that same interface is then implemented by a fake repository that you construct in your test project.
In my current project I have an interface thus:
namespace myProject.Abstract
{
public interface ISeriesRepository
{
IQueryable<Series> Series { get; }
}
}
This is implemented as both my live data repository (using Linq to SQL) and also a fake repository thus:
namespace myProject.Tests.Respository
{
class FakeRepository : ISeriesRepository
{
private static IQueryable<Series> fakeSeries = new List<Series> {
new Series { id = 1, name = "Series1", openingDate = new DateTime(2001,1,1) },
new Series { id = 2, name = "Series2", openingDate = new DateTime(2002,1,30) },
// ... further entries elided ...
new Series { id = 10, name = "Series10", openingDate = new DateTime(2001,5,5) }
}.AsQueryable();
public IQueryable<Series> Series
{
get { return fakeSeries; }
}
}
}
Then the class that's consuming the data is instantiated passing the repository reference to the constructor:
namespace myProject
{
public class SeriesProcessor
{
private ISeriesRepository seriesRepository;
public SeriesProcessor(ISeriesRepository seriesRepository)
{
this.seriesRepository = seriesRepository;
}
public IQueryable<Series> GetCurrentSeries()
{
return from s in seriesRepository.Series
where s.openingDate.Date <= DateTime.Now.Date
select s;
}
}
}
Then in my tests I can approach it thus:
namespace myProject.Tests
{
[TestClass]
public class SeriesTests
{
[TestMethod]
public void Meaningful_Test_Name()
{
// Arrange
SeriesProcessor processor = new SeriesProcessor(new FakeRepository());
// Act
IQueryable<Series> currentSeries = processor.GetCurrentSeries();
// Assert
Assert.AreEqual(10, currentSeries.Count());
}
}
}
Then look at Castle Windsor for the inversion of control approach for your live project, to allow your production code to automatically instantiate your live repository through dependency injection. That should get you closer to where you need to be.
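For completeness, the Windsor registration for the live pieces could be as small as this (a sketch; SqlSeriesRepository stands in for whatever your Linq to SQL implementation is called):
var container = new WindsorContainer();
container.Register(
    Component.For<ISeriesRepository>().ImplementedBy<SqlSeriesRepository>(),
    Component.For<SeriesProcessor>());

// Windsor injects the live repository into the constructor shown above
var processor = container.Resolve<SeriesProcessor>();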
In our company we have been discussing exactly this problem a lot over the past weeks and months.
To follow the guidelines of unit testing:
Each test must be atomic and must not relate to any other test (no data sharing); that means each test must have its own data at the beginning and clear that data at the end.
Our product is so complex (5 years of development, over 100 tables in the database) that it is nearly impossible to maintain this in an acceptable way.
We tried database scripts that create and delete the data before/after each test (there are automated methods which call them).
I would say you are on the right track with Excel files.
A couple of ideas to improve it a little:
If you have a database behind your software, google for "NDbUnit". It's a framework for inserting and deleting data in databases for unit tests.
If you have no database, XML may be a little more flexible than a system like Excel.
Not directly answering the question, but one way to limit the number of tests that need to use dummy data is to use a mocking framework to create mocked objects that you can use to fake the behavior of any dependencies you have in a class.
I find that by using mocked objects rather than a specific concrete implementation you can drastically reduce the amount of real data you need, as mocks don't process the data you pass into them. They just perform exactly as you want them to.
I'm still sure you probably need dummy data in a lot of instances so apologies if you're already using or are aware of mocking frameworks.
Just to be clear, you need to differentiate between UNIT testing (testing a module with no implied dependencies on other modules) and app testing (testing parts of the application).
For the former, you need a mocking framework (I'm only familiar with Perl ones, but I'm sure they exist in Java/C#). A sign of a good framework would be the ability to take a running app, RECORD all the method calls/returns, and then mock the selected methods (e.g. the ones you are not testing in this specific unit test) using the recorded data.
For good unit tests you MUST mock every external dependency - e.g., no calls to filesystem, no calls to DB or other data access layers unless that is what you are testing, etc...
For the latter, the same mocking framework is useful, plus ability to create test data sets (that can be reset for each test). The data to be loaded for the tests can reside in any offline storage that you can load from - BCP files for Sybase DB data, XML, whatever tickles your fancy. We use both BCP and XML.
Please note that this sort of "load test data into DB" testing is SIGNIFICANTLY easier if your overall company framework allows - or rather enforces - a "What is the real DB table name for this table alias" API. That way, you can cause your application to look at cloned "test" DB tables instead of real ones during testing - on top of such table aliasing API's main purpose of enabling one to move DB tables from one database to another.
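A minimal sketch of such a seam (the name is invented) need not be more than this; the production implementation returns the real table name, while the test implementation points at the cloned test tables:
public interface ITableNameResolver
{
    // e.g. returns "Customers" in production and "test_Customers" under test
    string ResolveTableName(string alias);
}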