I am rewriting a C# .NET project and currently planning how I am going to do the testing.
After everything I have read, I will install the xUnit framework (for the first time -- I am more experienced with MSTest). Now I am wondering whether I should combine it with FluentAssertions (which I have also never used before) or rather write pure xUnit tests.
At first glance, FluentAssertions sounds nerdy and stylish, but I'm not sure whether it will really lead me to write the most readable code, or how well it will scale to complex tests.
Hence I am searching for your experience and arguments: when do (or would) you use FluentAssertions? I'm curious.
Fluent is mostly about readability and convenience.
If you are going to write more than a handful of unit tests, I'd suggest using it.
I recently had the case where I was mapping objects 'a' and 'b' onto object 'c' and I wanted to verify the mapper with a unit test.
So, I created an 'expectedObject' which contained all properties that object 'c' should contain once it was mapped.
As I had not written a comparer, nor did I have the need for one, it would have been very cumbersome to compare object 'c' with 'expectedObject' to assert they contain the same data. The object in question contained many properties which in turn had many properties.
But with Fluent I could simply write
c.Should().BeEquivalentTo(expectedObject);
This is much easier to read than a litany of Assert.AreEqual() and in this case, more importantly, much faster to write as well.
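For illustration, a complete test along those lines might look roughly like this; the types and the Map method are minimal stand-ins invented for the example, not the poster's actual code:
using FluentAssertions;
using Xunit;

public class MappingTests
{
    // Hypothetical stand-ins for the source objects and the mapped object 'c'.
    private class Source { public string Name { get; set; } public decimal Balance { get; set; } }
    private class Target { public string Name { get; set; } public decimal Balance { get; set; } }

    private static Target Map(Source source)
    {
        return new Target { Name = source.Name, Balance = source.Balance };
    }

    [Fact]
    public void Map_ProducesTheExpectedTarget()
    {
        var source = new Source { Name = "Alice", Balance = 42m };
        var expectedObject = new Target { Name = "Alice", Balance = 42m };

        var c = Map(source);

        // Compares the whole object graph property by property,
        // so no hand-written comparer is needed.
        c.Should().BeEquivalentTo(expectedObject);
    }
}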
Fluent Assertions is a NuGet package I've been using consistently on my projects for about six years. It's extremely simple to pick up and start using. Most people can get to grips with it within 5-10 minutes, and it will make reading your unit tests a little bit easier. Fluent Assertions is free, so there really isn't a party foul for trying it out. I think I've introduced Fluent Assertions to over 10 teams now, and so far no one's complained. The biggest reason most teams don't use it is simply lack of exposure to it. Using a standard approach, a unit test may look similar to this:
[TestMethod]
public void Example_test()
{
    var actual = PerformLogic();
    var expected = true;
    Assert.AreEqual(expected, actual);
}
There's nothing wrong with this test, but you need to spend a second or two to understand what's going on. Instead, using Fluent Assertions you can write the same test like this:
[TestMethod]
public void Example_test()
{
    var result = PerformLogic();
    result.Should().BeTrue();
}
Hopefully, you can see that the second example takes a lot less time to read, as it reads like a sentence rather than an Assert statement. Fundamentally, this is all Fluent Assertions is: a set of extension methods that make your unit tests easier to read than plain Assert statements. I'm hoping you can see why it's so easy to pick up. All you need to do is get the outcome of your test into a result variable, call the Should() extension method and then use Fluent Assertions' other extensions to test for your use case. Simple!
http://www.jondjones.com/c-sharp-bootcamp/tdd/fluent-assertions/what-is-fluent-assertions-and-should-i-be-using-it
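A few more assertions in the same style give a feel for the library; the values below are illustrative, and in older FluentAssertions versions the exception syntax is ShouldThrow<T>() rather than Should().Throw<T>():
using System;
using FluentAssertions;

public class MoreFluentAssertionExamples
{
    public void Examples()
    {
        // Strings: chain several checks with And.
        "hello world".Should().StartWith("hello").And.Contain("world");

        // Collections: count and membership in one readable line.
        new[] { 1, 2, 3 }.Should().HaveCount(3).And.Contain(2);

        // Exceptions: wrap the call in an Action and assert on it.
        Action act = () => throw new InvalidOperationException("boom");
        act.Should().Throw<InvalidOperationException>().WithMessage("boom");
    }
}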
I want to keep this as short as possible:
First: I read related posts, but they didn't help a lot.
See: What is a quality real world example of TDD in action?
Or: How do you do TDD in a non-trivial application?
Or: TDD in ASP.NET MVC: where to start?
Background:
I'm not a total TDD beginner, I know the principles
I read Rob C Martin and MC Feathers and the like
TDD works fine for me in Bowling and TicTacToe Games
But I'm kind of lost when I want to do TDD in my workplace. It's not about mocking, I kinda do know how to mock the dependencies.
It's more:
WHEN do I code WHAT?
WHERE do I begin?
And: WHEN and HOW do I implement the "database" or "file system" code? It's cool to mock it, but at the integration test stage I need it as real code.
Imagine this (example):
Write a program which reads a list of all customers from a database.
Related to the customer IDs it has to search data from a csv/Excel file.
Then the business logic does magic to it.
At the end the results are written to the database (different table).
I never found a TDD example for an application like that.
EDIT:
How would you, as a programmer, implement this example in TDD style?
PS: I'm not talking about db-unit testing or gui unit testing.
You could start without a database entirely. Just write an interface with the most basic method to retrieve the customers
public interface ICustomerHandler
{
    List<Customer> GetCustomers(int customerId);
}
Then, using your mocking framework, mock that interface while writing a test for a method that will use and refer to an implementation of the interface. Create new classes along the way as needed (Customer, for instance); this makes you think about which properties are required.
[TestMethod()]
public void CreateCustomerRelationsTest()
{
    var manager = new CustomerManager(MockRepository.GenerateMock<ICustomerHandler>());
    var result = manager.CreateCustomerRelations();

    Assert.AreEqual(1, result.HappyCustomers);
    Assert.AreEqual(0, result.UnhappyCustomers);
}
Writing this bogus test tells you what classes are needed, like a CustomerManager class which has a method CreateCustomerRelations and two properties. The method should refer to the GetCustomers method on the interface, using the mock instance that is injected through the class constructor.
Do just enough to make the project build and let you run the test for the first time, which will fail as there's no logic yet in the method being tested. However, you are off to a great start by letting the test dictate which input your method should take and what output it should produce and assert. Defining the test conditions first helps you create a good design. Soon you will have enough code written to ensure the test confirms your method is well designed and behaves the way you want it to.
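A minimal skeleton that lets the test above compile might look roughly like this; the result type and its properties are guesses derived purely from what the test asserts:
using System;

public class CustomerRelationResult
{
    public int HappyCustomers { get; set; }
    public int UnhappyCustomers { get; set; }
}

public class CustomerManager
{
    private readonly ICustomerHandler _customerHandler;

    public CustomerManager(ICustomerHandler customerHandler)
    {
        _customerHandler = customerHandler;
    }

    public CustomerRelationResult CreateCustomerRelations()
    {
        // No real logic yet: just enough for the project to build,
        // so the first test run fails for the right reason.
        throw new NotImplementedException();
    }
}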
Think about what behaviour you are testing, and use this to drive a single higher level test. Then as you implement this functionality use TDD to drive out the behaviour you want in the classes you need to implement this functionality.
In your example I'd start with a simple no-op situation. (I'll write it in BDD language, but you could similarly implement this in code.)
Given there are no customers in the database
When I read customers and process the related data from the csv file
Then no data should be written to the database
This sort of test will allow you to get some of the basic functionality and interfaces in place without having to implement anything in your mocks (apart from maybe checking that you are not calling the code to do the final write)
Then I'd move on to a slightly wider example
Given there are some customers in the database
But none of these customers are in the CSV file
When I read customers and process the related data from the csv file
Then no data should be written to the database
And I would keep adding incrementally and adjusting the classes needed to do this, initially probably using mocks but eventually working up to using the real database interactions.
I'd be wary of writing a test for every class, though. This can make your tests brittle, and they can need changing every time you make a small refactoring-like change. Focus on the behaviour, not the implementation.
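As a rough illustration only, the first scenario could be expressed as a unit test with mocked dependencies; every interface and class name below is invented for this sketch, and Moq is assumed as the mocking framework:
using System.Collections.Generic;
using Moq;
using NUnit.Framework;

// Hypothetical abstractions for the database read and write sides.
public interface ICustomerRepository { List<Customer> GetAllCustomers(); }
public interface IResultWriter { void Write(object result); }
public class Customer { public int Id { get; set; } }

public class CustomerDataProcessor
{
    private readonly ICustomerRepository _repository;
    private readonly IResultWriter _writer;

    public CustomerDataProcessor(ICustomerRepository repository, IResultWriter writer)
    {
        _repository = repository;
        _writer = writer;
    }

    public void Process()
    {
        var customers = _repository.GetAllCustomers();
        if (customers.Count == 0)
            return; // no customers: nothing to look up, nothing to write

        // ... CSV lookup and business logic would be driven out by later tests ...
    }
}

[TestFixture]
public class CustomerDataProcessorTests
{
    [Test]
    public void WhenThereAreNoCustomers_NothingIsWrittenToTheDatabase()
    {
        var repository = new Mock<ICustomerRepository>();
        var writer = new Mock<IResultWriter>();
        repository.Setup(r => r.GetAllCustomers()).Returns(new List<Customer>());

        new CustomerDataProcessor(repository.Object, writer.Object).Process();

        // "Then no data should be written to the database"
        writer.Verify(w => w.Write(It.IsAny<object>()), Times.Never);
    }
}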
I need help with unit testing. I have never written a code unit test through Visual Studio and can't seem to find any videos or reading material on how to create one using just Visual Studio 2010 Professional. Everything I find refers in one way or another to a testing center, which I don't have, nor can I get a full version of it (broke student).
When I try to do a unit test I see all this extra stuff generated and I don't understand any of it. I can tell that some of it has to be updated after it is generated, but I don't know to what for the given situation. Is there any free resource I can use that will tell me how to make a unit test? I would even be happy learning how to write them from scratch.
The first thing to understand is that a unit test in C# is created and run with a reference to a library which supports unit testing. Microsoft provides MSTest, and there are alternatives. A very popular one is NUnit.
Most test frameworks work with attributes. An attribute in this context is the bit which decorates the code using square brackets:
[ThisIsAnAttribute]
public void ThisMethodIsDecoratedWithAnAttribute(){}
Attributes provide additional information which can be retrieved at runtime using reflection. Reflection allows you to inspect the structure of the code and types in that code.
Most testing frameworks use two main concepts. 1) A "Fixture" which is a snazzy way of saying "A set of tests" and 2) a "Test". A fixture is represented by a class with a fixture attribute, tests are represented by methods with a test attribute.
The tests are obviously testing something, so you need two things here as well. 1) The thing you are testing and 2) A way to "Assert" that the test has passed or failed.
So the thing you are testing is just code. Generally this should be a small bit of code which is independent and isolated. Tests should never really be longer than 20 lines max (although remember to never say never).
To Assert that a test has passed or failed you use a class from the testing framework which feeds back test success. This is usually called Assert and is static.
So to assert that a value at the end of a test is a certain value, you might say:
Assert.AreEqual(5, myResultVariable);
The Assert class has a lot of methods. These will test for various conditions. Is it null, is it equal to, is it not equal to, that sort of thing.
To run the test, you use the framework's runner. This will take the code and report back the results.
So here's a simple MSTest unit test.
[TestClass]
public class MathTestsForSimpleOperators
{
    [TestMethod]
    public void TestThatAdding3To8Equals11()
    {
        Assert.AreEqual(11, 3 + 8);
    }
}
You can see the fixture is called MathTestsForSimpleOperators (Be descriptive) and it has one test called TestThatAdding3To8Equals11. It's not a useful test, but it has all the parts you need.
Here is a link to NUnit's getting-started page; it will take you step by step through installing, writing and running a test. MSTest is just as simple to use.
NUnit getting started page
http://code.tutsplus.com/articles/the-beginners-guide-to-unit-testing-what-is-unit-testing--wp-25728
Maybe?
Another suggestion: try doing things yourself, and when you are stuck don't hesitate to ask this community. But don't just ask for complete solutions to your projects.
Poom
I was looking over a fairly modern project created with a big emphasis on unit testing. In accordance with the old adage "every problem in object oriented programming can be solved by introducing a new layer of indirection", this project was sporting multiple layers of indirection. The side effect was that a fair amount of code looked like the following:
public bool IsOverdraft()
{
    return balanceProvider.IsOverdraft();
}
Now, because of the emphasis on unit testing and maintaining high code coverage, every piece of code had unit tests written against it. Therefore this little method would have three unit tests present. Those would check:
If balanceProvider.IsOverdraft() returns true then IsOverdraft should return true
If balanceProvider.IsOverdraft() returns false then IsOverdraft should return false
If balanceProvider throws an exception then IsOverdraft should rethrow the same exception
To make things worse, the mocking framework used (NMock2) accepted method names as string literals, as follows:
NMock2.Expect.Once.On(mockBalanceProvider)
    .Method("IsOverdraft")
    .Will(NMock2.Return.Value(false));
That obviously turned the "red, green, refactor" cycle into "red, green, refactor, rename in test, rename in test, rename in test". Using a different mocking framework like Moq would help with refactoring, but it would require a sweep through all existing unit tests.
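For comparison, roughly the same expectation in Moq refers to the method through a lambda, so a rename refactoring updates the tests automatically (IBalanceProvider is assumed here as the mocked interface):
var mockBalanceProvider = new Mock<IBalanceProvider>();
mockBalanceProvider.Setup(p => p.IsOverdraft())
                   .Returns(false);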
What is the ideal way to handle this situation?
A) Keep fewer layers of indirection, so that these forwarding calls no longer happen.
B) Do not test these forwarding methods, as they do not contain business logic. For coverage purposes, mark them all with the ExcludeFromCodeCoverage attribute.
C) Test only if proper method is invoked, without checking return values, exceptions, etc.
D) Suck it up, and keep writing those tests ;)
Either B or C. That's the problem with such general requirements ("every method must have a unit test", "every line of code needs to be covered"): sometimes the benefit they provide is not worth the cost. If it's something you came up with, I suggest rethinking the approach. "We must have 95% code coverage" might be appealing on paper, but in practice it quickly spawns problems like the one you have.
Also, the code you're testing is something I'd call trivial code. Having three tests for it is most likely overkill. For that single line of code, you'll have to maintain something like 40 more. Unless your software is mission critical (which might explain the high-coverage requirement), I'd skip those tests.
One of the (IMHO) most pragmatic pieces of advice on this topic was provided by Kent Beck some time ago on this very site, and I expanded a bit on those thoughts in my blog post - What should you test?
Honestly, I think we should write tests only to document our code in a helpful manner. We should not write tests just for the sake of code coverage. (Code coverage is just a great tool to figure out what is NOT covered, so that we can find out whether we forgot important unit test cases or actually have some dead code somewhere.)
If I write a test but the test ends up just being a "duplication" of the implementation, or worse, if it's harder to understand the test than the actual implementation, then such a test really should not exist. Nobody is interested in reading such tests. Tests should not contain implementation details. Tests are about "what" should happen, not "how" it will be done. Since you've tagged your question with "TDD", I would add that TDD is a design practice. So if I already know 100% in advance what the design of the thing I'm going to implement will be, then there is no point in using TDD and writing unit tests (but I will always, in all cases, have a high-level acceptance test that covers that code). That will often happen when the thing to design is really simple, like in your example. TDD is not about testing and code coverage, but really about helping us design and document our code. There is no point in using a design tool or a documentation tool for designing/documenting simple/obvious things.
In your example, it's far easier to understand what's going on by reading the implementation directly than by reading the test. The test doesn't add any value in terms of documentation, so I'd happily erase it.
On top of that, such tests are horribly brittle, because they are tightly coupled to the implementation. That's a nightmare in the long term, since any time you want to change the implementation they will break.
What I'd suggest is to not write such tests, but instead have higher-level component tests or fast integration/acceptance tests that exercise these layers without knowing anything at all about the inner workings.
I think one of the most important things to keep in mind with unit tests is that it doesn't necessarily matter how the code is implemented today, but rather what happens when the tested code, direct or indirect, is modified in the future.
If you ignore those methods today and they are critical to your application's operation, then when someone decides to implement a new balanceProvider at some point down the road, or decides that the redirection no longer makes sense, you will most likely have an untested failure point.
So, if this were my application, I would first look to reduce the forward-only calls to a bare minimum (reducing the code complexity), then introduce a mocking framework that does not rely on string values for method names.
A couple of things to add to the discussion here.
Switch to a better mocking framework immediately and incrementally. We switched from RhinoMock to Moq about three years ago. All new tests used Moq, and often when we change a test class we switch it over. But areas of the code that haven't changed much, or that have huge test classes, are still using RhinoMock, and that is OK. The code we work with from day to day is much better as a result of making the switch. All test changes can happen in this incremental way.
You are writing too many tests. An important thing to keep in mind in TDD is that you should only write code to satisfy a red test, and you should only write a test to specify some unwritten code. So in your example, three tests is overkill, because at most two are needed to force you to write all of that production code. The exception test does not make you write any new code, so there is no need to write it. I would probably only write this test:
[Test]
public void IsOverdraftDelegatesToBalanceProvider()
{
    var result = RandomBool();
    providerMock.Setup(p => p.IsOverdraft()).Returns(result);
    Assert.That(myObject.IsOverdraft(), Is.EqualTo(result));
}
Don't create useless layers of indirection. Mostly, unit tests will tell you if you need indirection. Most indirection needs can be solved by the dependency inversion principle, or "couple to abstractions, not concretions". Some layers are needed for other reasons (I make WCF ServiceContract implementations a thin pass-through layer; I also don't test that pass-through). If you see a useless layer of indirection, 1) make sure it really is useless, then 2) delete it. Code clutter has a huge cost over time. ReSharper makes this ridiculously easy and safe.
Also, for meaningful delegation or delegation scenarios you can't get rid of but need to test, something like this makes it a lot easier.
I'd say D) Suck it up, and keep writing those tests ;) and try to see if you can replace NMock with Moq.
It might not seem necessary, and even though it's just delegation now, the tests verify that the right method is called with the right parameters and that the method itself does nothing funky before returning values. So it's a good idea to cover them in tests. But to make that easier, use Moq or a similar framework that makes refactoring much less painful.
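For instance, with Moq the delegation can be pinned down without any string literals; the Account class and IBalanceProvider interface below are illustrative stand-ins, not the poster's actual types:
var provider = new Mock<IBalanceProvider>();
provider.Setup(p => p.IsOverdraft()).Returns(true);

var account = new Account(provider.Object);

Assert.IsTrue(account.IsOverdraft());
provider.Verify(p => p.IsOverdraft(), Times.Once);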
I'm writing a series of automated tests in C# using NUnit and Selenium.
Edit: I am testing an entire website. To begin, I wrote three classes for the three types of members that use the website; these classes contain methods which use Selenium to perform various actions by these members. These classes are then created and their methods called by my test classes with the appropriate inputs.
My question is:
Does it matter how large my test class becomes? (i.e. thousands of tests?)
When is it time to refactor my functionality classes? (25 or 50 methods, 1000 lines of code, etc)
I've been trying to read all I can about test design so if you have any good resources I would appreciate links.
Does it matter how large my test class becomes? (i.e. thousands of tests?)
Yes it does. Tests need to be maintained in the long term, and a huge test class is difficult to understand and maintain.
When is it time to refactor my functionality classes? (25 or 50 methods, 1000 lines of code, etc)
When you start to feel it is awkward to find a specific test case, or to browse through the tests related to a specific scenario. I don't think there is a hard limit here, just as there is no hard limit for the size of production classes or the number of methods. I personally put the limits higher for test code than for production code, because test code tends to be simpler, so the threshold where it starts to become difficult to understand is higher. But in general, a 1000 line test class with 50 test methods starts to feel too big for me.
I just recently had to work with such a test class, and I ended up partitioning it, so now I have several test classes, each testing one particular method / use case of a specific class*. Some of the old tests I managed to convert into parameterized tests, and all new tests are written as parameterized tests. I found that parameterized tests make it much easier to see the big picture and keep all test cases in mind at once. I did this using JUnit on a Java project, but I see NUnit 2.5 now offers parameterized tests too - you should check it out.
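For example, in NUnit a handful of similar cases can be collapsed into one parameterized test with [TestCase]; the Calculator class here is just a placeholder for whatever is under test:
[TestCase(3, 8, 11)]
[TestCase(0, 0, 0)]
[TestCase(-2, 5, 3)]
public void Add_ReturnsTheSum(int a, int b, int expected)
{
    Assert.AreEqual(expected, Calculator.Add(a, b));
}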
*You may rightly ask whether the class under test shouldn't be refactored if we need so many test cases to cover it - yes, it should, eventually. It is the largest class in our legacy app, with way too much stuff in it. But first we need to have the test cases in place :-) By the way, this may apply to your class too - if you need so many test cases to cover it, it might be that the class under test is simply trying to do too much, and you would be better off extracting some of its functionality into a separate class with its own unit tests.
We have recently adopted the specification pattern for validating domain objects and now want to introduce unit testing of our domain objects to improve code quality.
One problem I have found is how best to unit test the Validate() functionality shown in the example below. The specification hits the database, so I want to be able to mock it, but as it is instantiated in-line I can't do this. I could work off interfaces, but this increases the complexity of the code, and as we may have a lot of specifications we would ultimately end up with a lot of interfaces (remember, we are introducing unit testing and don't want to give anyone an excuse to shoot it down).
Given this scenario how would we best solve the problem of unit testing the specification pattern in our domain objects?
...
public void Validate()
{
    if (DuplicateUsername())
    {
        throw new ValidationException();
    }
}

public bool DuplicateUsername()
{
    var spec = new DuplicateUsernameSpecification();
    return spec.IsSatisfiedBy(this);
}
A more gentle introduction of Seams into the application could be achieved by making core methods virtual. This means that you would be able to use the Extract and Override technique for unit testing.
In greenfield development I find this technique suboptimal because there are better alternatives available, but it's a good way to retrofit testability to already existing code.
As an example, you write that your Specification hits the database. Within that implementation, you could extract the part that hits the database into a Factory Method that you can then override in your unit tests.
In general, the book Working Effectively with Legacy Code provides much valuable guidance on how to make code testable.
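A rough sketch of what Extract and Override could look like for the example; the entity class name, the virtual factory method and the ISpecification<T> abstraction are all assumptions made for illustration, not the poster's actual design:
public class User
{
    public void Validate()
    {
        if (DuplicateUsername())
        {
            throw new ValidationException();
        }
    }

    public bool DuplicateUsername()
    {
        var spec = CreateDuplicateUsernameSpecification();
        return spec.IsSatisfiedBy(this);
    }

    // Extracted Factory Method: the only place that knows about the
    // database-backed specification, and the seam a test can override.
    protected virtual ISpecification<User> CreateDuplicateUsernameSpecification()
    {
        return new DuplicateUsernameSpecification();
    }
}

// In the test project: override the seam to return a stub or mock.
public class TestableUser : User
{
    public ISpecification<User> SpecificationToUse { get; set; }

    protected override ISpecification<User> CreateDuplicateUsernameSpecification()
    {
        return SpecificationToUse;
    }
}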
If you don't want to do constructor injection of a factory and make the specs mockable, have you considered TypeMock? It is very powerful for dealing with this sort of thing. You can tell it to mock the next object of type X that is created, and it can mock anything; no virtuals etc. required.
You could extract getDuplicateUsernameSpecification() into a public method of its own, then subclass and override that for your tests.
If you use IoC, you can resolve the DuplicateUsernameSpecification from the container and register a mock in its place in your tests.
Edit: The idea is to replace the direct constructor call with a factory method. Something like this:
public bool DuplicateUsername()
{
    var spec = MyIocContainer.Resolve<DuplicateUsernameSpecification>();
    return spec.IsSatisfiedBy(this);
}