Closed. This question is opinion-based. It is not currently accepting answers. Closed 5 years ago.
We have recently adopted the specification pattern for validating domain objects and now want to introduce unit testing of our domain objects to improve code quality.
One problem I have found is how best to unit test the validation functionality shown in the example below. The specification hits the database, so I want to be able to mock it, but because it is instantiated in-line I can't. I could work off interfaces, but this increases the complexity of the code, and since we may have a lot of specifications we would ultimately have a lot of interfaces (remember we are introducing unit testing and don't want to give anyone an excuse to shoot it down).
Given this scenario how would we best solve the problem of unit testing the specification pattern in our domain objects?
...

public void Validate()
{
    if (DuplicateUsername())
    {
        throw new ValidationException();
    }
}

public bool DuplicateUsername()
{
    var spec = new DuplicateUsernameSpecification();
    return spec.IsSatisfiedBy(this);
}
A more gentle introduction of Seams into the application could be achieved by making core methods virtual. This means that you would be able to use the Extract and Override technique for unit testing.
In greenfield development I find this technique suboptimal because there are better alternatives available, but it's a good way to retrofit testability to already existing code.
As an example, you write that your specification hits the database. Within that implementation, you could extract the database access into a Factory Method that you can then override in your unit tests.
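As a rough sketch of Extract and Override (assuming the domain class is called User and that IsSatisfiedBy is virtual on the specification so a stub can stand in for it; both are assumptions, not your actual code):

public class User
{
    // ...

    public bool DuplicateUsername()
    {
        var spec = CreateDuplicateUsernameSpecification();
        return spec.IsSatisfiedBy(this);
    }

    // Factory Method: virtual so a test subclass can substitute the specification
    protected virtual DuplicateUsernameSpecification CreateDuplicateUsernameSpecification()
    {
        return new DuplicateUsernameSpecification();
    }
}

// In the test project (Extract and Override)
public class TestableUser : User
{
    private readonly DuplicateUsernameSpecification _spec;

    public TestableUser(DuplicateUsernameSpecification spec)
    {
        _spec = spec;
    }

    protected override DuplicateUsernameSpecification CreateDuplicateUsernameSpecification()
    {
        return _spec;   // a stub that never touches the database
    }
}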
In general, the book Working Effectively with Legacy Code provides much valuable guidance on how to make code testable.
If you don't want to do constructor injection of a factory and make the specs mockable, have you considered TypeMock? It is very powerful for dealing with this sort of thing. You can tell it to mock the next object of type X that is created, and it can mock anything; no virtual members or interfaces required.
You could extract getDuplicateUsernameSpecification() into a public method of its own, then subclass and override that for your tests.
If you use an IoC container, you can resolve the DuplicateUsernameSpecification through the container and register a mock for it in your tests.
Edit: The idea is to replace the direct constructor call with a factory/container lookup. Something like this:
public bool DuplicateUsername()
{
    var spec = MyIocContainer.Resolve<DuplicateUsernameSpecification>();
    return spec.IsSatisfiedBy(this);
}
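In the test you would then swap the registration for a stub before exercising the domain object. A minimal sketch, assuming the hypothetical MyIocContainer exposes a Register method and that IsSatisfiedBy is virtual (both assumptions):

// Stub specification that never hits the database
public class AlwaysDuplicateSpecification : DuplicateUsernameSpecification
{
    public override bool IsSatisfiedBy(User candidate)
    {
        return true;
    }
}

[TestMethod]
[ExpectedException(typeof(ValidationException))]
public void Validate_throws_when_username_is_a_duplicate()
{
    // Register<T> is an assumed method on the same hypothetical MyIocContainer used above
    MyIocContainer.Register<DuplicateUsernameSpecification>(new AlwaysDuplicateSpecification());

    new User { Username = "taken" }.Validate();
}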
Closed. This question is opinion-based. It is not currently accepting answers. Closed 3 years ago.
I am rewriting a C# .NET project and currently planning how I am going to do the testing.
After everything I have read I will install the XUnit framework (for the first time -- I am more experienced with MSTest). Now I am wondering whether I should combine it with FluentAssertions (which I also never used before) or rather write pure XUnit tests.
At first glance, FluentAssertions sounds nerdy and stylish, but I'm not sure whether it will really lead me to write the most readable code, or how well it will scale to complex tests.
Hence I am searching for your experience and arguments. When do (or would) you use FluentAssertions? I'm curious.
Fluent is mostly about readability and convenience.
If you are going to write more than a handful of unit tests, I'd suggest using it.
I recently had a case where I was mapping objects 'a' and 'b' onto object 'c' and I wanted to verify the mapper with a unit test.
So, I created an 'expectedObject' which contained all properties that object 'c' should contain once it was mapped.
As I had not written a comparer, nor did I have the need for one, it would have been very cumbersome to compare object 'c' with 'expectedObject' to assert they contain the same data. The object in question contained many properties which in turn had many properties.
But with Fluent I could simply write
c.Should().BeEquivalentTo(expectedObject);
This is much easier to read than a litany of Assert.AreEqual() and in this case, more importantly, much faster to write as well.
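To make that concrete, here is a rough sketch of such a mapper test written xUnit-style (the Mapper, SourceA, SourceB and TargetC names are made up for illustration):

using FluentAssertions;
using Xunit;

public class MapperTests
{
    [Fact]
    public void Map_combines_a_and_b_into_c()
    {
        var a = new SourceA { FirstName = "Ada", LastName = "Lovelace" };
        var b = new SourceB { City = "London" };

        TargetC c = Mapper.Map(a, b);

        var expectedObject = new TargetC
        {
            FirstName = "Ada",
            LastName = "Lovelace",
            City = "London"
        };

        // deep, property-by-property comparison, no custom comparer needed
        c.Should().BeEquivalentTo(expectedObject);
    }
}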
Fluent Assertions is a NuGet package I've been using consistently on my projects for about 6 years. It's extremely simple to pick up and start using. Most people can get to grips with it within 5-10 minutes and it will make reading your unit tests a little bit easier. Fluent Assertions is free, so there really isn't a party foul for trying it out. I think I've introduced Fluent Assertions to over 10 teams now and so far no one's complained. The biggest reason why most teams don't use it is just lack of exposure to it. Using a standard approach, a unit test may look similar to this:
[TestMethod]
public void Example_test()
{
    var actual = PerformLogic();
    var expected = true;

    Assert.AreEqual(expected, actual);
}
There's nothing wrong with this test, but you need to spend a second or two to understand what's going on. Instead, using Fluent Assertions you can write the same test like this:
[TestMethod]
public void Example_test()
{
    var result = PerformLogic();

    result.Should().BeTrue();
}
Hopefully, you can see that the second example takes a lot less time to read, as it reads like a sentence rather than an Assert statement. Fundamentally, this is all Fluent Assertions is: a number of extension methods that make it easier to read your unit tests compared to Assert statements. I'm hoping you can understand why it's so easy to pick up. All you need to do is get the outcome of your test in a result variable, use the Should() extension and then use Fluent Assertions' other extensions to test for your use case. Simple!
http://www.jondjones.com/c-sharp-bootcamp/tdd/fluent-assertions/what-is-fluent-assertions-and-should-i-be-using-it
Closed. This question is opinion-based. It is not currently accepting answers. Closed 5 years ago.
I've seen a lot of different coding patterns over the last several years, and I was struck by vast differences between different shops and programmers. At my previous employer, nearly every single class had a defined interface, even if only a single class implemented that interface, and the interfaces were used as parameters instead of the classes themselves.
At my current employer, interfaces are practically unheard of, and I don't think I've ever seen a custom interface ever defined. As such, classes are pretty much exclusively passed around.
I understand that interfaces are a contract that defines what members and functions a class will implement, but are there any real reasons to define interfaces for some/most classes that will never share similarities with other classes?
For example, most of our operations are simple CRUD actions. While we handle reporting and other tasks, nearly every operation is either some sort of insert, update, delete, or select. Our data models tend to be pretty similar to our database structure at their base level. As we move higher through the application layers, we may combine or alter certain objects to contain related properties, but everything is pretty linear.
I'm just having a hard time seeing why interfaces would be such a good thing to implement in our situation, whereas my last company heavily relied upon them.
The primary benefit to all classes implementing an interface and then passing them around is that it greatly increases the ease of mocking them for unit tests.
If you always pass concrete classes around, the mocks have to derive from them. If they don't have virtual members, the mocks cannot override any behavior, and even if there are virtual members you may get side-effect code from the base class that you don't want in that environment.
None of these problems exist with interfaces: clean mocks are very easy (especially with a framework like NSubstitute). Interfaces also allow for implementing various patterns like Strategy, and help support the Open-Closed Principle (among others).
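As a quick illustration of how clean an interface-based mock is with NSubstitute (the ICustomerRepository and CustomerService names are invented for the example):

using NSubstitute;

public interface ICustomerRepository
{
    Customer GetById(int id);
}

[TestMethod]
public void Returns_the_customer_name()
{
    // no base-class behaviour, no virtual members needed
    var repository = Substitute.For<ICustomerRepository>();
    repository.GetById(42).Returns(new Customer { Name = "Ada" });

    var service = new CustomerService(repository);

    Assert.AreEqual("Ada", service.GetNameFor(42));
}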
Granted, an interface for every class can seem to be a bit overkill, but at least interfaces around every process-external facing class is an excellent practice.
Closed. This question needs to be more focused. It is not currently accepting answers. Closed 6 years ago.
Hi I'd like to put security attributes on regular instance methods. Here is an example:
[EnsureUserIsAdmin] // I want something like this
public IEnumerable<NameItem> GetNameItems(int Id)
{
    return _nameDataController.GetNameItems(Id);
}
This method is in my business logic. Any other code that uses this method will have to go through a check to see if the user is an admin. If it is possible to do this, how would I unit test it?
Assuming that what you are asking is whether you can arbitrarily restrict access to methods in an automated fashion using an attribute: if your application's security principal is a WindowsPrincipal (e.g. you are using Active Directory or Windows Authentication), yes you can, using the PrincipalPermission attribute.
[PrincipalPermission(SecurityAction.Demand, Role = "MyAdminRole")]
public void TestMethod()
{
}
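To answer the unit testing part: a PrincipalPermission demand checks Thread.CurrentPrincipal, so a test can swap in a GenericPrincipal with or without the role and assert that a SecurityException is thrown. A rough sketch (MyService is a placeholder for the class carrying the attribute):

using System.Security;
using System.Security.Principal;
using System.Threading;

[TestMethod]
[ExpectedException(typeof(SecurityException))]
public void TestMethod_is_refused_for_non_admin_users()
{
    var original = Thread.CurrentPrincipal;
    try
    {
        // a principal that is not in the MyAdminRole role
        Thread.CurrentPrincipal = new GenericPrincipal(new GenericIdentity("bob"), new string[0]);

        new MyService().TestMethod();   // the demand should throw
    }
    finally
    {
        Thread.CurrentPrincipal = original;   // don't leak the fake principal into other tests
    }
}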
Filter attributes, unlike normal attributes, reside on controller actions; they are part of the ASP.NET controller execution process and are gathered and executed by the routing engine.
A solution for plain methods is not available out of the box and will require a fair amount of non-trivial work.
For a non-action method, you have the option of creating your own controlled environment, which will be limited and a little forced.
I would advise against it and instead use normal method preconditions, or call a validation method inside your target method, to check security or other rules, but certainly not attributes.
If you still want to use attributes, the following can be a solution.
You will need to make some kind of CustomSecurityManager which will execute the targeted method; it will have the responsibilities of:
Finding the target method
Collecting the specific custom attributes and running them, throwing an exception or returning false if there are issues
Running the method if the attributes are valid
Note: only the CustomSecurityManager calls GetNameItems.
Unit testing can be achieved by injecting an ICustomSecurityManager, which can be mocked to return expected results.
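A minimal, reflection-based sketch of what such a manager could look like (all the names and the attribute lookup strategy are assumptions):

using System;
using System.Reflection;
using System.Security;
using System.Threading;

public interface ICustomSecurityManager
{
    TResult Execute<TResult>(object target, string methodName, params object[] args);
}

public class CustomSecurityManager : ICustomSecurityManager
{
    public TResult Execute<TResult>(object target, string methodName, params object[] args)
    {
        // 1. Find the target method
        MethodInfo method = target.GetType().GetMethod(methodName);

        // 2. Collect the security attributes and run the check
        bool requiresAdmin = method.GetCustomAttributes(typeof(EnsureUserIsAdminAttribute), true).Length > 0;
        if (requiresAdmin && !Thread.CurrentPrincipal.IsInRole("Admin"))
        {
            throw new SecurityException("Admin role required for " + methodName);
        }

        // 3. Run the method if the attributes are satisfied
        return (TResult)method.Invoke(target, args);
    }
}

The business logic would then call something like securityManager.Execute<IEnumerable<NameItem>>(target, "GetNameItems", id), and a unit test would inject a mocked ICustomSecurityManager in its place.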
Closed. This question needs to be more focused. It is not currently accepting answers. Closed 7 years ago.
I want to keep this as short as possible:
First: I read related posts, but they didn't help a lot.
See: What is a quality real world example of TDD in action?
Or: How do you do TDD in a non-trivial application?
Or: TDD in ASP.NET MVC: where to start?
Background:
I'm not a total TDD beginner; I know the principles
I have read Robert C. Martin and Michael Feathers and the like
TDD works fine for me in Bowling and TicTacToe games
But I'm kind of lost when I want to do TDD in my workplace. It's not about mocking; I do know how to mock the dependencies.
It's more:
WHEN do I code WHAT?
WHERE do I begin?
And: WHEN and HOW do I implement the "database" or "file system" code? It's fine to mock it, but at the integration test stage I need it as real code.
Imagine this (example):
Write a program which reads a list of all customers from a database.
Related to the customer IDs it has to search data from a csv/Excel file.
Then the business logic does magic to it.
At the end the results are written to the database (different table).
I never found a TDD example for an application like that.
EDIT:
How would you as a programmer implement this example in TDD style?
PS: I'm not talking about DB unit testing or GUI unit testing.
You could start without a database entirely. Just write an interface with the most basic method to retrieve the customers:
public interface ICustomerHandler
{
    List<Customer> GetCustomers(int customerId);
}
Then, using your mocking framework, mock that interface while writing a test for a method that will use and refer to an implementation of the interface. Create new classes along the way as needed (Customer, for instance), this makes you think about which properties are required.
[TestMethod()]
public void CreateCustomerRelationsTest()
{
    var manager = new CustomerManager(MockRepository.GenerateMock<ICustomerHandler>());

    var result = manager.CreateCustomerRelations();

    Assert.AreEqual(1, result.HappyCustomers);
    Assert.AreEqual(0, result.UnhappyCustomers);
}
Writing this bogus test tells you what classes are needed, like a CustomerManager class which has a CreateCustomerRelations method that returns a result with two properties. The method should refer to the GetCustomers method on the interface, using the mock instance that is injected through the class constructor.
Do just enough to make the project build and let you run the test for the first time, which will fail as there's no logic in the method being tested. However, you are off on a great start with letting the test dictate which input your method should take, and what output it should receive and assert. Defining the test conditions first helps you in creating a good design. Soon you will have enough code written to ensure the test confirms your method is well designed and behaves the way you want it to.
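A skeleton that is "just enough to make the project build" might look like this (everything here is driven by the test above; nothing is the final design):

public class CustomerRelationResult
{
    public int HappyCustomers { get; set; }
    public int UnhappyCustomers { get; set; }
}

public class CustomerManager
{
    private readonly ICustomerHandler _customerHandler;

    public CustomerManager(ICustomerHandler customerHandler)
    {
        _customerHandler = customerHandler;
    }

    public CustomerRelationResult CreateCustomerRelations()
    {
        // No logic yet, so the first test run fails, exactly as TDD expects.
        throw new NotImplementedException();
    }
}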
Think about what behaviour you are testing, and use this to drive a single higher level test. Then as you implement this functionality use TDD to drive out the behaviour you want in the classes you need to implement this functionality.
In your example I'd start with a simple no-op situation. (I'll write it in BDD language, but you could similarly implement this in code.)
Given there are no customers in the database
When I read customers and process the related data from the csv file
Then no data should be written to the database
This sort of test will allow you to get some of the basic functionality and interfaces in place without having to implement anything in your mocks (apart from maybe checking that you are not calling the code to do the final write).
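Expressed as code, that first scenario could look roughly like this (using Moq; every type name here is an assumption about classes you would create along the way):

using Moq;

[TestMethod]
public void No_customers_in_database_means_nothing_is_written()
{
    var repository = new Mock<ICustomerRepository>();
    repository.Setup(r => r.GetAllCustomers()).Returns(new List<Customer>());
    var writer = new Mock<IResultWriter>();

    var processor = new CustomerCsvProcessor(repository.Object, writer.Object, "customers.csv");
    processor.Run();

    // the final write must not happen when there are no customers
    writer.Verify(w => w.Write(It.IsAny<CustomerResult>()), Times.Never);
}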
Then I'd move on to a slightly wider example
Given there are some customers in the database
But none of these customers are in the CSV file
When I read customers and process the related data from the csv file
Then no data should be written to the database
And I would keep adding incrementally and adjusting the classes needed to do this, initially probably using mocks but eventually working up to using the real database interactions.
I'd be wary of writing a test for every class, though. This can make your tests brittle, and they can need changing every time you make a small refactoring-like change. Focus on the behaviour, not the implementation.
Closed. This question is opinion-based. It is not currently accepting answers. Closed 2 years ago.
I have a certain class Thing, and an interface that it implements, named IThing. Everybody who uses IThing can presume that it's really a Thing, since it's the only class which implements this interface, but at the same time they understand that they can only access a certain subset of the public members of Thing, and there's a pretty good design reason for this: basically, IThing is a read-only version of Thing (it's a little bit more complex than that, but let's pretend it's just the read-only/write distinction for the sake of the question).
Is it a good convention though? As an alternative, I could name this interface IThingReadOnly or name the class ThingWritable, or something like this, but it seems that these names would be bulky and less readable in a big codebase.
I also use extension methods extensively for both the interface and the class, so I have ThingExtensions and IThingExtensions as well. It's very useful, because everyone who reads the code of these extensions can operate from the assumption that they only use public members of Thing and IThing, respectively. However, having both ThingExtensions and IThingExtensions files sitting alongside each other in a project seems a little bit off for some reason.
So, which one is a better option — to keep Thing and IThing alongside, or to rename one of them?
Update about close vote:
This is an opinion-based question, because it's a question about best practice — but it's not a primarily opinion-based question (please mind the distinction). SO has a lot of great questions and answers about best practices, so I think that either there's a difference between this question and other best-practice questions that I don't see, or this question has just the same right to exist as any other best-practice question.
First off, I'd suggest using extension methods only for types you do not have control over, e.g. .NET types like IEnumerable. However, you may consider creating two different interfaces: one base interface for reading (let's call it IThingRead) and another one that represents your actual Thing type (IThingWrite) with some write members.
Anyway, creating an interface for every class is good practice and eases testing by letting you mock your types.
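A minimal sketch of that split (member names are placeholders, not your actual Thing):

public interface IThingRead
{
    string Name { get; }
}

public interface IThingWrite : IThingRead
{
    void Rename(string newName);   // write operations live only on the write interface
}

public class Thing : IThingWrite
{
    public string Name { get; private set; }

    public void Rename(string newName)
    {
        Name = newName;
    }
}

Code that only needs to read accepts IThingRead; code that needs to modify accepts IThingWrite.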
If you're sure that you will not need another implementation of the interface and you don't need to mock the interface for test purposes, you can simply remove the interface and use the concrete class.
Otherwise, keep using IThing and Thing (this is the normal naming convention).
I would create only IThingExtensions, though.