Don't run tests from a TestClass if a condition is not met - C#

I am using Assembly Microsoft.VisualStudio.TestPlatform.TestFramework, Version=14.0.0.0
I have a [TestClass] which includes only tests concerned with publishing and subscribing to messages. However, it's possible that the message broker client we use is not available in the current environment (or is disabled for some other reason), which is controlled by a flag in the appsettings:
var messageBrokerConfig = configuration.GetSection("MessageBroker").Get<BrokerConfig>();
if (messageBrokerConfig.Enabled)...
This is how I know whether the message broker is available and whether I can execute the tests. Now, a simple solution that immediately comes to mind is to have some (maybe private) method which would be called at the beginning of each test, like:
private bool ShouldExecute()
{
    return configuration.GetSection("MessageBroker").Get<BrokerConfig>().Enabled;
}
But then I would have to put this at the beginning of every test, which is pretty far from DRY.
In a perfect scenario I would be able to enable/disable the [TestClass] attribute, or have some other attribute or mechanism that prevents all the tests from executing. At worst, maybe I could do it when the [ClassInitialize] method is invoked. But I don't know the framework very well, so I'm not sure what my options are here.
I'm pretty sure there should be a solution that lets me put the code in one place and decide for the whole class, but I just don't know what it is.
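For what it's worth, the closest thing I've come up with myself is doing the check once and calling Assert.Inconclusive from [TestInitialize], which at least keeps the check in one place, but I'm hoping there is a cleaner, attribute-based way. A rough sketch (the class and test names here are made up, and it assumes appsettings.json is copied to the test output):
using Microsoft.Extensions.Configuration;
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class MessageBrokerTests
{
    private static bool _brokerEnabled;

    [ClassInitialize]
    public static void ClassInit(TestContext context)
    {
        // Read the flag once for the whole class.
        var configuration = new ConfigurationBuilder()
            .AddJsonFile("appsettings.json")
            .Build();

        _brokerEnabled = configuration.GetSection("MessageBroker").Get<BrokerConfig>().Enabled;
    }

    [TestInitialize]
    public void TestInit()
    {
        if (!_brokerEnabled)
        {
            // Reports every test in the class as Inconclusive instead of running it.
            Assert.Inconclusive("MessageBroker is disabled in this environment.");
        }
    }

    [TestMethod]
    public void Publishing_And_Subscribing_Works()
    {
        // actual publish/subscribe test here
    }
}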

Related

How do I take a screenshot when there's an AssertionException? (Xamarin.UITest, C#)

I have built a Xamarin.UITest suite and I'm testing locally on a real Android device. I notice that one or two tests fail, but not necessarily the same tests. Whenever I try to re-run such tests to see why they're failing, they pass! Therefore I need to add functionality to see what's on the screen when a test fails. I don't seem to be able to find a guide specifically for this - just bits and pieces...
I've added EnableLocalScreenshots() to my StartApp() call, but I'm not sure of the next steps. So I have some questions:
Do I need to specify the location of where the screenshots are saved, or is this done automatically?
Do I need to explicitly write code to take a screenshot when there's an AssertionException, or is this what the screenshot functionality does by default?
If I need to write code to take a screenshot when there's an AssertionException, please can you point me to an example of this code so I can add it to my suite?
Thank you
EDIT: Re: Take screenshot on test failure + exceptions
I have tried the following in my Hooks.cs file:
[OneTimeTearDown]
public void OneTimeTearDown()
{
    if (TestContext.CurrentContext.Result.Outcome == ResultState.Error || TestContext.CurrentContext.Result.Outcome == ResultState.Failure)
    {
        App.Screenshot(TestContext.CurrentContext.Test.Name);
    }
}
Upon debugging this method, I find that it is never called. Not even if I use the TearDown and AfterScenario attributes. So, great that IntelliSense likes my code; bad that it never gets called. It shouldn't be this hard!
I am using SpecFlow in this suite; could that have anything to do with why I'm getting this issue? It's also why I can't implement the solution from the above thread as follows, because SpecFlow manages the NUnit tests...
UITest(() =>
{
    // Do your test steps here, including asserts etc.
    // Any exceptions will be caught by the base class
    // and screenshots will be taken
});
Ok so for me this was the answer.
Added [Binding] attribute above class (dumb error on my part)...
[Binding]
class TestInitialise : BasePage
Added [AfterScenario()] above my screenshot method, and made it a static method...
[AfterScenario()]
public static void TakeScreenshot()
Specified where to save the screenshot...
App.Screenshot(TestContext.CurrentContext.Test.Name).CopyTo(@"C:\Users\me\Desktop\" + TestContext.CurrentContext.Test.Name + ".png");
So I'm guessing App.Screenshot(string title) simply takes a screenshot and holds it in memory, but you actually need to save it somewhere explicitly to get it, rather than assuming it just saves to a default location.
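Putting those pieces together, the whole hook in my Hooks.cs now looks roughly like this (the desktop path is just where I happen to copy the files to):
[Binding]
class TestInitialise : BasePage
{
    [AfterScenario()]
    public static void TakeScreenshot()
    {
        if (TestContext.CurrentContext.Result.Outcome == ResultState.Error ||
            TestContext.CurrentContext.Result.Outcome == ResultState.Failure)
        {
            // App.Screenshot returns a FileInfo, so copy it somewhere permanent.
            App.Screenshot(TestContext.CurrentContext.Test.Name)
                .CopyTo(@"C:\Users\me\Desktop\" + TestContext.CurrentContext.Test.Name + ".png");
        }
    }
}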
That's that then!

NUnit using a separate test as a 'setup' for a subsequent test

I'm trying to figure out how to have tests fail as 'Inconclusive' if some required setup that also exists as a test fails.
This is an overly simplified example of what I'm trying to do, but hopefully it illustrates the point:
[Test] public void TestSetter() {
    Assert.That(() => myInstance.Property = "test", Throws.Nothing);
}
[Test] public void TestGetter() {
    try { TestSetter(); } catch (AssertionException) {
        Assert.Inconclusive("Unable to test, prereq failed");
    }
    Assert.That(myInstance.Property, Is.EqualTo("test"));
}
This doesn't seem to work, though: if I force the TestSetter test to fail, both tests still show the 'Failed' result instead of TestGetter resulting in 'Inconclusive'. I stepped through the code, and it's definitely hitting the Assert.Inconclusive call, but it seems the earlier AssertionException still takes precedence.
Is there any way of getting this to correctly report 'Inconclusive'?
Using C#7, with NUnit 3.6.0
Try upgrading to NUnit Framework v3.6.1, and see if that runs as you expect.
There was a change in v3.6.0 relating to catching AssertionExceptions, which was later reverted.
See https://github.com/nunit/nunit/issues/2043 for details. To summarise: you should be aware that catching AssertionExceptions isn't strictly a supported interface to write tests against, and it may be worthwhile refactoring to test for setup success in a different way.
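For example, one possible refactor (just a sketch, reusing the property example from the question) is to express the prerequisite with Assume, which reports the test as Inconclusive without ever catching AssertionException:
[Test] public void TestGetter() {
    // If the setter throws, Assume raises an InconclusiveException,
    // so the test is reported as Inconclusive rather than Failed.
    Assume.That(() => myInstance.Property = "test", Throws.Nothing);
    Assert.That(myInstance.Property, Is.EqualTo("test"));
}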

How to create a RESTful web service with a TDD approach?

I've been given the task of creating a RESTful web service with JSON formatting using WCF, with the methods below, using a TDD approach; the service should store the Product as a text file on disk:
CreateProduct(Product product)
GetAProduct(int productId)
URI Templates:
POST to /MyService/Product
GET to /MyService/Product/{productId}
Creating the service and its web methods is the easy part, but how would you approach this task with TDD? You should create a test before creating the SUT code.
The rules of unit tests say they should also be independent and repeatable.
I have a number of points of confusion and issues, as below:
1) Should I write my unit tests against the actual service implementation by adding a reference to it, or against the URLs of the service (in which case I'd have to host and run the service)? Or both?
2) I was thinking one approach could be to create just one test method in which I create a product, call the CreateProduct() method, then call the GetAProduct() method and assert that the product which was sent is the one that I received. In the TearDown() event I would just remove the product which was created.
But the issues I have with the above are that:
It tests more than one feature so it's not really a unit test.
It doesn't check whether the data was stored on file correctly
Is it TDD?
If I create a separate unit test for each web method, then for example to test the GetAProduct() web method I'd have to have some test data stored physically on the server, since it can't rely on the CreateProduct() unit tests. They should be able to run independently.
Please advise.
Thanks,
I'd suggest not worrying about the web service end points and focusing on the behavior of the system. For the sake of this discussion I'll drop all the technical jargon and talk about what I see as the core business problem you're trying to solve: creating a Product Catalog.
In order to do so, start by thinking through what a product catalog does, not the technical details of how to do it. Use that as the starting point for your tests.
public class ProductCatalogTest
{
    [Test]
    public void allowsNewProductsToBeAdded() {}

    [Test]
    public void allowsUpdatesToExistingProducts() {}

    [Test]
    public void allowsFindingSpecificProductsUsingSku() {}
}
I won't go into detail about how to implement the tests and production code here, but this is a starting point. Once you've got the ProductCatalog production class worked out, you can turn your attention to the technical details like making a web service and marshaling your JSON.
I'm not a .NET guy, so this will be largely pseudocode, but it probably winds up looking something like this.
public class ProductCatalogServiceTest
{
    [Test]
    public void acceptsSkuAsParameterOnGetRequest()
    {
        var mockCatalog = new MockProductCatalog(); // Hand rolled mock here.
        var catalogService = new ProductCatalogService(mockCatalog);

        catalogService.find("some-sku-from-url");

        mockCatalog.assertFindWasCalledWith("some-sku-from-url");
    }

    [Test]
    public void returnsJsonFromGetRequest()
    {
        var mockCatalog = new MockProductCatalog(); // Hand rolled mock here.
        mockCatalog.findShouldReturn(new Product("some-sku-from-url"));
        var mockResponse = new MockHttpResponse(); // Hand rolled mock here.

        var catalogService = new ProductCatalogService(mockCatalog, mockResponse);
        catalogService.find("some-sku-from-url");

        mockResponse.assertWriteWasCalledWith("{ 'sku': 'some-sku-from-url' }");
    }
}
You've now tested end to end and test-driven the whole thing. I personally would test-drive the business logic contained in ProductCatalog and likely skip testing the marshaling, as it's likely to all be done by frameworks anyway, and it takes little code to tie the controllers into the product catalog. Your mileage may vary.
Finally, while test-driving the catalog, I would expect the code to be split into multiple classes, and mocking comes into play there so that they are unit tested rather than covered by one large integration test. Again, that's a topic for another day.
Hope that helps!
Brandon
Well, to answer your question: what I would do is write the test around the code that calls the REST service, and use something like Rhino Mocks to arrange (i.e. set up an expectation for the call), act (actually run the code which calls the unit to be tested) and assert that you get back what you expect. You could mock out the expected results of the REST call. An actual test of the REST service from front to back would be an integration test, not a unit test.
So, to be clearer, the unit test you need to write is a test around whatever actually calls the REST web service in the business logic...
Say this is your proposed implementation (let's pretend it hasn't even been written yet):
public class SomeClass
{
    private IWebServiceProxy proxy;

    public SomeClass(IWebServiceProxy proxy)
    {
        this.proxy = proxy;
    }

    public void PostTheProduct()
    {
        proxy.Post("/MyService/Product");
    }

    public void RestGetCall()
    {
        proxy.Get("/MyService/Product/{productId}");
    }
}
This is one of the tests you might consider writing.
[TestFixture]
public class TestingOurCalls
{
    [Test]
    public void TestTheProductCall()
    {
        var webServiceProxy = MockRepository.GenerateMock<IWebServiceProxy>();
        SomeClass someClass = new SomeClass(webServiceProxy);

        webServiceProxy.Expect(p => p.Post("/MyService/Product"));

        someClass.PostTheProduct();

        webServiceProxy.VerifyAllExpectations();
    }
}

NCrunch: all tests pass on first run, but fail after a code change and when the 'run all' button is pressed

I'm running NCrunch in a new MVC 4 solution in VS2012, using NUnit and Ninject.
When I first open the solution, all 50 or so tests run and pass successfully.
After I make any code change (even just adding an empty space), NCrunch reports that most of my unit tests fail. The same thing happens if I press the 'run all tests' button in the NCrunch window.
But if I hit the 'Run all tests visible here' button, all 50 tests pass again and NCrunch reports everything is fine.
Also, when I run each test individually, they pass every time.
When they do fail, they seem to be failing in my Ninject setup code:
Error: TestFixtureSetUp failed in ControllerTestSetup
public class ControllerTestSetup
{
    [SetUp]
    public void InitIntegrationTest()
    {
        var context = IntegrationTestContext.Instance;
        context.Init();
        context.NinjectKernel.Load<MediGapWebTestModule>();
    }

    [TearDown]
    public void DisposeIntegrationTest()
    {
        IntegrationTestContext.Instance.Dispose();
    }
}
public class IntegrationTestContext : IDisposable
{
    private static IntegrationTestContext _instance = null;
    private static readonly object _monitor = new object();

    private IntegrationTestContext() { }

    public static IntegrationTestContext Instance
    {
        get
        {
            if (_instance == null)
            {
                lock (_monitor)
                {
                    if (_instance == null)
                    {
                        _instance = new IntegrationTestContext();
                    }
                }
            }
            return _instance;
        }
    }
}
All the tests also run in the ReSharper test runner without problems every time.
Does anyone know what could be causing this?
I'm guessing it's something to do with the singleton lock code inside the Instance property, but I'm not sure.
==============================================================================
Progress:
I was able to track this down to an error in the Ninject setup method above, by wrapping it in a try/catch statement and writing the error to the output window.
The exception was caused by trying to load a module more than once, even though I definitely don't load it twice and I don't use any kind of automatic module loading.
This happens on the lines
LocalSessionFactoryModule.SetMappingAssemblies(() => new[] { typeof(ProviderMap).Assembly });
_kernel.Load<LocalSessionFactoryModule>();
_sessionFactory = _kernel.Get<ISessionFactory>();
where LocalSessionFactoryModule is the Ninject module class derived from NinjectModule.
Why is this only happening with NCrunch, and what can I do to solve this issue? Is there a way to check if a module has already been loaded?
NCrunch will never execute tests concurrently within the same process, so unless you have multi-threaded behaviour inside your test logic, it should be safe to say that this isn't an issue caused by the locking or threading over the singleton.
As you've already tried disabling parallel execution and this hasn't made a difference, I'm assuming that the problem wouldn't be caused by concurrent use of resources outside the test runner process (i.e. files on disk).
This means that the problem is almost certainly related to the sequence in which the tests are being executed. Almost all manual test runners (including Resharper) will run tests in a defined sequence from start to finish. This is good for consistency, but it can mask problems that may surface when the tests are run in an inconsistent/random order. NCrunch will execute tests in order of priority and can also reuse test processes between test runs, which can make the runtime behaviour of your tests different if they haven't been designed with this in mind.
A useful way to surface (and thus debug) sequence related issues is to try running your tests in a manually defined order by using NCrunch. If you right-click a test inside the NCrunch Tests Window, under the 'Advanced' menu you'll find an option to run a test using an existing task runner process. Try this action against several of your tests to see if you can reproduce the sequence that surfaces the problem. When it happens, you should easily be able to get a debugger onto the test and find out why it is failing.
Most sequence related problems are caused by uncleared static members, so make sure each of your tests is written in the assumption that existing state may be left behind by another test that has been run within the process. Another option is to ensure all state is fully cleared by tests on tear down (although in my opinion, this is often a less pragmatic approach).
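On the specific question of checking whether a module has already been loaded: Ninject's kernel exposes HasModule, so a guard along these lines (a sketch only, relying on the default module Name being the type's full name) should make the setup tolerant of a reused, already-initialised test process:
var module = new LocalSessionFactoryModule();

// Skip the Load if a previous test run in the same (reused) process already loaded this module.
if (!_kernel.HasModule(module.Name))
{
    LocalSessionFactoryModule.SetMappingAssemblies(() => new[] { typeof(ProviderMap).Assembly });
    _kernel.Load(module);
}

_sessionFactory = _kernel.Get<ISessionFactory>();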

Unit-tests and validation logic

I am currently writing some unit tests for a business-logic class that includes validation routines. For example:
public User CreateUser(string username, string password, UserDetails details)
{
    ValidateUserDetails(details);
    ValidateUsername(username);
    ValidatePassword(password);

    // create and return user
}
Should my test fixture contain tests for every possible validation error that can occur in the Validate* methods, or is it better to leave that for a separate set of tests? Or perhaps the validation logic should be refactored out somehow?
My reasoning is that if I decide to test for all the validation errors that can occur within CreateUser, the test fixture will become quite bloated. And most of the validation methods are used from more than one place...
Any great patterns or suggestions in this case?
Every test should only fail for one reason and only one test should fail for that reason.
This helps a lot with writing a maintainable set of unit tests.
I'd write a couple of tests each for ValidateUserDetails, ValidateUsername and ValidateUserPassword. Then you only need to test that CreateUser calls those functions.
Re-read your question; it seems I misunderstood things a bit.
You might be interested in what J.P. Boodhoo has written on his style of behaviour-driven design.
http://blog.developwithpassion.com/2008/12/22/how-im-currently-writing-my-bdd-style-tests-part-2/
BDD is becoming a very overloaded term; everyone has a different definition and different tools for it. As far as I can see, what JP Boodhoo is doing is splitting up test fixtures according to concern rather than class.
For example, you could create separate fixtures for testing validation of user details, validation of username, validation of password, and creating users. The idea of BDD is that by naming the test fixtures and tests the right way, you can create something that almost reads like documentation just by printing out the fixture and test names. Another advantage of grouping your tests by concern rather than by class is that you'll probably only need one setup and teardown routine per fixture.
I haven't had much experience with this myself, though.
If you're interested in reading more, JP Boodhoo has posted a lot about this on his blog (see the link above), or you can listen to the .NET Rocks episode with Scott Bellware where he talks about a similar way of grouping and naming tests: http://www.dotnetrocks.com/default.aspx?showNum=406
I hope this is more what you're looking for.
You definitely need to test validation methods.
There is no need to test other methods for all possible combinations of arguments just to make sure validation is performed.
You seem to be mixing Validation and Design by Contract.
Validation is usually performed to give the user friendly notification that their input is incorrect. It is closely related to business logic (the password is not strong enough, the email has an incorrect format, etc.).
Design by Contract makes sure your code can execute without throwing exceptions later on (even without contracts you would get an exception, but much later and probably a more obscure one).
Regarding the application layer that should contain the validation logic: probably the best place is the service layer (in Fowler's sense), which defines the application boundary and is a good place to sanitize application input. Inside that boundary there should not be any validation logic, only Design by Contract to detect errors earlier.
So finally, write validation logic tests where you want to give the user a friendly notification that they have made a mistake. Otherwise use Design by Contract and keep throwing exceptions.
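As a rough illustration of the distinction (the method name and message here are made up, not from the question):
public IList<string> CheckPassword(string password)
{
    // Design by Contract: a null argument is a programming error, so fail fast.
    if (password == null)
        throw new ArgumentNullException("password");

    // Validation: a business rule about user input, reported with a friendly message.
    var errors = new List<string>();
    if (password.Length < 8)
        errors.Add("Password is not strong enough.");
    return errors;
}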
Let Unit Tests (plural) against the Validate methods confirm their correct functioning.
Let Unit Tests (plural) against the CreateUser method confirm its correct functioning.
If CreateUser is merely required to call the validate methods, but is not required to make validation decisions itself, then the tests against CreateUser should confirm that requirement.
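To make that last point concrete, here is a rough sketch of such a test (assuming the validators sit behind an interface; IUserValidator, UserService and the use of Rhino Mocks are my own additions, not from the question):
[TestFixture]
public class CreateUserTests
{
    [Test]
    public void CreateUser_delegates_validation()
    {
        var validator = MockRepository.GenerateMock<IUserValidator>();
        var service = new UserService(validator);

        service.CreateUser("bob", "secret", new UserDetails());

        // Confirms the requirement that CreateUser calls the validators,
        // without re-testing every individual validation rule here.
        validator.AssertWasCalled(v => v.ValidateUsername("bob"));
        validator.AssertWasCalled(v => v.ValidatePassword("secret"));
        validator.AssertWasCalled(v => v.ValidateUserDetails(Arg<UserDetails>.Is.Anything));
    }
}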
What is the responsibility of your business logic class and does it do something apart from the validation? I think I'd be tempted to move the validation routines into a class of its own (UserValidator) or multiple classes (UserDetailsValidator + UserCredentialsValidator) depending on your context and then provide mocks for the tests. So your class now would look something like:
public User CreateUser(string username, string password, UserDetails details)
{
    if (!Validator.isValid(details, username, password))
    {
        // what happens when not valid
    }

    // create and return user
}
You can then provide separate unit tests purely for the validation, and your tests for the business logic class can focus on when validation passes and when validation fails, as well as all your other tests.
I would add a bunch of tests for each ValidateXXX method. Then for CreateUser, create three test cases checking what happens when each of ValidateUserDetails, ValidateUsername and ValidatePassword fails while the others succeed.
I'm using Lokad Shared Library for defining business validation rules. Here's how I test corner cases (sample from the open-source):
[Test]
public void Test()
{
    ShouldPass("rinat.abdullin@lokad.com", "pwd", "http://ws.lokad.com/TimeSerieS2.asmx");
    ShouldPass("some@nowhere.net", "pwd", "http://127.0.0.1/TimeSerieS2.asmx");
    ShouldPass("rinat.abdullin@lokad.com", "pwd", "http://sandbox-ws.lokad.com/TimeSerieS2.asmx");

    ShouldFail("invalid", "pwd", "http://ws.lokad.com/TimeSerieS.asmx");
    ShouldFail("rinat.abdullin@lokad.com", "pwd", "http://identity-theift.com/TimeSerieS2.asmx");
}

static void ShouldFail(string username, string pwd, string url)
{
    try
    {
        ShouldPass(username, pwd, url);
        Assert.Fail("Expected {0}", typeof(RuleException).Name);
    }
    catch (RuleException)
    {
    }
}

static void ShouldPass(string username, string pwd, string url)
{
    var connection = new ServiceConnection(username, pwd, new Uri(url));
    Enforce.That(connection, ApiRules.ValidConnection);
}
Where ValidConnection rule is defined as:
public static void ValidConnection(ServiceConnection connection, IScope scope)
{
    scope.Validate(connection.Username, "UserName", StringIs.Limited(6, 256), StringIs.ValidEmail);
    scope.Validate(connection.Password, "Password", StringIs.Limited(1, 256));
    scope.Validate(connection.Endpoint, "Endpoint", Endpoint);
}

static void Endpoint(Uri obj, IScope scope)
{
    var local = obj.LocalPath.ToLowerInvariant();

    if (local == "/timeseries.asmx")
    {
        scope.Error("Please, use TimeSeries2.asmx");
    }
    else if (local != "/timeseries2.asmx")
    {
        scope.Error("Unsupported local address '{0}'", local);
    }

    if (!obj.IsLoopback)
    {
        var host = obj.Host.ToLowerInvariant();
        if ((host != "ws.lokad.com") && (host != "sandbox-ws.lokad.com"))
            scope.Error("Unknown host '{0}'", host);
    }
}
If some failing case is discovered (e.g. a new valid connection URL is added), then the rule and the test get updated.
More on this pattern could be found in this article. Everything is Open Source so feel free to reuse or ask questions.
PS: note that primitive rules used in this sample composite rule (i.e. StringIs.ValidEmail or StringIs.Limited) are thoroughly tested on their own and thus do not need excessive unit tests.
