I've been given the task of creating a RESTful web service with JSON formatting using WCF, following a TDD approach. The service should store each Product as a text file on disk and expose the methods below:
CreateProduct(Product product)
GetAProduct(int productId)
URI Templates:
POST to /MyService/Product
GET to /MyService/Product/{productId}
Creating the service and its web methods is the easy part, but how would you approach this task with TDD? You should create a test before creating the SUT code.
The rules of unit tests say they should also be independent and repeatable.
I have a number of questions and issues, listed below:
1) Should I write my unit tests against the actual service implementation by adding a reference to it, or against the URLs of the service (in which case I'd have to host and run the service)? Or both?
2) I was thinking one approach could be to create just one test method in which I create a product, call CreateProduct(), then call GetAProduct() and assert that the product which was sent is the one I received. In TearDown() I would just remove the product that was created.
But the issues I have with the above are that:
It tests more than one feature, so it's not really a unit test.
It doesn't check whether the data was stored in the file correctly.
Is it even TDD?
If I create a separate unit test for each web method, then for calling the GetAProduct() web method, for example, I'd have to have some test data already stored physically on the server, since it can't rely on the CreateProduct() unit test; the tests should be able to run independently.
Please advise.
Thanks,
I'd suggest not worrying about the web service endpoints and focusing on the behavior of the system. For the sake of this discussion I'll drop all the technical jargon and talk about what I see as the core business problem you're trying to solve: creating a product catalog.
In order to do so, start by thinking through what a product catalog does, not the technical details of how it does it. Use that as the starting point for your tests.
public class ProductCatalogTest
{
    [Test]
    public void allowsNewProductsToBeAdded() {}

    [Test]
    public void allowsUpdatesToExistingProducts() {}

    [Test]
    public void allowsFindingSpecificProductsUsingSku() {}
}
I won't go into detail about how to implement the tests and production code here, but this is a starting point. Once you've got the ProductCatalog production class worked out, you can turn your attention to the technical details like making a web service and marshaling your JSON.
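Before moving on, here's a rough illustration of where the first of those tests could head (a sketch only: the ProductCatalog class, its AddProduct and FindBySku members, and the directory-backed constructor are assumptions, not something from the question; System.IO's Path helpers are used for a temporary storage folder):

[Test]
public void allowsNewProductsToBeAdded()
{
    // Hypothetical ProductCatalog that persists each product as a text file
    // under the given directory, matching the storage requirement above.
    var storageDir = Path.Combine(Path.GetTempPath(), Path.GetRandomFileName());
    var catalog = new ProductCatalog(storageDir);

    catalog.AddProduct(new Product("some-sku", "Some product name"));

    Assert.IsNotNull(catalog.FindBySku("some-sku"));
}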
I'm not a .NET guy, so this will be largely pseudocode, but it probably winds up looking something like this.
public class ProductCatalogServiceTest
{
    [Test]
    public void acceptsSkuAsParameterOnGetRequest()
    {
        var mockCatalog = new MockProductCatalog(); // Hand-rolled mock here.
        var catalogService = new ProductCatalogService(mockCatalog);

        catalogService.find("some-sku-from-url");

        mockCatalog.assertFindWasCalledWith("some-sku-from-url");
    }

    [Test]
    public void returnsJsonFromGetRequest()
    {
        var mockCatalog = new MockProductCatalog();  // Hand-rolled mock here.
        mockCatalog.findShouldReturn(new Product("some-sku-from-url"));
        var mockResponse = new MockHttpResponse();   // Hand-rolled mock here.
        var catalogService = new ProductCatalogService(mockCatalog, mockResponse);

        catalogService.find("some-sku-from-url");

        mockResponse.assertWriteWasCalledWith("{ 'sku': 'some-sku-from-url' }");
    }
}
You've now tested end to end, and test drove the whole thing. I personally would test drive the business logic contained in ProductCatalog and likely skip testing the marshaling as it's likely to all be done by frameworks anyway and it takes little code to tie the controllers into the product catalog. Your mileage may vary.
Finally, while test driving the catalog, I would expect the code to be split into multiple classes and mocking comes into play there so they would be unit tested, not a large integration test. Again, that's a topic for another day.
Hope that helps!
Brandon
Well, to answer your question: what I would do is write the test around the code that calls the REST service and use something like Rhino Mocks to arrange (i.e. set up an expectation for the call), act (actually run the code which calls the unit to be tested), and assert that you get back what you expect. You could mock out the expected results of the REST call. An actual front-to-back test of the REST service would be an integration test, not a unit test.
So, to be clearer, the unit test you need to write is a test around whatever piece of your business logic actually calls the REST web service...
Say this is your proposed implementation (let's pretend it hasn't even been written yet):
public class SomeClass
{
    private IWebServiceProxy proxy;

    public SomeClass(IWebServiceProxy proxy)
    {
        this.proxy = proxy;
    }

    public void PostTheProduct()
    {
        proxy.Post("/MyService/Product");
    }

    public void RestGetCall()
    {
        proxy.Get("/MyService/Product/{productId}");
    }
}
This is one of the tests you might consider writing.
[TestFixture]
public class TestingOurCalls
{
    [Test]
    public void TestTheProductCall()
    {
        var webServiceProxy = MockRepository.GenerateMock<IWebServiceProxy>();
        SomeClass someClass = new SomeClass(webServiceProxy);

        webServiceProxy.Expect(p => p.Post("/MyService/Product"));

        someClass.PostTheProduct();

        webServiceProxy.VerifyAllExpectations();
    }
}
I am having trouble when testing a controller, because there are some lines in my Startup that are null when testing. I want to add a condition to run these lines only if it's not a test run.
// Desired method that detects whether we are running under test
if (!this.isTesting())
{
    SwaggerConfig.ConfigureServices(services, this.AuthConfiguration, this.ApiMetadata.Version);
}
The correct answer (although of no help): it should not be able to tell. The application should do everything it does unaware of whether it is running in production or under test.
However, to test the application in a simpler setting, you can use fake or mock-up modules that are loaded instead of the heavy-weight production modules. But in order to do that, you have to refactor your solution and use dependency injection, for instance.
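One common shape for that refactoring (a sketch only: the virtual-method pattern and the TestStartup class are assumptions, not code from the question) is to move the heavy registration into an overridable member so a test host can replace it without the application ever checking a flag:

using Microsoft.Extensions.DependencyInjection;

public class Startup
{
    public void ConfigureServices(IServiceCollection services)
    {
        // ...other registrations from the question's Startup...
        ConfigureSwagger(services);
    }

    // Production keeps the heavy-weight Swagger registration here.
    // AuthConfiguration and ApiMetadata are the properties from the question's Startup.
    protected virtual void ConfigureSwagger(IServiceCollection services)
    {
        SwaggerConfig.ConfigureServices(services, this.AuthConfiguration, this.ApiMetadata.Version);
    }
}

// Used only when hosting the application for tests; the production code
// never knows it is being tested.
public class TestStartup : Startup
{
    protected override void ConfigureSwagger(IServiceCollection services)
    {
        // Intentionally empty: Swagger isn't needed under test.
    }
}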
Some links I found:
Designing with interfaces
Mock Objects
Some more on Mock objects
It really depends on which framework you use for testing. It can be MSTest, NUnit or whatever.
The rule of thumb is that your application should not know whether it is being tested, which means everything should be configured before the actual test run through injection of interfaces. Here is a simple example of how tests could be set up:
// This service is in need of tests. You must test its methods.
public class ProductionService : IProductionService
{
    private readonly IImSomeDependency _dep;

    public ProductionService(IImSomeDependency dep) { _dep = dep; }

    public void PrintStr(string str)
    {
        Console.WriteLine(_dep.Format(str));
    }
}

// This is the stub dependency. It contains whatever you need for a particular
// test, be it some data, some request, or just returning null.
public class TestDependency : IImSomeDependency
{
    public string Format(string str)
    {
        return "TEST:" + str;
    }
}

// This is production: here you send the SMS, the nuclear missile and everything
// else that costs you money and resources.
public class ProductionDependency : IImSomeDependency
{
    public string Format(string str)
    {
        return "PROD:" + str;
    }
}
When you run tests, you configure the system like so:
var service = new ProductionService(new TestDependency());
service.PrintStr("Hello world!");
When you run your production code you configure it like so:
var service = new ProductionService(new ProductionDependency());
service.PrintStr("Hello world!");
This way ProductionService just does its work, not knowing what is inside its dependencies, and it doesn't need an "is it testing case №431" flag.
Please, do not use test environment flags inside code if possible.
UPDATE:
See Mario_The_Spoon's explanation for a better understanding of dependency management.
I want to test my class, which calls a third-party web service. Is it possible to use FakeItEasy for this?
When I try to fake the client class from Reference.cs (auto-generated), the unit test starts and never comes back.
Reference.cs (auto-generated)

[System.Diagnostics.DebuggerStepThroughAttribute()]
[System.CodeDom.Compiler.GeneratedCodeAttribute("System.ServiceModel", "4.0.0.0")]
public partial class ws_AccessoryClient : System.ServiceModel.ClientBase<AccessoryService.ws_Accessory>,
    AccessoryService.ws_Accessory
{
    public ws_AccessoryClient()
    {
    }

    public ws_AccessoryClient(string endpointConfigurationName) :
        base(endpointConfigurationName)
    {
    }

    public AccessoryService.ResponseMessageOf_ListOf_SomeMethodInfo SomeMethod(
        AccessoryService.RequestMessageOf_SomeMethod request)
    {
        return base.Channel.SomeMethod(request);
    }
}
Test.cs
[Test]
public void DoBusinessLogicTryTest()
{
    var accessoryProxy = A.Fake<ws_AccessoryClient>();
}
As has been mentioned, you may not want to do what you are proposing for unit testing, as it introduces more noise than is necessary for a unit test, which could instead use mocked interfaces. However, it is a valid approach for integration testing: it lets you check that your WCF wiring is working as you expect. It also allows you to test your application as a whole if you are adopting a more behaviour-driven style of testing where you want to mock as little as possible.
I use this approach myself for spinning up fake endpoints using NSubstitute, which is covered in my blog post Hosting a Mock as a WCF service. The main things you need to do are: spin up a ServiceHost, give it the endpoint address you want to use, set the instance context mode to Single, and provide the mock you want to use as the endpoint.
var serviceHost = new ServiceHost(mock, new[] { baseAddress });
serviceHost.Description.Behaviors
.Find<ServiceDebugBehavior>().IncludeExceptionDetailInFaults = true;
serviceHost.Description.Behaviors
.Find<ServiceBehaviorAttribute>().InstanceContextMode = InstanceContextMode.Single;
serviceHost.AddServiceEndpoint(typeof(TMock), new BasicHttpBinding(), endpointAddress);
One thing that I do in my testing is randomly choose the port that I host the endpoint on, and inject the address into my application during the tests. That way your tests can run on other machines and build servers without clashing with ports already in use.
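A minimal sketch of that port selection (the address format and variable names are just assumptions; System.Net and System.Net.Sockets are required):

// Ask the OS for a free port by binding to port 0, then release it.
// There is a small race between releasing and reusing the port, which is
// usually acceptable for test runs.
var listener = new TcpListener(IPAddress.Loopback, 0);
listener.Start();
var port = ((IPEndPoint)listener.LocalEndpoint).Port;
listener.Stop();

var baseAddress = new Uri("http://localhost:" + port + "/MockAccessoryService");
// baseAddress is then injected into the application under test so that it calls
// the mocked endpoint instead of the real third-party service.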
After looking at your example, you might want to consider using the WCF ChannelFactory to create your client instead of using the concrete proxy client class. The ChannelFactory creates a proxy on the fly from the interface you provide, allowing you to inject the proxy into the dependent class via the service interface. This would make unit testing easier and give you a more decoupled design.
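For example, a ChannelFactory-based client might be created roughly like this (the endpoint address is an assumption; ws_Accessory is the service interface from the generated Reference.cs):

// Create a proxy for the service interface on the fly; no generated
// ws_AccessoryClient class is needed, and tests can fake ws_Accessory directly.
var factory = new ChannelFactory<AccessoryService.ws_Accessory>(
    new BasicHttpBinding(),
    new EndpointAddress("http://localhost:8080/AccessoryService"));

AccessoryService.ws_Accessory channel = factory.CreateChannel();
// Inject 'channel' (typed as ws_Accessory) into the class under test.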
You cannot (and why would you want to?).
If you want to verify that your class under test makes the call to the service, then wrap the service call in a class whose only job is to call the service, and define it with an interface.
interface ICallTheService
{
    void CallTheService();
}

class ServiceCaller : ICallTheService
{
    public void CallTheService()
    {
        // Call the service...
    }
}
Then you can fake this class and verify that your class under test invokes the CallTheService operation.
// Fake the service caller and pass it into your class under test.
var serviceCaller = A.Fake<ICallTheService>();

// Verify the invocation.
A.CallTo(() => serviceCaller.CallTheService()).MustHaveHappened();
I want to test the logic in my class, which depends on the Response from the WCF service
This is where I think you're going wrong with separation of concerns. Your test is called DoBusinessLogicTryTest, yet it has a dependency on System.ServiceModel, which is an infrastructure concern. Your business logic should be testable without this dependency. If your class under test needs to behave differently depending on the response, you could do something like this:
interface ICallTheService
{
    ServiceResponseModel CallTheService();
}

enum ServiceResponseModel
{
    Success,
    PartialSuccess,
    FailureCondition1,
    FailureCondition2,
    // etc...
}
Then you can prime the ICallTheService fake to return each of the possible responses and test your class based on this.
A.CallTo(() => serviceCaller.CallTheService()).Returns(ServiceResponseModel.Success);
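A test built around that could look something like the sketch below (AccessoryUpdater and its Update method are hypothetical stand-ins for your class under test; only the shape matters):

[Test]
public void update_reports_success_when_the_service_call_succeeds()
{
    var serviceCaller = A.Fake<ICallTheService>();
    A.CallTo(() => serviceCaller.CallTheService()).Returns(ServiceResponseModel.Success);

    // Hypothetical class under test that consumes ICallTheService.
    var sut = new AccessoryUpdater(serviceCaller);
    var result = sut.Update();

    Assert.IsTrue(result);
}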
For example, whether some exceptions (defined in WCF) are handled correctly
This also has nothing to do with business logic. The actual handling of exceptions is the responsibility of the ICallTheService implementation. In fact, I would introduce another class for this, whose job would be to translate the various possible exceptions from System.ServiceModel into your response model, e.g.:
class WCFErrorResponseTranslator
{
    public ServiceResponseModel TranslateWCFException(Exception ex)
    {
        if (ex.GetType() == typeof(TimeoutException)) { return ServiceResponseModel.TimeOut; }
        // etc.
    }
}
This behaviour could then be tested in isolation.
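For instance (a sketch, assuming the translator and response enum shown above), a test for the timeout case could be as small as:

[Test]
public void translates_a_wcf_timeout_into_the_timeout_response()
{
    var translator = new WCFErrorResponseTranslator();

    var result = translator.TranslateWCFException(new TimeoutException());

    Assert.AreEqual(ServiceResponseModel.TimeOut, result);
}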
I'm still trying to follow the path to TDD.
Let's say I have a SunSystemBroker that is triggered when a file is uploaded to a shared folder. This broker is designed to open the file, extract records from it, try to find the associated payments in another system, and finally call a workflow.
If I want to follow TDD to develop the IBroker.Process() method, how should I go about it?
Note: brokers are independent assemblies inheriting from IBroker and loaded by a console app (like plugins). This console app is in charge of triggering each broker.
public interface IFileTriggeredBroker : IBroker
{
    FileSystemTrigger Trigger { get; }
    void Process(string file);
}

public class SunSystemPaymentBroker : IFileTriggeredBroker
{
    private readonly IDbDatasourceFactory _hrdbFactory;
    private readonly IExcelDatasourceFactory _xlFactory;
    private readonly IK2DatasourceFactory _k2Factory;
    private ILog _log;

    public void Process(string file)
    {
        (...)
        // _xlFactory.Create(file) > Extract
        // _hrdbFactory.Create() > Find
        // Compare Records
        // _k2Factory.Create > Start
    }
}
Each method is tested individually.
Thank you
Seb
Given that you say each method:
_xlFactory.Create(file);
_hrdbFactory.Create();
// Compare Records
_k2Factory.Create();
is tested individually, there is very little logic to test within Process(file).
If you use something like Moq, you can check that the calls occur:
// Arrange
const string File = "file.xlsx";
var xlFactory = new Mock<IExcelDatasourceFactory>();
var hrdbFactory = new Mock<IDbDatasourceFactory>();
var k2Factory = new Mock<IK2DatasourceFactory>();

// Act
var sut = new SunSystemPaymentBroker(xlFactory.Object, hrdbFactory.Object, k2Factory.Object); // I'm assuming you're using constructor injection
sut.Process(File);

// Assert
xlFactory.Verify(m => m.Create(File), Times.Once);
hrdbFactory.Verify(m => m.Create(), Times.Once);
k2Factory.Verify(m => m.Create(), Times.Once);
For brevity, I've done this as a single test, but breaking it into three tests, each with a single "assert" (the Verify call), is more realistic. For TDD you would write each test before wiring up the corresponding call inside Process(file).
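The Process implementation those tests drive out could end up as thin as the sketch below (the Extract/Find/Start steps are placeholders for whatever the factory products expose; only the three Create calls are asserted above):

public void Process(string file)
{
    var workbook = _xlFactory.Create(file);   // > Extract records
    var payments = _hrdbFactory.Create();     // > Find associated payments
    // Compare the extracted records against the payments found...
    var workflow = _k2Factory.Create();       // > Start the workflow
}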
You may also want to look at having larger, integration-level tests where you pass in concrete implementations of IExcelDatasourceFactory, IK2DatasourceFactory and IDbDatasourceFactory and exercise the system in more depth.
In the book Growing Object-Oriented Software, Guided by Tests, this would be defined as an acceptance test: written before work begins, and failing while the feature is added in smaller TDD loops of functionality that work toward the overall feature.
You have two different issues:
1) A method is designed to perform many tasks.
Make your code SOLID and apply the single responsibility principle: split it into methods that are each responsible for only one task.
2) You want to test a procedure that works by side effect (it changes its environment), not a pure function.
So I would advise you to split your code into calls to pure functions (i.e. no side effects).
Read also https://msdn.microsoft.com/en-us/library/aa730844%28v=vs.80%29.aspx
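As a rough sketch of that split (the Record and Payment types and the Reference property are invented purely for illustration; System.Linq and System.Collections.Generic are assumed), the record comparison can become a pure function that is trivial to unit test without mocks, leaving Process as a thin orchestrator:

// Pure function: same input always gives the same output and there is no I/O,
// so it can be unit tested directly, without any mocks or test doubles.
public static IReadOnlyList<Record> FindUnmatchedRecords(
    IReadOnlyList<Record> extracted,
    IReadOnlyList<Payment> payments)
{
    return extracted
        .Where(r => !payments.Any(p => p.Reference == r.Reference))
        .ToList();
}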
Say I have some global application setup code, defined in my Global.asax, Application_Start. For example to disable certificate checking:
public class WebApiApplication : System.Web.HttpApplication
{
    protected void Application_Start()
    {
        // (...)
        ServicePointManager.ServerCertificateValidationCallback = delegate { return true; };
        // (...)
    }
}
Say I also have a unit test, which depends on the code above to be run. However in my unit test, Application_Start is not called as I'm instantiating the controller directly:
var controller = new TestSubjectController();
Is there some mechanism in ASP.NET or Web API that solves this problem? What would be the best way to define the setup code, preventing duplication in my code?
Research
I've gone through multiple questions on SO already. Most of them focus on unit testing Application_Start itself, but that's not the goal here. Other questions tend to look at testing through the application's external (HTTP) interface, whereas I'd like to be able to instantiate the controller directly in my unit tests.
In addition to Batavia's suggestion, you could use a [TestInitialize]-attributed method (or your unit testing framework of choice's equivalent) to call the common static method for all tests in a given class, which would reduce the duplication you're concerned about.
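With MSTest that might look roughly like this (CommonTestSetup.DisableCertificateChecks is an assumed name for the shared static method that Application_Start would also call):

[TestClass]
public class TestSubjectControllerTests
{
    [TestInitialize]
    public void Setup()
    {
        // Shared static setup, defined in exactly one place and also
        // invoked from Application_Start in production.
        CommonTestSetup.DisableCertificateChecks();
    }

    [TestMethod]
    public void SomeTest()
    {
        var controller = new TestSubjectController();
        // ...
    }
}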
I have a question for you guys. I have two unit tests which call web services. The value that one unit test returns should be used by another unit test method.
Example
namespace TestProject1
{
    public class UnitTest1
    {
        string TID = string.Empty;

        public void test1()
        {
            // calling web services and code
            Assert.AreNotEqual(HoID, hID);
            TID = hID;
        }

        public void test2()
        {
            // calling web services and code
            string HID = TID; // I need the TID value from the test case above here
            Assert.AreNotEqual(HID, hID);
        }
    }
}
How can I store a value in one unit test and use that value in another unit test?
In general, you shouldn't write your tests like this. You cannot ensure that your tests will run in any particular order, so there's no nice way to do this.
Instead, make the tests independent, but refactor the common part into its own (non-test) method that you can call as part of your other test.
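As a sketch of that refactoring (CreateTransactionAndGetId and CallTheWebService are invented names; the real body would be whatever test1 currently does to obtain hID):

// Shared, non-test helper: each test calls the web service itself,
// so neither test depends on the other having run first.
private string CreateTransactionAndGetId()
{
    var hID = CallTheWebService();   // hypothetical call from the original test1
    return hID;
}

[Test]
public void test2()
{
    string tid = CreateTransactionAndGetId();   // no dependency on test1
    // ...the rest of the original test2 assertions...
    Assert.IsFalse(string.IsNullOrEmpty(tid));
}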
Don't reuse any values. The order in which tests run is very often random (most common runners, like NUnit's and ReSharper's, run tests in random order; some may even run them in parallel). Instead, simply call the web service again (even if that means having two web service calls) in your second test and retrieve the value you need.
Each test (whether unit or integration) should have all the data and dependencies it needs available to it. You should never rely on other tests to set up the environment or data, as that's not what they're written for.
Think of your tests in isolation: each test is a separate unit that sets up, executes, and cleans up everything necessary to exercise a particular scenario.
Here's an example, following the outline of Oleksi's answer, of how you could organize this:
string TID = string.Empty;

[TestFixtureSetUp]
public void Given()
{
    // calling web services and code
    TID = hID;
    // calling web services and code
}

[Test]
public void assertions_on_call_1()
{
    ...
}

[Test]
public void assertions_on_call_2()
{
    if (string.IsNullOrEmpty(TID))
        Assert.Inconclusive("Prerequisites for test not met");
    ...
}