WCF makes it easy to call services synchronously or asynchronously, regardless of how the service is implemented. To accommodate clients using ChannelFactory, services can even define separate sync/async contract interfaces. For example:
[ServiceContract]
public interface IFooService
{
    [OperationContract]
    int Bar();
}

[ServiceContract(Name = "IFooService")]
public interface IAsyncFooService
{
    [OperationContract]
    Task<int> BarAsync();
}
This allows the client to reference either contract version, and WCF translates the actual API calls automatically.
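For example, a ChannelFactory client can be built against the async contract even when the service only implements the synchronous one. Here is a minimal sketch (the binding and address are placeholders, not from the original example):
// Both interfaces share the contract name "IFooService", so the async-shaped
// channel is wire-compatible with the synchronous service implementation.
var factory = new ChannelFactory<IAsyncFooService>(
    new BasicHttpBinding(),
    new EndpointAddress("http://localhost:8080/foo"));
IAsyncFooService proxy = factory.CreateChannel();
int result = await proxy.BarAsync(); // from within an async method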
One drawback to providing both contract versions is that they must be kept in-sync. If you forget to update one, the client may receive a contract mismatch exception at runtime.
Is there an easy way to unit test the interfaces to ensure they match from a WCF metadata perspective?
You can retrieve the ContractDescription and use WsdlExporter to generate the WSDL. The output MetadataSet is XML serializable, so you can compare the representations for each contract version to ensure they match:
[TestMethod]
public void ContractsMatch()
{
// Arrange
string expectedWsdl = this.GetContractString<IFooService>();
// Act
string actualWsdl = this.GetContractString<IAsyncFooService>();
// Assert
Assert.AreEqual(expectedWsdl, actualWsdl);
}
// Requires: System.ServiceModel.Description, System.Xml, System.Text, System.Linq
private string GetContractString<TContract>()
{
ContractDescription description = ContractDescription.GetContract(typeof(TContract));
WsdlExporter wsdlExporter = new WsdlExporter();
wsdlExporter.ExportContract(description);
if (wsdlExporter.Errors.Any())
{
throw new InvalidOperationException(string.Format("Failed to export WSDL: {0}", string.Join(", ", wsdlExporter.Errors.Select(e => e.Message))));
}
MetadataSet wsdlMetadata = wsdlExporter.GetGeneratedMetadata();
StringBuilder stringBuilder = new StringBuilder();
using (XmlWriter xmlWriter = XmlWriter.Create(stringBuilder))
{
    wsdlMetadata.WriteTo(xmlWriter);
}
// Read the buffer only after the writer is disposed, so all content is flushed.
return stringBuilder.ToString();
}
Your own answer is great. As BartoszKP points out, it is more of an integration test, but that might be the best fit; you could argue that comparing two units (interfaces) to each other is not a unit test by definition.
The advantage of your approach is that you are verifying exactly what WCF makes of your interfaces. If you only want to test your own code, you could do something like this:
[TestMethod]
public void ContractsMatch()
{
var asyncMethodsTransformed = typeof(IAsyncFooService)
.GetMethods()
.Select(mi => new
{
ReturnType = mi.ReturnType,
Name = mi.Name,
Parameters = mi.GetParameters()
});
var syncMethodsTransformed = typeof(IFooService)
.GetMethods()
.Select(mi => new
{
ReturnType = WrapInTask(mi.ReturnType),
Name = Asyncify(mi.Name),
Parameters = mi.GetParameters()
});
Assert.That(asyncMethodsTransformed, Is.EquivalentTo(syncMethodsTransformed));
}
The idea is that for each method in IFooService you expect a counterpart in IAsyncFooService whose signature differs only by two clearly defined transformations:
The method name gains an "Async" suffix.
The return type is the sync return type wrapped in a Task.
WrapInTask and Asyncify are left as an exercise :-) If you like this suggestion I can expand on them.
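That said, a minimal sketch of the two helpers might look like this (assuming the plain Bar → BarAsync convention used above):
private static Type WrapInTask(Type returnType)
{
    // void maps to the non-generic Task; everything else to Task<T>
    return returnType == typeof(void)
        ? typeof(Task)
        : typeof(Task<>).MakeGenericType(returnType);
}

private static string Asyncify(string methodName)
{
    return methodName + "Async"; // Bar -> BarAsync
}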
A test like this might constrain the code more than WCF itself does (I don't know the async support very well), but even then you might want that extra strictness to enforce code consistency.
I have a simple hub that I am trying to write a test for with FakeItEasy, and the verification that the client was called is not passing. I have the example working in a separate project that uses Moq and xUnit.
public interface IScheduleHubClientContract
{
void UpdateToCheckedIn(string id);
}
public void UpdateToCheckedIn_Should_Broadcast_Id()
{
var hub = new ScheduleHub();
var clients = A.Fake<IHubCallerConnectionContext<dynamic>>();
var all = A.Fake<IScheduleHubClientContract>();
var id= "123456789";
hub.Clients = clients;
A.CallTo(() => all.UpdateToCheckedIn(A<string>.Ignored)).MustHaveHappened();
A.CallTo(() => clients.All).Returns(all);
hub.UpdateToCheckedIn(id);
}
I'm using Fixie as the unit test framework, and it reports:
FakeItEasy.ExpectationException:
Expected to find it once or more but no calls were made to the fake object.
The sample below works with xUnit & Moq:
public interface IScheduleClientContract
{
void UpdateToCheckedIn(string id);
}
[Fact]
public void UpdateToCheckedIn_Should_Broadcast_Id()
{
var hub = new ScheduleHub();
var clients = new Mock<IHubCallerConnectionContext<dynamic>>();
var all = new Mock<IScheduleClientContract>();
hub.Clients = clients.Object;
all.Setup(m=>m.UpdateToCheckedIn(It.IsAny<string>())).Verifiable();
clients.Setup(m => m.All).Returns(all.Object);
hub.UpdateToCheckedIn("id");
all.VerifyAll();
}
I'm not sure what I've missed in the conversion?
You're doing some steps in a weird order (it looks that way to me, without seeing the innards of your classes), and I believe that's the problem.
I think your key problem is that you're attempting to verify that all.UpdateToCheckedIn must have happened before even calling hub.UpdateToCheckedIn. (I don't know for sure that hub.UpdateToCheckedIn calls all.UpdateToCheckedIn, but it sounds reasonable.)
There's another problem: you configure clients.All to return all only after you assert the call to all.UpdateToCheckedIn. I'm not sure whether that matters here, but I thought I'd mention it.
The usual ordering is
arrange the fakes (and whatever else you need)
act, by exercising the system under test (hub)
assert that expected actions were taken on the fakes (or whatever other conditions you deem necessary for success)
I would have expected to see something more like
// Arrange the fakes
var all = A.Fake<IScheduleHubClientContract>();
var clients = A.Fake<IHubCallerConnectionContext<dynamic>>();
A.CallTo(() => clients.All).Returns(all); // if All has a getter, this could be clients.All = all
// … and arrange the system under test
var hub = new ScheduleHub();
hub.Clients = clients;
// Act, by exercising the system under test
var id = "123456789";
hub.UpdateToCheckedIn(id);
// Assert - verify that the expected calls were made to the Fakes
A.CallTo(() => all.UpdateToCheckedIn(A<string>.Ignored)).MustHaveHappened();
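With the arrangement done up front, MustHaveHappened observes the call that hub.UpdateToCheckedIn made. If you want a stricter check, you can also match the exact argument instead of ignoring it:
A.CallTo(() => all.UpdateToCheckedIn(id)).MustHaveHappened();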
There is this codebase where we use AutoMapper and have two layers, Domain and Service. Each has its own object for data representation, DomainItem and ServiceItem. The service gets data from the domain, then uses a constructor-injected AutoMapper instance to map:
class Service
{
    private readonly IDomain domain;
    private readonly IMapper mapper;
    public Service(IDomain domain, IMapper mapper)
    {
        this.domain = domain;
        this.mapper = mapper;
    }

    public ServiceItem Get(int id)
    {
        var domainItem = this.domain.Get(id);
        return this.mapper.Map<DomainItem, ServiceItem>(domainItem);
    }
}
Assume best practices, so the mapper has no side effects and no external dependencies. You could write a static function that converts one object to the other within seconds; it just maps fields.
With this in mind, is it a good practice to mock the mapper in unit tests like this?
[TestClass]
public class UnitTests
{
[TestMethod]
public void Test()
{
var expected = new ServiceItem();
var mockDomain = new Mock<IDomain>();
// ... setup
var mockMapper = new Mock<IMapper>();
mockMapper.Setup(x => x.Map<DomainItem, ServiceItem>(It.IsAny<DomainItem>()))
.Returns(expected);
var service = new Service(mockDomain.Object, mockMapper.Object);
var result = service.Get(0);
Assert.AreEqual(expected, result);
}
}
To me, it seems that such a unit test does not really bring any value, because it is effectively testing only the mocks. So I'd either not write it at all, or I'd use the actual mapper rather than a mock. Am I right, or am I overlooking something?
I think the issue here is that the test is badly written for what it is actually trying to achieve, which is testing Service.Get().
The way I would write this test is as follows:
[TestMethod]
public void Test()
{
var expected = new ServiceItem();
var mockDomain = new Mock<IDomain>();
var expectedDomainReturn = new DomainItem(0); // illustrative purposes only
mockDomain.Setup(x => x.Get(0)).Returns(expectedDomainReturn); // matches the domain call in Service.Get
var mockMapper = new Mock<IMapper>();
mockMapper.Setup(x => x.Map<DomainItem, ServiceItem>(It.IsAny<DomainItem>()))
.Returns(expected);
var service = new Service(mockDomain.Object, mockMapper.Object);
var result = service.Get(0);
mockDomain.Verify(x => x.Get(0), Times.Once);
mockMapper.Verify(x => x.Map<DomainItem, ServiceItem>(expectedDomainReturn), Times.Once);
}
Instead of not really checking anything, this test verifies that the correct parameters, based on the configured responses, are passed to each dependency call. You are thus not testing AutoMapper itself, and should not need to.
Checking result is basically redundant at that point, but it will get the code coverage up.
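If you instead prefer the question's other option (using the real mapper rather than a mock), the arrangement could look like the sketch below; the ItemProfile name is an assumption for illustration:
// Build a real AutoMapper instance from the production mapping profile (assumed name).
var config = new MapperConfiguration(cfg => cfg.AddProfile<ItemProfile>());
IMapper mapper = config.CreateMapper();

var service = new Service(mockDomain.Object, mapper);
var result = service.Get(0);

// Assertions can now check actual mapped values instead of a sentinel instance.
Assert.AreEqual(expectedDomainReturn.Id, result.Id); // assumes both types expose Id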
I have some functionality which depends on command-line arguments, and different arguments should lead to different results.
I can't directly "simulate" these arguments, since there is a chain of dependencies: I need to unit-test some XAML control, which depends on a view-model, which depends on a certain additional class, which fetches the command-line arguments using Environment.GetCommandLineArgs, and I can't change that last class to set the arguments manually instead of using GetCommandLineArgs.
So I'd like to know: is there any way to make Environment.GetCommandLineArgs return the value I want it to return for a certain unit test?
You need to abstract Environment.GetCommandLineArgs, or whatever eventually calls it, behind something you can mock:
public interface ICommandLineInterface {
string[] GetCommandLineArgs();
}
This can then be implemented in a concrete class like:
public class CommandInterface : ICommandLineInterface {
public string[] GetCommandLineArgs() {
return Environment.GetCommandLineArgs();
}
}
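The class that currently calls Environment.GetCommandLineArgs directly would then take the abstraction as a constructor dependency. A sketch (the ArgsConsumer name and HasFlag method are hypothetical):
using System.Linq;

public class ArgsConsumer
{
    private readonly ICommandLineInterface commandLine;

    public ArgsConsumer(ICommandLineInterface commandLine)
    {
        this.commandLine = commandLine;
    }

    public bool HasFlag(string flag)
    {
        // Goes through the abstraction, so a test can substitute a mock.
        return this.commandLine.GetCommandLineArgs().Contains(flag);
    }
}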
The wrapper can then be mocked using Moq and verified with FluentAssertions:
[TestMethod]
public void Test_Should_Simulate_Command_Line_Argument() {
// Arrange
string[] expectedArgs = new[] { "Hello", "World", "Fake", "Args" };
var mockedCLI = new Mock<ICommandLineInterface>();
mockedCLI.Setup(m => m.GetCommandLineArgs()).Returns(expectedArgs);
var target = mockedCLI.Object;
// Act
var args = target.GetCommandLineArgs();
// Assert
args.Should().NotBeNull();
args.Should().ContainInOrder(expectedArgs);
}
Since you are dealing with the Environment class, why not wrap the outside dependencies into one EnvironmentHelper class and then inject them?
Here is my suggestion:
public class EnvironmentHelper
{
Func<string[]> getEnvironmentCommandLineArgs;
// other dependency injections can be placed here
public EnvironmentHelper(Func<string[]> getEnvironmentCommandLineArgs)
{
this.getEnvironmentCommandLineArgs = getEnvironmentCommandLineArgs;
}
public string[] GetEnvironmentCommandLineArgs()
{
return getEnvironmentCommandLineArgs();
}
}
Here is the Mock method:
public static string[] GetFakeEnvironmentCommandLineArgs()
{
return new string[] { "arg1", "arg2" };
}
In your source code:
EnvironmentHelper envHelper = new EnvironmentHelper(Environment.GetCommandLineArgs);
string[] myArgs = envHelper.GetEnvironmentCommandLineArgs();
In your unit test code:
EnvironmentHelper envHelper = new EnvironmentHelper(GetFakeEnvironmentCommandLineArgs);
string[] myArgs = envHelper.GetEnvironmentCommandLineArgs();
You can do it much more easily with Typemock Isolator.
It allows you to mock more than just interfaces. Take a look:
[TestMethod, Isolated]
public void TestFakeArgs()
{
//Arrange
Isolate.WhenCalled(() => Environment.GetCommandLineArgs()).WillReturn(new[] { "Your", "Fake", "Args" });
//Act
string[] args = Environment.GetCommandLineArgs();
//Assert
Assert.AreEqual("Your", args[0]);
Assert.AreEqual("Fake", args[0]);
Assert.AreEqual("Args", args[0]);
}
Mocking Environment.GetCommandLineArgs() took only one line:
Isolate.WhenCalled(() => Environment.GetCommandLineArgs()).WillReturn(new[] { "Your", "Fake", "Args" });
And you don't need to create new interfaces or change production code.
Hope it helps!
If you want something unit-testable, it should take its dependencies through an abstraction that is at least as strict as its implementation.
Usually you'd receive the dependencies through your class's constructor or through a property. The constructor is generally preferred, because a consumer of your class then knows at compile time what dependencies are needed.
public static void Main(string[] args)
{
// Validate the args are valid (not shown).
var config = new AppConfig();
config.Value1 = args[0];
config.Value2 = int.Parse(args[1]);
// etc....
}
public class MyService
{
private AppConfig _config;
public MyService(AppConfig config)
{
this._config = config;
}
}
I normally don't put a config object behind an interface, because it only has data, which is serializable. As long as it has no methods, I shouldn't need to replace it with a subclass with overridden behavior. Also, I can just new it up directly in my tests.
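For instance, a test can construct the config object inline, with no mocking at all (a sketch using the AppConfig shape from above):
[TestMethod]
public void MyService_Uses_Config_Values()
{
    // Plain data object: no interface, no mock, just new it up.
    var config = new AppConfig { Value1 = "test", Value2 = 42 };
    var service = new MyService(config);
    // ... assert on behavior that depends on the config values
}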
Also, I've never run into a situation where I wanted a service to depend on an abstraction of the command-line arguments themselves; why does the service need to know it's behind a command line? The closest I've gotten is using PowerArgs for easy parsing, but I consume that right in Main. What I normally do is something like read in the port number for a web server from the command line (I let the user choose so that I can run multiple copies of my web server on the same machine, perhaps different versions, or run automated tests while I'm debugging without conflicting ports) and parse it directly in my Main class. The web server then depends on the parsed command-line argument, in this case an int. That way, the fact that the configuration comes from a command line is irrelevant: I can move it to an App.config file later (which is also basically bound to the lifecycle of the process) if I prefer, and then extract common configuration into configSource files.
Instead of depending on an abstraction of the command line in general (which each consuming service would have to re-parse if you kept it pure), I abstract the command-line and App.config dependencies into strongly-typed objects: perhaps an app-level config class and a test-level config class, introducing more configuration objects as needed. The app wouldn't necessarily care about the test-level settings, while the E2E test infrastructure would need them in a separate part of the App.config: where to grab the client static files from, where to grab the build scripts in a test or developer environment to auto-generate/auto-update an index.html file, and so on.
I am attempting to follow this example and leverage a shim to remove the external dependency on a WCF service call which is called from the method I am executing the unit test on. Unlike the example, I generate my WCF client on the fly using code similar to this:
ChannelFactory<IReportBroker> factory = new ChannelFactory<IReportBroker>("ReportBrokerBasicHttpStreamed", new EndpointAddress(this.CurrentSecurityZoneConfigurationManager.ConfigurationSettings[Constants.ConfigurationKeys.ReportBrokerServiceUrl]));
IReportBroker proxy = factory.CreateChannel();
proxy.Execute(requestMessage);
How do I adapt that example to shim the proxy returned back by the CreateChannel method? I am assuming that in the ShimWCFService class, I need to add something like....
ShimChannelFactory<TService>.AllInstances.CreateChannel = (var1) => { return [instance of a mock object]};
However, I am unsure of how to associate a mock object of <TService> with that shim as the return value.
You need to shim the factory for every type parameter. Assume you have three service contracts: 'IService0', 'IService1' and 'IService2'.
Then you need to setup the shims like this:
ShimChannelFactory<IService0>.AllInstances.CreateChannel = (_) => { return new Service0Mock(); }
ShimChannelFactory<IService1>.AllInstances.CreateChannel = (_) => { return new Service1Mock(); }
ShimChannelFactory<IService2>.AllInstances.CreateChannel = (_) => { return new Service2Mock(); }
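If you have many contracts, a small helper can cut the repetition (a sketch, assuming each mock implements the corresponding contract):
private static void ShimCreateChannel<TService>(TService mock)
{
    // Every ChannelFactory<TService>.CreateChannel() call now returns the mock.
    ShimChannelFactory<TService>.AllInstances.CreateChannel = (_) => mock;
}

// Usage:
// ShimCreateChannel<IService0>(new Service0Mock());
// ShimCreateChannel<IService1>(new Service1Mock());
// ShimCreateChannel<IService2>(new Service2Mock());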
I'm working on a client-server application (.NET 4, WCF) that must support backwards compatibility. In other words, old clients should be compatible with new servers and vice versa. As a result, our client code is littered with statements such as:
if (_serverVersion > new Version(2, 1, 3))
{
//show/hide something or call method Foo()...
}
else
{
//show/hide something or call method Foo2()...
}
Obviously, this becomes somewhat of a maintenance nightmare. Luckily, we're allowed to break backwards compatibility with every minor release. When we get to a point where compatibility can be broken, I'd like to clean up the code that's in the form of the example above.
My questions:
(1) Is there a way to easily identify code blocks such as these when they are no longer "valid"? My initial thoughts were to somehow conditionally apply an Obsolete attribute based on the assembly's version. When we get to a new minor version, the Obsolete attribute would "kick-in", and all of a sudden we would have several compiler warnings pointing to these code blocks... Has anyone done anything like this? Or is there a better way to manage this?
(2) I cringe every time I see hard-coded versions such as new Version(2, 1, 3). What makes things worse is that during development, we don't know the final Version that's being released, so the version checks are based on the current build number + 1 when the developer adds the check. Although this works, it's not very clean. Any ideas on how this could be improved?
Thanks!
I would suggest at least creating a method where you can do the logic like this:
public static class ServerUtilities
{
    // Assumed to be populated once the server version has been negotiated.
    private static Version _serverVersion;

    public static bool IsValidToRun(Version desiredVersion)
    {
        if (_serverVersion >= desiredVersion)
            return true;
        else if (/* your other logic to determine if they're in some acceptable range */)
            return true;
        return false;
    }
}
Then, use like this:
if (ServerUtilities.IsValidToRun(new Version(2, 1, 3)))
{
// Do new logic
}
else
{
// Do old logic
}
If you need to centralize the versions, keep a static repository of feature-to-version mappings, so that you can call:
if (ServerUtilities.IsValidToRun(ServerFeatures.FancyFeatureRequiredVersion))
{
...
}
public static class ServerFeatures
{
public static Version FancyFeatureRequiredVersion
{
get { return new Version(2, 1, 3); }
}
}
An alternative would be to implement versioning of your service contracts: at that point you could leverage WCF's own features to ignore minor changes which do not break the client, as listed on this Versioning Strategies page.
Figure 1 there shows that adding new parameters to an operation signature, removing parameters from an operation signature, and adding new operations all leave existing clients unaffected.
In case there are still breaking changes, or your client has to support both versions (please correct me if I'm wrong, since I don't know your deployment strategy), you could offer different versions of the service on different endpoints and have a WCF client factory in your client code, which could then be configured to return the client for the appropriate endpoint.
At that point you have isolated the different implementations in different clients, which is probably cleaner and less of a maintenance nightmare than what you have now.
Very basic sample implementation to clear things up: suppose that we have two different contracts for our service, an old one and a new one.
[ServiceContract(Name = "Service", Namespace = "http://stackoverflow.com/2012/03")]
public interface IServiceOld
{
[OperationContract]
void DoWork();
}
[ServiceContract(Name = "Service", Namespace = "http://stackoverflow.com/2012/04")]
public interface IServiceNew
{
[OperationContract]
void DoWork();
[OperationContract]
void DoAdditionalWork();
}
Note how both services have the same name but different namespaces.
Let's handle the issue of having a client that must support both the new, extended service and the old one. Assume that we want to call the DoAdditionalWork method where we previously just called DoWork, and that we want to handle the situation client-side, because hypothetically DoAdditionalWork could require some extra arguments from the client. The configuration of the service could then look something like this:
<service name="ConsoleApplication1.Service">
<endpoint address="http://localhost:8732/test/new" binding="wsHttpBinding" contract="ConsoleApplication1.IServiceNew" />
<endpoint address="http://localhost:8732/test/old" binding="wsHttpBinding" contract="ConsoleApplication1.IServiceOld" />
...
</service>
Fine, we have the service side; now to the interesting part: we would like to communicate with the services using the same interface. In this case I will use the old one, but you might need to put an adapter in between. Ideally, in our client code, we would do something like this:
IServiceOld client = *Magic*
client.DoWork();
The magic in this case is a simple factory like this:
internal class ClientFactory
{
public IServiceOld GetClient()
{
string service = ConfigurationManager.AppSettings["Service"];
if(service == "Old")
return new ClientOld();
else if(service == "New")
return new ClientNew();
throw new NotImplementedException();
}
}
I delegated the decision of which client to use to the app.config, but you could insert your version check there. The implementation of ClientOld is just a regular WCF client for IServiceOld:
public class ClientOld : IServiceOld
{
private IServiceOld m_Client;
public ClientOld()
{
var factory = new ChannelFactory<IServiceOld>(new WSHttpBinding(), "http://localhost:8732/test/old");
m_Client = factory.CreateChannel();
}
public void DoWork()
{
m_Client.DoWork();
}
...
}
ClientNew instead implements the behavior we were wishing for, namely calling the DoAdditionalWork operation:
public class ClientNew : IServiceOld
{
private IServiceNew m_Client;
public ClientNew()
{
var factory = new ChannelFactory<IServiceNew>(new WSHttpBinding(), "http://localhost:8732/test/new");
m_Client = factory.CreateChannel();
}
public void DoWork()
{
m_Client.DoWork();
m_Client.DoAdditionalWork();
}
...
}
That's it, now our client can be used like in the following example:
var client = new ClientFactory().GetClient();
client.DoWork();
What have we achieved? The code using the client is abstracted from what additional work the actual WCF client has to do and the decision about which client to use is delegated to a factory. I hope some variation/expansion of this sample suits your needs.