"Simulate" command line argument in unit-tests - c#

I have some functionality that depends on command line arguments, and different arguments should lead to different results.
I can't directly "simulate" these arguments, since there is a chain of dependencies: I need to unit-test a XAML control, which depends on a view-model, which depends on an additional class, which fetches the command line arguments using Environment.GetCommandLineArgs, and I can't modify this last class to set the arguments manually instead of using GetCommandLineArgs.
So I'd like to know: is there any way to make Environment.GetCommandLineArgs return the value I want it to return, for a certain unit test?

You need to abstract Environment.GetCommandLineArgs, or whatever eventually calls it, behind something you can mock:
public interface ICommandLineInterface {
    string[] GetCommandLineArgs();
}

Which can eventually be implemented in a concrete class like:

public class CommandInterface : ICommandLineInterface {
    public string[] GetCommandLineArgs() {
        return Environment.GetCommandLineArgs();
    }
}
And it can be tested using Moq and FluentAssertions:
[TestMethod]
public void Test_Should_Simulate_Command_Line_Argument() {
    // Arrange
    string[] expectedArgs = new[] { "Hello", "World", "Fake", "Args" };
    var mockedCLI = new Mock<ICommandLineInterface>();
    mockedCLI.Setup(m => m.GetCommandLineArgs()).Returns(expectedArgs);
    var target = mockedCLI.Object;

    // Act
    var args = target.GetCommandLineArgs();

    // Assert
    args.Should().NotBeNull();
    args.Should().ContainInOrder(expectedArgs);
}
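In the code under test, the abstraction then arrives through the constructor, so the test above can hand the mock to the real consumer. A minimal sketch (MyViewModel and the /Task convention are hypothetical, not from the question):

public class MyViewModel
{
    private readonly ICommandLineInterface commandLine;

    public MyViewModel(ICommandLineInterface commandLine)
    {
        this.commandLine = commandLine;
    }

    // Example of behavior that depends on the command line arguments
    public bool HasTask
    {
        get { return commandLine.GetCommandLineArgs().Contains("/Task"); }
    }
}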

Since you are dealing with the static Environment class, why not wrap the outside dependency in an EnvironmentHelper class and then inject it?
Here is my suggestion:
public class EnvironmentHelper
{
    Func<string[]> getEnvironmentCommandLineArgs;

    // other dependency injections can be placed here

    public EnvironmentHelper(Func<string[]> getEnvironmentCommandLineArgs)
    {
        this.getEnvironmentCommandLineArgs = getEnvironmentCommandLineArgs;
    }

    public string[] GetEnvironmentCommandLineArgs()
    {
        return getEnvironmentCommandLineArgs();
    }
}
Here is the Mock method:
public static string[] GetFakeEnvironmentCommandLineArgs()
{
    return new string[] { "arg1", "arg2" };
}
In your source code:
EnvironmentHelper envHelper = new EnvironmentHelper(Environment.GetCommandLineArgs);
string[] myArgs = envHelper.GetEnvironmentCommandLineArgs();
In your unit test code:
EnvironmentHelper envHelper = new EnvironmentHelper(GetFakeEnvironmentCommandLineArgs);
string[] myArgs = envHelper.GetEnvironmentCommandLineArgs();
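Wrapped in an actual test, a minimal sketch (assuming MSTest) might look like:

[TestMethod]
public void GetEnvironmentCommandLineArgs_Returns_Injected_Args()
{
    // Arrange - inject the fake instead of Environment.GetCommandLineArgs
    EnvironmentHelper envHelper = new EnvironmentHelper(GetFakeEnvironmentCommandLineArgs);

    // Act
    string[] myArgs = envHelper.GetEnvironmentCommandLineArgs();

    // Assert
    CollectionAssert.AreEqual(new[] { "arg1", "arg2" }, myArgs);
}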

You can do it much more easily with Typemock Isolator.
It allows you to mock more than just interfaces. Take a look:
[TestMethod, Isolated]
public void TestFakeArgs()
{
    //Arrange
    Isolate.WhenCalled(() => Environment.GetCommandLineArgs()).WillReturn(new[] { "Your", "Fake", "Args" });

    //Act
    string[] args = Environment.GetCommandLineArgs();

    //Assert
    Assert.AreEqual("Your", args[0]);
    Assert.AreEqual("Fake", args[1]);
    Assert.AreEqual("Args", args[2]);
}
Mocking Environment.GetCommandLineArgs() took only one line:
Isolate.WhenCalled(() => Environment.GetCommandLineArgs()).WillReturn(new[] { "Your", "Fake", "Args" });
And you don't need to create new interfaces or change production code.
Hope it helps!

If you want something unit-testable, it should depend on an abstraction that is at least as strict as its implementation.
Usually you'd get the dependencies through your class's constructor or a property. The constructor is generally preferred, because then a consumer of your class knows at compile-time what dependencies are needed.
public static void Main(string[] args)
{
    // Validate the args are valid (not shown).
    var config = new AppConfig();
    config.Value1 = args[0];
    config.Value2 = int.Parse(args[1]);
    // etc....
}

public class MyService
{
    private AppConfig _config;

    public MyService(AppConfig config)
    {
        this._config = config;
    }
}
I normally don't put a config object behind an interface, because it only has data - which is serializable. As long as it has no methods, I shouldn't need to replace it with a subclass with overridden behavior, and I can just new it up directly in my tests.
Also, I've never run into a situation where I wanted a service to depend on an abstraction of the command line arguments themselves - why does the service need to know it's behind a command line? The closest I've gotten is using PowerArgs for easy parsing, but I consume that right in Main. What I normally do is read, say, the port number for a web server from the command line (I let the user choose it, so that I can run multiple copies of my web server on the same machine - maybe different versions, or so I can run automated tests while I'm debugging without conflicting ports) and parse it directly in my Main class. Then my web server depends on the parsed command-line argument, in this case an int. That way, the fact that the configuration comes from a command line is irrelevant - I can move it to an App.config file later (which is also basically bound to the lifecycle of the process) if I prefer, and then extract common configuration into configSource files.
Instead of depending on an abstraction of the command line in general (which each consuming service would have to re-parse if you kept it pure), I usually abstract the command-line and App.config dependencies into strongly-typed objects - maybe an app-level config class and a test-level config class, introducing multiple configuration objects as needed. (The app wouldn't necessarily care about the test-level settings, while the E2E test infrastructure would need them in a separate part of the App.config: where to grab the client static files from, where to grab the build scripts in a test or developer environment to auto-generate/auto-update an index.html file, etc.)
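A hedged sketch of that port-number pattern (WebServer is a hypothetical class standing in for whatever consumes the parsed value):

public static void Main(string[] args)
{
    // e.g. "myapp.exe 8080" - validation omitted for brevity
    int port = int.Parse(args[0]);

    // The server depends on an int, not on the command line; the value
    // could just as well come from App.config later.
    var server = new WebServer(port);
    server.Start();
}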


Decoupling issue - improvements and alternatives

I'm learning the SOLID principles - especially Inversion of Control/DI/decoupling - and as I was reviewing my code, one method (see below) caught my attention.
This method is called by anything that needs to read the JSON file, and it accepts string values that are used to look up entries in that file. As you can see (I simplified the code and excluded the exception handling for the sake of this topic), there are a lot of initializations or dependencies happening, and I'm not sure where to start.
Could this method/scenario be a good candidate to start with? Which parts do you think I should retain, and which need to be decoupled?
Thanks.
public async Task<object> ReadJsonByKey(string jsonPath, string jsonKey)
{
    // First - is it okay to have an initialization at this stage?
    var value = new object();

    // Second - is this fine to have in the scope of this method?
    using (TextReader reader = File.OpenText(jsonPath))
    {
        // Third - calling JObject, which accepts a new instance of JsonTextReader
        var jObject = await JObject.LoadAsync(new JsonTextReader(reader));
        value = jObject.SelectToken(jsonKey);
    }

    return value;
}
The reason I also ask this is that (based on accepted standards) loosely-coupled code can be easily tested - i.e., unit tested:
[UnitTestSuite]
[TestCase1] // Method should only accept ".json" or ".txt" files
[TestCase2] // JsonPath is a valid file-system path
[TestCase3] // Method should retrieve a node value based on a specific json file and key
[TestCase4] // Json-text file is not empty
It looks like you're trying to decouple an infrastructural concern from your application code.
Assuming that's the case, you need a class that is responsible for reading the data:
public interface IDataReader
{
    Task<object> ReadJsonByKey(string jsonPath, string jsonKey);
}
The implementation of which would be your above code:
public class DataReader : IDataReader
{
    public async Task<object> ReadJsonByKey(string jsonPath, string jsonKey)
    {
        var value = new object();

        using (TextReader reader = File.OpenText(jsonPath))
        {
            var jObject = await JObject.LoadAsync(new JsonTextReader(reader));
            value = jObject.SelectToken(jsonKey);
        }

        return value;
    }
}
However, this class is now doing both file reading and deserialization, so you could separate it further:
public class DataReader : IDataReader
{
    IDeserializer _deserializer;

    public DataReader(IDeserializer deserializer)
    {
        _deserializer = deserializer;
    }

    public async Task<object> ReadJsonByKey(string jsonPath, string jsonKey)
    {
        var json = File.ReadAllText(jsonPath);
        return _deserializer.Deserialize(json, jsonKey);
    }
}
This would mean that you could now unit test your IDeserializer independently of the file-system dependency.
However, the main benefit should be that you can now mock the IDataReader implementation when unit testing your application code.
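For completeness, IDeserializer is never defined above; a minimal sketch consistent with the DataReader shown (the Json.NET-based implementation is an assumption) could be:

public interface IDeserializer
{
    object Deserialize(string json, string jsonKey);
}

public class JsonNetDeserializer : IDeserializer
{
    public object Deserialize(string json, string jsonKey)
    {
        // Parse the document and return the token at the given path
        return JObject.Parse(json).SelectToken(jsonKey);
    }
}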
Make the function take a TextReader instead:
public async Task<object> ReadJsonByKey(TextReader reader, string jsonKey)
Now the function works with any TextReader implementation, so you can pass one that reads from a file, from memory, or from any other data source.
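For example, a test could then feed the method JSON from memory; a minimal sketch (assuming MSTest and that ReadJsonByKey is reachable from the test):

[TestMethod]
public async Task ReadJsonByKey_Works_With_In_Memory_Json()
{
    using (TextReader reader = new StringReader("{ \"name\": \"test\" }"))
    {
        var result = await ReadJsonByKey(reader, "name");
        Assert.AreEqual("test", result.ToString());
    }
}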
The only thing that prevents you from unit-testing this properly is the File reference, which is static. You won't be able to provide the method with a file, because it would have to physically exist. There are two ways you can go about solving this.
First, if it's possible, you could pass something else to the method rather than a path - a FileStream, for example.
Second, and arguably better, you could abstract the file system (I recommend System.IO.Abstractions and the related TestingHelpers package) into a private field and pass the dependency in via constructor injection:
private readonly IFileSystem fileSystem;

public MyClass(IFileSystem fileSystem)
{
    this.fileSystem = fileSystem;
}

And then in your method you'd use
fileSystem.File.OpenText(jsonPath);
This should allow you to unit-test the method with ease, by passing a MockFileSystem and creating a JSON file in memory for the method to read. Unit-testability is actually a good indicator that your method is maintainable and has a well-defined purpose - if you can test it easily with a not-so-complicated unit test, then it's probably good. If you can't, then it's definitely bad.
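A minimal sketch of such a test, assuming MSTest and that MyClass above has the ReadJsonByKey method rewritten to use fileSystem.File.OpenText (the file path and JSON content are made up):

[TestMethod]
public async Task ReadJsonByKey_Returns_Node_Value()
{
    // Arrange - an in-memory file system containing one JSON file
    var fileSystem = new MockFileSystem(new Dictionary<string, MockFileData>
    {
        { @"c:\data\config.json", new MockFileData("{ \"name\": \"test\" }") }
    });
    var sut = new MyClass(fileSystem);

    // Act
    var result = await sut.ReadJsonByKey(@"c:\data\config.json", "name");

    // Assert
    Assert.AreEqual("test", result.ToString());
}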

Unit-test WCF contracts match for sync / async?

WCF makes it easy to call services synchronously or asynchronously, regardless of how the service is implemented. To accommodate clients using ChannelFactory, services can even define separate sync/async contract interfaces. For example:
public interface IFooService
{
    int Bar();
}

[ServiceContract(Name = "IFooService")]
public interface IAsyncFooService
{
    Task<int> BarAsync();
}
This allows the client to reference either contract version, and WCF translates the actual API calls automatically.
One drawback to providing both contract versions is that they must be kept in sync. If you forget to update one, the client may receive a contract-mismatch exception at runtime.
Is there an easy way to unit test the interfaces to ensure they match from a WCF metadata perspective?
You can retrieve the ContractDescription and use WsdlExporter to generate the WSDL. The output MetadataSet is XML-serializable, so you can compare the representations of each contract version to ensure they match:
[TestMethod]
public void ContractsMatch()
{
    // Arrange
    string expectedWsdl = this.GetContractString<IFooService>();

    // Act
    string actualWsdl = this.GetContractString<IAsyncFooService>();

    // Assert
    Assert.AreEqual(expectedWsdl, actualWsdl);
}

private string GetContractString<TContract>()
{
    ContractDescription description = ContractDescription.GetContract(typeof(TContract));
    WsdlExporter wsdlExporter = new WsdlExporter();
    wsdlExporter.ExportContract(description);

    if (wsdlExporter.Errors.Any())
    {
        throw new InvalidOperationException(string.Format(
            "Failed to export WSDL: {0}",
            string.Join(", ", wsdlExporter.Errors.Select(e => e.Message))));
    }

    MetadataSet wsdlMetadata = wsdlExporter.GetGeneratedMetadata();
    string contractStr;
    StringBuilder stringBuilder = new StringBuilder();

    using (XmlWriter xmlWriter = XmlWriter.Create(stringBuilder))
    {
        wsdlMetadata.WriteTo(xmlWriter);
        xmlWriter.Flush(); // flush the writer so the builder holds the full document
        contractStr = stringBuilder.ToString();
    }

    return contractStr;
}
Your own answer is great. As BartoszKP points out, it is more of an integration test, but this might be the best fit. You could argue that comparing two units (interfaces) to each other is not a unit test by definition.
The advantage of your approach is that you can be sure you're verifying what WCF makes of your classes. If you only want to test your own code, you could do something like this:
[TestMethod]
public void ContractsMatch()
{
    var asyncMethodsTransformed = typeof(IAsyncFooService)
        .GetMethods()
        .Select(mi => new
        {
            ReturnType = mi.ReturnType,
            Name = mi.Name,
            Parameters = mi.GetParameters()
        });

    var syncMethodsTransformed = typeof(IFooService)
        .GetMethods()
        .Select(mi => new
        {
            ReturnType = WrapInTask(mi.ReturnType),
            Name = Asyncify(mi.Name),
            Parameters = mi.GetParameters()
        });

    Assert.That(asyncMethodsTransformed, Is.EquivalentTo(syncMethodsTransformed));
}
The idea is that for each method in your IFooService you expect a method with a similar signature, under clearly defined transformations:
The method name must have an "Async" suffix.
The return type must be a Task of the type found in the sync version.
The WrapInTask and Asyncify helpers are left as an exercise :-) If you like this suggestion I can expand on them.
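For reference, a minimal sketch of the two helpers under the transformations above (note that the Parameters comparison may also need projecting to parameter types, since ParameterInfo arrays compare by reference):

private static Type WrapInTask(Type returnType)
{
    // void maps to a plain Task; T maps to Task<T>
    return returnType == typeof(void)
        ? typeof(Task)
        : typeof(Task<>).MakeGenericType(returnType);
}

private static string Asyncify(string methodName)
{
    return methodName + "Async";
}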
By using a test like that you might constrain the code more than WCF does (I don't know the Async support very well). But even if it does you might want that to ensure some code consistency.

WPF Command Line Argument Routing

I have a WPF application that I'm starting to develop. I have 40 or so methods that are accessible through the UI, but they also need to be executed by passing parameters via the command line.
Currently I have the following, which allows me to catch the arguments in App.xaml.cs...
public partial class App : Application
{
    string[] args = Environment.GetCommandLineArgs();
    Dictionary<string, string> dictionary = new Dictionary<string, string>();

    private void Application_Startup(object sender, StartupEventArgs e)
    {
        for (int index = 1; index < args.Length; index += 2)
        {
            dictionary.Add(args[index], args[index + 1]);
        }

        if (dictionary.ContainsKey("/Task"))
        {
            MessageBox.Show("There is a Task");
        }
    }
}
I am looking to pass an argument at the start of every call through the command line. If I pass
/Task ThisIsTheTask
I can read this from the dictionary and then execute the related method.
My question is: what is the best way of "routing" the task parameter to a specific method? I will also be passing additional parameters after the task that will need to be passed to the method.
It could be considered an implementation of the service-locator anti-pattern, but one simple approach would be to have something like the following:
private readonly Dictionary<string, Action<string[]>> commands = new Dictionary<string, Action<string[]>>
{
    { "Task1", args => Task1Method(args[0], Int32.Parse(args[1])) }
};

private static void Task1Method(string firstArg, int secondArg)
{
}
Your code can then locate an Action<string[]> for the task specified on the command line, and pass the remaining parameters to the Action, e.g.
var commandLineArgs = Environment.GetCommandLineArgs();
var taskName = commandLineArgs[1];

// Locate the action to execute for the task
Action<string[]> action;
if (!commands.TryGetValue(taskName, out action))
{
    throw new NotSupportedException("Task not found");
}

// Pass only the remaining arguments (copy from index 2 onwards;
// CopyTo would try to copy the whole source array)
var actionArgs = new string[commandLineArgs.Length - 2];
Array.Copy(commandLineArgs, 2, actionArgs, 0, actionArgs.Length);

// Actually invoke the handler
action(actionArgs);
If you are able to use third-party, open source libraries I would suggest taking a look at ManyConsole, it is available via NuGet here.
ManyConsole allows you to define ConsoleCommand implementations (see here for an example implementation), which can have many parameters. You are then able to use a ConsoleCommandDispatcher to route to the appropriate ConsoleCommand implementation based upon the command-line arguments (see here for an example).
I am in no way affiliated with ManyConsole, but I have used the library and found it to be very effective.
I'd suggest one (or more) properties on the application class of your program that expose them. Access at runtime can then be done with something like
(Application.Current as App).MyTask
which can then be further wrapped for convenience.
Also, you can write your own "Main" method in WPF too - that way you have easier access to the parameters array and can do processing before WPF starts up if you need to.
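For reference, a hedged sketch of such a custom entry point (it assumes you change the build action of App.xaml from ApplicationDefinition to Page, so that the auto-generated Main is no longer emitted):

public partial class App : Application
{
    [STAThread]
    public static void Main(string[] args)
    {
        // Inspect or preprocess args here, before any WPF startup work
        var app = new App();
        app.InitializeComponent();
        app.Run();
    }
}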

Visual Studio 2012 Fakes - How do I verify a method was called?

Coming from using Moq, I'm used to being able to Setup mocks as Verifiable. As you know, this is handy when you want to ensure your code under test actually called a method on a dependency.
e.g. in Moq:
// Set up the Moq mock to be verified
mockDependency.Setup(x => x.SomethingImportantToKnow()).Verifiable("Darn, this did not get called.");
target = new ClassUnderTest(mockDependency);
// Act on the object under test, using the mock dependency
target.DoThingsThatShouldUseTheDependency();
// Verify the mock was called.
mockDependency.Verify();
I've been using VS2012's "Fakes Framework" (for lack of knowing a better name for it), which is quite slick and I'm starting to prefer it to Moq, as it seems a bit more expressive and makes Shims easy. However, I can't figure out how to reproduce behavior similar to Moq's Verifiable/Verify implementation. I found the InstanceObserver property on the Stubs, which sounds like it might be what I want, but there's no documentation as of 9/4/12, and I'm not clear how to use it, if it's even the right thing.
Can anyone point me in the right direction on doing something like Moq Verifiable/Verify with VS2012's Fakes?
-- 9/5/12 Edit --
I realized a solution to the problem, but I'd still like to know if there's a built-in way to do it with VS2012 Fakes. I'll leave this open a little while for someone to claim if they can. Here's the basic idea I have (apologies if it doesn't compile).
[TestClass]
public class ClassUnderTestTests
{
    private class Arrangements
    {
        public ClassUnderTest Target;
        public bool SomethingImportantToKnowWasCalled = false; // Create a flag!

        public Arrangements()
        {
            var mockDependency = new Fakes.StubIDependency // Fakes sweetness.
            {
                SomethingImportantToKnow = () => { SomethingImportantToKnowWasCalled = true; } // Set the flag!
            };

            Target = new ClassUnderTest(mockDependency);
        }
    }

    [TestMethod]
    public void DoThingThatShouldUseTheDependency_Condition_Result()
    {
        // arrange
        var arrangements = new Arrangements();

        // act
        arrangements.Target.DoThingThatShouldUseTheDependency();

        // assert
        Assert.IsTrue(arrangements.SomethingImportantToKnowWasCalled); // Voila!
    }
}
-- 9/5/12 End edit --
Since I've heard no better solutions, I'm calling the edits from 9/5/12 the best approach for now.
EDIT
Found the magic article that describes best practices. http://www.peterprovost.org/blog/2012/11/29/visual-studio-2012-fakes-part-3/
Although it might make sense in complex scenarios, you don't have to use a separate (Arrangements) class to store information about methods being called. Here is a simpler way of verifying that a method was called with Fakes, which stores the information in a local variable instead of a field of a separate class. Like your example it implies that ClassUnderTest calls a method of the IDependency interface.
[TestMethod]
public void DoThingThatShouldUseTheDependency_Condition_Result()
{
    // arrange
    bool dependencyCalled = false;
    var dependency = new Fakes.StubIDependency()
    {
        DoStuff = () => dependencyCalled = true
    };
    var target = new ClassUnderTest(dependency);

    // act
    target.DoStuff();

    // assert
    Assert.IsTrue(dependencyCalled);
}
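If you need something closer to Moq's Times verification, the same delegate trick can count calls instead of setting a flag - a minimal sketch:

int callCount = 0;
var dependency = new Fakes.StubIDependency()
{
    DoStuff = () => callCount++
};
var target = new ClassUnderTest(dependency);

target.DoStuff();
target.DoStuff();

Assert.AreEqual(2, callCount); // the dependency was called exactly twice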

Client compatibility check management

I'm working on a client-server application (.NET 4, WCF) that must support backwards compatibility. In other words, old clients should be compatible with new servers and vice versa. As a result, our client code is littered with statements such as:
if (_serverVersion > new Version(2, 1, 3))
{
    //show/hide something or call method Foo()...
}
else
{
    //show/hide something or call method Foo2()...
}
Obviously, this becomes somewhat of a maintenance nightmare. Luckily, we're allowed to break backwards compatibility with every minor release. When we get to a point where compatibility can be broken, I'd like to clean up the code that's in the form of the example above.
My questions:
(1) Is there a way to easily identify code blocks such as these when they are no longer "valid"? My initial thoughts were to somehow conditionally apply an Obsolete attribute based on the assembly's version. When we get to a new minor version, the Obsolete attribute would "kick-in", and all of a sudden we would have several compiler warnings pointing to these code blocks... Has anyone done anything like this? Or is there a better way to manage this?
(2) I cringe every time I see hard-coded versions such as new Version(2, 1, 3). What makes things worse is that during development, we don't know the final Version that's being released, so the version checks are based on the current build number + 1 when the developer adds the check. Although this works, it's not very clean. Any ideas on how this could be improved?
Thanks!
I would suggest at least creating a method to centralize this logic, like this:

public static class ServerUtilities
{
    // Populated elsewhere, e.g. from the server handshake (assumption).
    private static Version _serverVersion;

    public static bool IsValidToRun(Version desiredVersion)
    {
        if (_serverVersion >= desiredVersion)
            return true;
        else if (/* your other logic to determine if they're in some acceptable range */)
            return true;

        return false;
    }
}
Then use it like this:
if (ServerUtilities.IsValidToRun(new Version(2, 1, 3)))
{
    // Do new logic
}
else
{
    // Do old logic
}
If you need to centralize the versions, keep a static repository of feature-to-version mappings, so that you can call:
if (ServerUtilities.IsValidToRun(ServerFeatures.FancyFeatureRequiredVersion))
{
    ...
}

public static class ServerFeatures
{
    public static Version FancyFeatureRequiredVersion
    {
        get { return new Version(2, 1, 3); }
    }
}
An alternative would be to implement versioning of your service contracts: at that point you could leverage WCF's own features to ignore minor changes which do not break the client, as listed on this Versioning Strategies page.
Figure 1 there shows that adding new parameters to an operation signature, removing parameters from an operation signature, and adding new operations all leave the client unaffected.
In case there are still breaking changes, or your client has to support both versions (please correct me if I'm wrong, since I don't know your deployment strategy), you could offer different versions of the service on different endpoints and have a WCF client factory in your client code, which could then be configured to return the client for the appropriate endpoint.
At this point you have isolated the different implementations in different clients, which is probably cleaner and less a maintenance nightmare than it is now.
Very basic sample implementation to clear things up: suppose that we have two different contracts for our service, an old one and a new one.
[ServiceContract(Name = "Service", Namespace = "http://stackoverflow.com/2012/03")]
public interface IServiceOld
{
[OperationContract]
void DoWork();
}
[ServiceContract(Name = "Service", Namespace = "http://stackoverflow.com/2012/04")]
public interface IServiceNew
{
[OperationContract]
void DoWork();
[OperationContract]
void DoAdditionalWork();
}
Note how both services have the same name but different namespaces.
Let's handle the issue of a client that has to support both the new, extended service and the old one. Let's assume that we want to call the DoAdditionalWork method where we previously just called DoWork, and that we want to handle the situation client-side, because hypothetically DoAdditionalWork could require some extra arguments from the client. The configuration of the service could then be something like this:
<service name="ConsoleApplication1.Service">
<endpoint address="http://localhost:8732/test/new" binding="wsHttpBinding" contract="ConsoleApplication1.IServiceNew" />
<endpoint address="http://localhost:8732/test/old" binding="wsHttpBinding" contract="ConsoleApplication1.IServiceOld" />
...
</service>
Fine, we have the service side; now to the interesting part: we would like to communicate with the services using the same interface. In this case I will use the old one, but you might need to put an adapter in between. Ideally, in our client code, we would do something like this:
IServiceOld client = *Magic*
client.DoWork();
The magic in this case is a simple factory like this:
internal class ClientFactory
{
public IServiceOld GetClient()
{
string service = ConfigurationManager.AppSettings["Service"];
if(service == "Old")
return new ClientOld();
else if(service == "New")
return new ClientNew();
throw new NotImplementedException();
}
}
I delegated the decision of which client to use to the app.config, but you could insert your version check there. The implementation of ClientOld is just a regular WCF client for IServiceOld:
public class ClientOld : IServiceOld
{
    private IServiceOld m_Client;

    public ClientOld()
    {
        var factory = new ChannelFactory<IServiceOld>(new WSHttpBinding(), "http://localhost:8732/test/old");
        m_Client = factory.CreateChannel();
    }

    public void DoWork()
    {
        m_Client.DoWork();
    }

    ...
}
ClientNew instead implements the behavior we were wishing for, namely calling the DoAdditionalWork operation:
public class ClientNew : IServiceOld
{
    private IServiceNew m_Client;

    public ClientNew()
    {
        var factory = new ChannelFactory<IServiceNew>(new WSHttpBinding(), "http://localhost:8732/test/new");
        m_Client = factory.CreateChannel();
    }

    public void DoWork()
    {
        m_Client.DoWork();
        m_Client.DoAdditionalWork();
    }

    ...
}
That's it; now our client can be used as in the following example:
var client = new ClientFactory().GetClient();
client.DoWork();
What have we achieved? The code using the client is abstracted from whatever additional work the actual WCF client has to do, and the decision about which client to use is delegated to a factory. I hope some variation/expansion of this sample suits your needs.
