I am making generic validators for checking input:
Interface:
public interface IInputValidator
{
bool CanHandle<T>();
bool Validate<T>(string? input, out T result);
}
Implementation:
public class IntegerValidator : IInputValidator
{
public bool CanHandle<T>()
{
return typeof(T) == typeof(int);
}
public bool Validate<T>(string? input, out T result)
{
var isValid = int.TryParse(input, out var res);
result = (T)(object)res;
return isValid;
}
}
Then I grab all the validators I have and inject them like so (it feels convenient that the interface itself is not generic, so I don't have to inject the validators one by one and can group them in a single collection):
private readonly IEnumerable<IInputValidator> _inputValidators;
public CallerClass(IEnumerable<IInputValidator> inputValidators)
{
_inputValidators = inputValidators;
}
And call it like:
var validator = _inputValidators.First(r => r.CanHandle<int>());
var isInputValid = validator.Validate(userInput, out int id);
It all looks fine except for this line in the implementation:
result = (T)(object)res;
I feel like something is wrong here, but I can't figure out how to make it better. It does work as it is, though.
The core issue is that you are trying to combine the resolution of the appropriate validator and the action of that validator into the same generic interface.
If you are willing to separate the resolver and validator functionality into two interfaces:
public interface IInputValidatorResolver
{
bool CanHandle<T>();
IInputValidator<T> GetValidator<T>();
}
public interface IInputValidator<T>
{
bool Validate(string? input, out T result);
}
you can work with IInputValidatorResolver instances in your CallerClass contract to resolve the appropriate validator and make a strongly-typed call to it without an object cast. The resolver implementations can create a cache that casts your validator to a generic IInputValidator<T> instance.
public class IntegerValidatorResolver : IInputValidatorResolver
{
public bool CanHandle<T>() => typeof(T) == typeof(int);
public IInputValidator<T> GetValidator<T>() => Cache<T>.Validator;
private static class Cache<T>
{
public static readonly IInputValidator<T> Validator = BuildValidator();
private static IInputValidator<T> BuildValidator() => ((IInputValidator<T>)new IntegerValidator());
}
}
public class LongValidatorResolver : IInputValidatorResolver
{
public bool CanHandle<T>() => typeof(T) == typeof(long);
public IInputValidator<T> GetValidator<T>() => Cache<T>.Validator;
private static class Cache<T>
{
public static readonly IInputValidator<T> Validator = BuildValidator();
private static IInputValidator<T> BuildValidator() => ((IInputValidator<T>)new LongValidator());
}
}
public class IntegerValidator : IInputValidator<int>
{
public bool Validate(string? input, out int result) => int.TryParse(input, out result);
}
public class LongValidator : IInputValidator<long>
{
public bool Validate(string? input, out long result) => long.TryParse(input, out result);
}
and you can test it with the following:
IEnumerable<IInputValidatorResolver> validatorResolvers = new List<IInputValidatorResolver> { new IntegerValidatorResolver(), new LongValidatorResolver() };
var intValidator = validatorResolvers.First(x => x.CanHandle<int>()).GetValidator<int>();
var isIntValid = intValidator.Validate(long.MaxValue.ToString(), out int intResult);
Console.WriteLine(isIntValid);
Console.WriteLine(intResult);
var longValidator = validatorResolvers.First(x => x.CanHandle<long>()).GetValidator<long>();
var isLongValid = longValidator.Validate(long.MaxValue.ToString(), out long longResult);
Console.WriteLine(isLongValid);
Console.WriteLine(longResult);
That said, this creates an awkward contract where if you do NOT perform a check with CanHandle<T> first, your call to GetValidator<T> can throw an exception. In addition, in either this implementation or your current implementation, you have to loop through resolvers/validators to find the appropriate instance, which is unnecessarily wasteful.
As a result, it may make more sense to have a single IInputValidatorResolver instance that knows how to resolve the appropriate validator based on the type of T, without a CanHandle<T>() check.
public interface IInputValidatorResolver
{
IInputValidator<T> GetValidator<T>();
}
public class ValidatorResolver : IInputValidatorResolver
{
public IInputValidator<T> GetValidator<T>() => Cache<T>.Validator;
private static class Cache<T>
{
public static readonly IInputValidator<T> Validator = BuildValidator();
private static IInputValidator<T> BuildValidator()
{
if (typeof(T) == typeof(int))
{
return ((IInputValidator<T>)new IntegerValidator());
}
else if (typeof(T) == typeof(long))
{
return ((IInputValidator<T>)new LongValidator());
}
else
{
throw new ArgumentException($"{typeof(T).FullName} does not have a registered validator.");
}
}
}
}
public interface IInputValidator<T>
{
bool Validate(string? input, out T result);
}
public class IntegerValidator : IInputValidator<int>
{
public bool Validate(string? input, out int result) => int.TryParse(input, out result);
}
public class LongValidator : IInputValidator<long>
{
public bool Validate(string? input, out long result) => long.TryParse(input, out result);
}
This allows for a much cleaner API and far less registration and enumeration:
IInputValidatorResolver resolver = new ValidatorResolver();
var intValidator = resolver.GetValidator<int>();
var isIntValid = intValidator.Validate(long.MaxValue.ToString(), out int intResult);
Console.WriteLine(isIntValid);
Console.WriteLine(intResult);
var longValidator = resolver.GetValidator<long>();
var isLongValid = longValidator.Validate(long.MaxValue.ToString(), out long longResult);
Console.WriteLine(isLongValid);
Console.WriteLine(longResult);
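For reference, long.MaxValue does not fit in an int, so the int validation fails while the long validation succeeds; the expected console output is:
False
0
True
9223372036854775807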
UPDATE
It looks like you want a purely constructor-injection-driven solution. You can accomplish this by introducing a non-generic IInputValidator interface, registering the validators as an IEnumerable<IInputValidator>, and injecting that collection into the resolver instance.
The IInputValidator interface is responsible for the CanHandle<T>() method that is checked prior to casting to IInputValidator<T>.
public interface IInputValidatorResolver
{
IInputValidator<T> GetValidator<T>();
}
public class ValidatorResolver : IInputValidatorResolver
{
private readonly IEnumerable<IInputValidator> _validators;
public ValidatorResolver(IEnumerable<IInputValidator> validators)
{
_validators = validators;
}
public IInputValidator<T> GetValidator<T>()
{
foreach (var validator in _validators)
{
if (validator.CanHandle<T>())
{
return (IInputValidator<T>)validator;
}
}
throw new ArgumentException($"{typeof(T).FullName} does not have a registered validator.");
}
}
public interface IInputValidator
{
bool CanHandle<TInput>();
}
public interface IInputValidator<T> : IInputValidator
{
bool Validate(string? input, out T result);
}
public class IntegerValidator : IInputValidator<int>
{
public bool CanHandle<T>() => typeof(T) == typeof(int);
public bool Validate(string? input, out int result) => int.TryParse(input, out result);
}
public class LongValidator : IInputValidator<long>
{
public bool CanHandle<T>() => typeof(T) == typeof(long);
public bool Validate(string? input, out long result) => long.TryParse(input, out result);
}
You can test this behavior with the following:
var servicesCollection = new ServiceCollection();
servicesCollection.AddTransient(typeof(IEnumerable<IInputValidator>), s =>
{
return new List<IInputValidator>
{
new IntegerValidator(),
new LongValidator()
};
});
servicesCollection.AddTransient<IInputValidatorResolver, ValidatorResolver>();
var serviceProvider = servicesCollection.BuildServiceProvider();
var resolver = serviceProvider.GetService<IInputValidatorResolver>();
var intValidator = resolver.GetValidator<int>();
var isIntValid = intValidator.Validate(long.MaxValue.ToString(), out int intResult);
Console.WriteLine(isIntValid);
Console.WriteLine(intResult);
var longValidator = resolver.GetValidator<long>();
var isLongValid = longValidator.Validate(long.MaxValue.ToString(), out long longResult);
Console.WriteLine(isLongValid);
Console.WriteLine(longResult);
With this approach, you don't strictly need the ValidatorResolver; it is simply a way to encapsulate the logic of GetValidator<T>. You could just as easily inject IEnumerable<IInputValidator> into your consuming classes and perform the cast each time you need to use it.
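For example, a consuming class could do the lookup and cast itself. Here is a minimal sketch (CallerClass and TryParseId are illustrative names, and it assumes using System.Linq):
public class CallerClass
{
    private readonly IEnumerable<IInputValidator> _validators;
    public CallerClass(IEnumerable<IInputValidator> validators)
    {
        _validators = validators;
    }
    public bool TryParseId(string? userInput, out int id)
    {
        // Find the validator that reports it can handle int, then cast to the generic interface.
        var validator = (IInputValidator<int>)_validators.First(v => v.CanHandle<int>());
        return validator.Validate(userInput, out id);
    }
}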
Another option is to use Autofac's IComponentContext:
using Autofac;
using TransactionStorage.Interface;
namespace TransactionStorage.Core
{
public class InputResolver : IInputResolver
{
private readonly IComponentContext _context;
public InputResolver (IComponentContext context)
{
_context = context;
}
public bool Validate<T>(string? userInput, out T result) where T : struct
{
var validator = _context.Resolve<IInputValidator<T>>();
return validator.Validate(userInput, out result);
}
}
}
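For completeness, the Autofac registrations for this option could look roughly like the following sketch (it assumes the generic IInputValidator<T> validators from above and that IInputResolver exposes the Validate method shown):
var builder = new ContainerBuilder();
// Register each closed validator against its generic interface so the resolver can Resolve<IInputValidator<T>>().
builder.RegisterType<IntegerValidator>().As<IInputValidator<int>>();
builder.RegisterType<LongValidator>().As<IInputValidator<long>>();
// InputResolver receives IComponentContext from Autofac automatically.
builder.RegisterType<InputResolver>().As<IInputResolver>();
var container = builder.Build();
var resolver = container.Resolve<IInputResolver>();
var isValid = resolver.Validate("42", out int value);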
Related
I've built a simple extensible computation framework, where each class represents a different function for the framework.
This is a quick example of what I did:
BaseFunction:
namespace MyNamespace
{
public abstract class BaseFunction
{
public abstract string Name { get; }
public abstract int Index { get; }
public long Execute()
{
Execute(ReadInput() /* out of scope */, out long result);
return result;
}
internal abstract void Execute(string input, out long rResult);
}
}
SampleFunction:
namespace MyNamespace.Code
{
public class SampleFunction: BaseFunction
{
public override string Name => "Sample Function";
public override int Index => 1;
internal override void Execute(string input, out long result)
{
result = 0;
}
}
}
Using reflection, the framework also provides a CLI where the user can select a function and run it.
This is how all the functions are retrieved:
public static IEnumerable<BaseFunction> Functions()
{
return GetTypesInNamespace(Assembly.GetExecutingAssembly(), "MyNamespace.Code")
.Where(type => type.Name != "BaseFunction")
.Select(type => (BaseFunction)Activator.CreateInstance(type))
.OrderBy(type => type.Index);
}
and this is how the CLI is built:
var menu = new EasyConsole.Menu();
foreach (var function in FunctionsUtils.Functions())
{
menu.Add(function.Name, () => function.Execute());
}
The framework works fine but, as you can see, everything is a long now, and this brings us to my issue: I'd like to make the BaseFunction class generic, so that I can have different functions returning different types of values.
However, changing BaseFunction to BaseFunction<TResult> breaks the Functions method, as I can no longer return an IEnumerable<BaseFunction>.
The logical next step is to add an interface, make BaseFunction implement the interface, and add the generic parameter to BaseFunction. This means that Functions can now return an IEnumerable<IBaseFunction>.
What still doesn't work, however, is the way I build the CLI menu: the interface must expose the Execute method, and so we're back to square one: I can't add that method to the interface, because its return type is generic and the interface has no reference to the type parameter.
Here I'm kind of stuck.
Is there any way to make this kind of framework work without changing all my return types to object (or maybe struct?) considering that I may also need to return non-numeric types?
Assuming that input and result can be anything, you need something like this:
public abstract class BaseFunction
{
public abstract string Name { get; }
public abstract int Index { get; }
public object Execute() => Execute(ReadInput());
private object ReadInput()
{
// out of scope
return null;
}
protected abstract object Execute(object input);
}
public abstract class BaseFunction<TInput, TResult> : BaseFunction
{
protected sealed override object Execute(object input) => Execute(ConvertInput(input));
protected abstract TInput ConvertInput(object input);
protected abstract TResult Execute(TInput input);
}
public sealed class SampleFunction : BaseFunction<string, long>
{
public override string Name => "Returns string length";
public override int Index => 0;
protected override string ConvertInput(object input) => (string)input;
protected override long Execute(string input) => input.Length;
}
This still allows you to combine functions into an IEnumerable<BaseFunction> and execute them, but also lets you work with a strongly-typed input and result when implementing a particular function.
(I've modified BaseFunction a little to drop the out parameter.)
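For example, functions with different type parameters can still be collected and executed through the non-generic base (a usage sketch; the list here stands in for your reflection-based Functions() discovery, and it assumes ReadInput() is implemented to return an actual string rather than the null stub above):
IEnumerable<BaseFunction> functions = new List<BaseFunction> { new SampleFunction() };
foreach (var function in functions)
{
    // Execution goes through the non-generic base, while SampleFunction itself
    // works with a strongly-typed string input and long result.
    Console.WriteLine($"{function.Name}: {function.Execute()}");
}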
If you change BaseFunction to BaseFunction<TResult>:
public abstract class BaseFunction<TResult>
why not then just change the signature of Functions to return BaseFunction<TResult>?
public static class FunctionClass<TResult>
{
public static IEnumerable<BaseFunction<TResult>> Functions()
{
Update:
Extracting a base interface to find commonality, then setting up TResult in the abstract class, seems to work. I also refined the LINQ query a bit.
using System;
using System.Collections.Generic;
using System.Linq;
using System.Reflection;
namespace MyNamespace
{
class Program
{
static void Main(string[] args)
{
Console.WriteLine("Hello World!");
var list = FunctionClass.Functions();
foreach(var item in list)
{
Console.WriteLine($"{item.Name} - {item.GetType().Name}");
if(item is BaseFunction<int>)
{
Console.WriteLine($"int result {((BaseFunction<int>)item).Execute()}");
}
if (item is BaseFunction<long>)
{
Console.WriteLine($"long result {((BaseFunction<long>)item).Execute()}");
}
}
Console.WriteLine("\n\nPress Any Key to Close");
Console.ReadKey();
}
}
public class FunctionClass
{
private static Type[] GetTypesInNamespace(Assembly assembly, string nameSpace)
{
return
assembly.GetTypes()
.Where(t => String.Equals(t.Namespace, nameSpace, StringComparison.Ordinal))
.ToArray();
}
public static IEnumerable<IBaseFunction> Functions()
{
return GetTypesInNamespace(Assembly.GetExecutingAssembly(), "MyNamespace.Code")
.Where(type => type.IsClass && typeof(IBaseFunction).IsAssignableFrom(type))
.Select(type => (IBaseFunction)Activator.CreateInstance(type))
.OrderBy(type => ((IBaseFunction)type).Index);
}
}
}
namespace MyNamespace
{
public interface IBaseFunction
{
public string Name { get; }
public long Index { get; }
}
public abstract class BaseFunction<TResult> : IBaseFunction
{
public virtual string Name { get; }
public virtual long Index { get; }
public TResult Execute()
{
Execute("foo" /* out of scope */, out TResult result);
return result;
}
internal abstract void Execute(string input, out TResult rResult);
}
}
namespace MyNamespace.Code
{
public class SampleFunction : BaseFunction<int>
{
public override string Name => "Sample Function1 - with int";
public override long Index => 1;
internal override void Execute(string input, out int rResult)
{
rResult = 0;
}
}
public class SampleFunction2 : BaseFunction<long>
{
public override string Name => "Sample Function2 - with long";
public override long Index => 1;
internal override void Execute(string input, out long result)
{
result = 0;
}
}
}
I am building my personal automation framework with fluent syntax. Here is a pipeline example
var pipeline = Core.Runner.CreatePipeline()
.BeginMany(Sources.Csv(@"..."))
.ThenTransform<dynamic, string[]>(record => record.id.Split(":"))
.ThenTransform<string[], (string, string)>(record => (record[0], record[1]))
.ThenTransform<(string, string), dynamic>(new {...})
I wonder, is there any way to improve usability by automatically setting TInput of the next ThenTransform<TInput, TOutput> in the chain equal to the previous TOutput, and validating the types on build?
Desired outcome
var pipeline = Core.Runner.CreatePipeline()
.BeginMany(Sources.Csv(@"..."))
.ThenTransform<string[]>(record => record.id.Split(":")) // TInput is dynamic, TOutput is string[]
.ThenTransform<(string, string)>(record => (record[0], record[1])) // TInput is string[], TOutput is (string, string)
.ThenTransform<dynamic>(new {...}) // etc
An even better outcome, which might be possible because the lambda knows its return type:
var pipeline = Core.Runner.CreatePipeline()
.BeginMany(Sources.Csv(@"..."))
.ThenTransform(record => record.id.Split(":")) // TInput is dynamic, TOutput is string[]
.ThenTransform(record => (record[0], record[1])) // TInput is string[], TOutput is (string, string)
.ThenTransform(new {...}) // etc
You have not specified how you are storing state here, but for the generics you can do something like this:
using System;
namespace ConsoleApp16
{
class Program
{
static void Main(string[] args)
{
var pipeline = Core.Runner.CreatePipeline<dynamic>()
.BeginMany(Sources.Csv(@"..."))
// Type cannot be inferred from dynamic
.ThenTransform<string[]>(record => record.id.Split(":"))
.ThenTransform(record => (record[0], record[1]))
.ThenTransform(s => s.Item1);
}
}
internal class Sources
{
public static object Csv(string s)
{
return new object();
}
}
internal class Core
{
public class Runner
{
public static Pipeline<TInput> CreatePipeline<TInput>()
{
return new Pipeline<TInput>(new PipelineState());
}
}
}
internal class PipelineState
{
public bool MyState { get; set; }
}
internal class Pipeline<TInput>
{
private readonly PipelineState _pipelineState;
public Pipeline(PipelineState pipelineState)
{
_pipelineState = pipelineState;
}
public Pipeline<TInput> BeginMany(object csv)
{
// Update state
return this;
}
public Pipeline<TOutput> ThenTransform<TOutput>(Func<TInput, TOutput> func)
{
// Update state
return new Pipeline<TOutput>(_pipelineState);
}
}
}
You can probably improve upon this by having different PipelineBuilder classes that are returned by different methods. For instance BeginMany might return a class that has the ThenTransform method so that the order is enforced:
using System;
namespace ConsoleApp16
{
class Program
{
static void Main(string[] args)
{
var pipeline = Core.Runner.CreatePipeline()
.BeginMany(Sources.Csv(@"..."))
// Type cannot be inferred from dynamic
.ThenTransform<string[]>(record => record.id.Split(":"))
.ThenTransform(record => (record[0], record[1]))
.ThenTransform(s => s.Item1)
.Build();
}
}
internal class Sources
{
public static Source<dynamic> Csv(string s)
{
return new Source<dynamic>();
}
}
internal class Source<T>
{
}
internal class Core
{
public class Runner
{
public static PipelineBuilder CreatePipeline()
{
return new PipelineBuilder(new PipelineState());
}
}
}
internal class PipelineState
{
public bool MyState { get; set; }
}
internal class PipelineBuilder
{
protected readonly PipelineState State;
public PipelineBuilder(PipelineState state)
{
State = state;
}
public PipelineBuilder<TInput> BeginMany<TInput>(Source<TInput> source)
{
// Update state
return new PipelineBuilder<TInput>(State);
}
public Pipeline Build()
{
// Populate from state
return new Pipeline();
}
}
internal class PipelineBuilder<TInput> : PipelineBuilder
{
public PipelineBuilder(PipelineState pipelineState) : base(pipelineState)
{
}
public PipelineBuilder<TOutput> ThenTransform<TOutput>(Func<TInput, TOutput> func)
{
// Update state
return new PipelineBuilder<TOutput>(State);
}
}
internal class Pipeline
{
}
}
It's worth looking into the builder pattern; when combined with interfaces and extension methods it can be pretty powerful. One great example is Microsoft.Extensions.Configuration: https://github.com/dotnet/extensions/tree/release/3.1/src/Configuration
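As a small illustration of combining the builder with extension methods, you could layer a friendlier entry point on top of the classes above (BeginCsv is a hypothetical name, not part of the code above):
internal static class PipelineBuilderExtensions
{
    // Wraps BeginMany(Sources.Csv(path)) behind a single, more readable call.
    public static PipelineBuilder<dynamic> BeginCsv(this PipelineBuilder builder, string path)
        => builder.BeginMany(Sources.Csv(path));
}
// Usage: Core.Runner.CreatePipeline().BeginCsv(@"...").ThenTransform<string[]>(r => r.id.Split(":"))...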
I am looking for a way to cast object variable into type with generic type argument specified by other variable of type Type.
I am limited to .NET 3.5, so no dynamic can be used :(
Main idea here is that I have access to a dictionary:
Dictionary<Type, object> data;
Data is added to that dictionary only in the form of:
data.Add(typeof(T), new DataSub<T> { Value = someValueOfTypeT });
The problem is that when I try to reverse the process:
foreach(var dataType in data.Keys) {
var dataValue = data[dataType];
ProcessDataValue(dataType, dataValue);
}
Now the question is: how do I cast the object to the appropriate DataSub<T>?
Simplified DataSub.cs:
public class DataSub<T>
{
private T _cache;
public T Value {
get { return _cache; }
set { _cache = value; }
}
}
How it could work in ProcessDataValue:
public void ProcessDataValue(Type dataType, object dataValue)
{
var data = dataValue as DataSub<dataType>;
if (data == null) return;
AddProcessedDataValue(dataType, data.Value.ToString());
}
If you can make minimal changes to the classes you posted and if, as shown in your example, all you do with DataSub.Value is invoke ToString, you may be able to obtain the result you need with:
public interface IDataSub {
bool MatchesType(Type t);
object GetValue();
}
public class DataSub<T> : IDataSub {
private T _cache;
public T Value {
get { return _cache; }
set { _cache = value; }
}
public bool MatchesType(Type t) {
return typeof(T) == t; // or something similar, in order to handle inheritance
}
public object GetValue() {
return Value;
}
}
public class Client {
Dictionary<Type, IDataSub> data = new Dictionary<Type, IDataSub>() ;
public void AddData<T>(T someValueOfTypeT) {
data.Add(typeof(T), new DataSub<T> { Value = someValueOfTypeT });
}
public void UseData() {
foreach(var dataType in data.Keys) {
var dataValue = data[dataType];
ProcessDataValue(dataType, dataValue);
}
}
public void ProcessDataValue(Type dataType, IDataSub dataValue)
{
if(dataValue.MatchesType(dataType))
AddProcessedDataValue(dataType, dataValue.GetValue().ToString());
}
}
If the use of DataSub.Value.ToString is only an example, and in the real world you need to access DataSub.Value through its type T, you should apply a broader reworking of your code.
What do you think about the following approach? It is an application of the pattern I like to call set of responsibility (I wrote the linked post about this topic), a variation of GoF's chain of responsibility:
public interface IDataSub {
object GetValue();
}
public class DataSub<T> : IDataSub {
private T _cache;
public T Value {
get { return _cache; }
set { _cache = value; }
}
public object GetValue() {
return Value;
}
}
public interface IDataHandler {
bool CanHandle(Type type);
void Handle(object data);
}
public class Client {
private readonly Dictionary<Type, IDataSub> data = new Dictionary<Type, IDataSub>();
private readonly IList<IDataHandler> handlers = new List<IDataHandler>();
public void AddData<T>(T someValueOfTypeT) {
data.Add(typeof(T), new DataSub<T> { Value = someValueOfTypeT });
}
public void RegisterHandler(IDataHandler handler) {
handlers.Add(handler);
}
public void UseData() {
foreach(var dataType in data.Keys) {
handlers.FirstOrDefault(h => h.CanHandle(dataType))?.Handle(data[dataType].GetValue());
}
}
// Lambda-free version
// public void UseData() {
// foreach(var dataType in data.Keys) {
// for (int i = 0; i < handlers.Count; i++) {
// if (handlers[i].CanHandle(dataType)) {
// handlers[i].Handle(data[dataType].GetValue());
// break; // I don't like breaks very much...
// }
// }
// }
// }
}
class StringDataHandler : IDataHandler {
public bool CanHandle(Type type) {
// Your logic to check if this handler implements logic applicable to instances of type
return typeof(string) == type;
}
public void Handle(object data) {
string value = (string) data;
// Do something with string
}
}
class IntDataHandler : IDataHandler {
public bool CanHandle(Type type) {
// Your logic to check if this handler implements logic applicable to instances of type
return typeof(int) == type;
}
public void Handle(object data) {
int value = (int) data;
// Do something with int
}
}
This approach allows you to decouple the data storage and data iteration logic from the data-handling logic specific to each data type: the IDataHandler implementations know what type of data they can handle and cast the generic object reference to the desired type. If you prefer, you can merge the CanHandle method into the Handle method, removing the former and changing UseData to:
public void UseData() {
foreach(var dataType in data.Keys) {
foreach(var handler in handlers) {
handler.Handle(dataType, data[dataType].GetValue());
}
}
}
and handler implementations to
class IntDataHandler : IDataHandler {
public void Handle(Type dataType, object data) {
if(typeof(int) == dataType) {
int value = (int) data;
// Do something with int
}
}
}
This variant is slightly more type-safe, because in the first variant it was already possible to call the Handle method without a previous call to CanHandle.
If you liked this approach, you can take it further, simplifying your data structure and converting data from an IDictionary to an IList:
public interface IDataSub {
object GetValue();
}
public class DataSub<T> : IDataSub {
private T _cache;
public T Value {
get { return _cache; }
set { _cache = value; }
}
public object GetValue() {
return Value;
}
}
public interface IDataHandler {
bool CanHandle(object data);
void Handle(object data);
}
public class Client {
private readonly IList<IDataSub> data = new List<IDataSub>();
private readonly IList<IDataHandler> handlers = new List<IDataHandler>();
public void AddData<T>(T someValueOfTypeT) {
data.Add(new DataSub<T> { Value = someValueOfTypeT });
}
public void RegisterHandler(IDataHandler handler) {
handlers.Add(handler);
}
public void UseData() {
foreach(var dataItem in data) {
var value = dataItem.GetValue();
handlers.FirstOrDefault(h => h.CanHandle(value))?.Handle(value);
}
}
// Lambda-free version as above...
}
class StringDataHandler : IDataHandler {
public bool CanHandle(object data) {
// Your logic to check if this handler implements logic applicable to instances of String
return data is string;
}
public void Handle(object data) {
string value = (string) data;
// Do something with string
}
}
class IntDataHandler : IDataHandler {
public bool CanHandle(object data) {
// Your logic to check if this handler implements logic applicable to instances of int
return data is int;
}
public void Handle(object data) {
int value = (int) data;
// Do something with int
}
}
The CanHandle-free variant can simplify the IDataHandler interface and its implementations in this case, too.
I hope my answer helps you resolve your design scenario; I built it upon an approach I like very much, because it allows you to apply subtype-specific logic to instances of different classes, provided they share a common superclass (object in my code samples).
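To tie the pieces together, usage of the list-based Client could look roughly like this (a sketch; the sample values are illustrative):
var client = new Client();
// Register one handler per supported type.
client.RegisterHandler(new StringDataHandler());
client.RegisterHandler(new IntDataHandler());
// Store strongly-typed values...
client.AddData("hello");
client.AddData(42);
// ...and let the first matching handler process each stored value.
client.UseData();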
So I am using Simple.Mocking to mock some interfaces in my tests. Some methods receive custom objects:
public class MyObj
{
public int Attr { get; set; }
public override bool Equals(object obj)
{
return Equals(obj as MyObj);
}
public override int GetHashCode()
{
return Attr;
}
private bool Equals(MyObj myObj)
{
return myObj != null && Attr == myObj.Attr;
}
}
public interface IFoo
{
void Show(MyObj o);
}
public class ObjUnderTest
{
public ObjUnderTest(IFoo foo)
{
var o = new MyObj { Attr = 1 };
foo.Show(o);
}
}
[TestClass]
public class TestClasse
{
[TestMethod]
public void TestShow()
{
var foo = Mock.Interface<IFoo>();
var myObj = new MyObj { Attr = 1 };
Expect.Once.MethodCall(() => foo.Show(myObj));
var objectUnderTest = new ObjUnderTest(foo);
AssertExpectations.IsMetFor(foo);
}
}
The problem is that the test always fails, even when Show is called with an object whose Attr equals 1. It only passes if I write the expectation like this:
Expect.Once.MethodCall(()=> foo.Show(Any<MyObj>.Value));
This is not what I need. I know it fails because those are different object instances, but I have tried overriding MyObj's Equals and GetHashCode with no success.
Any Ideas?
If the desired outcome is to validate the input, you can try specifying the expectation with a predicate:
Expect.Once.MethodCall(()=> foo.Show(Any<MyObj>.Value.Matching(obj => obj.Attr == 1)));
Source: the project README on GitHub - Using "wildcard" parameter values
[TestClass]
public class TestClasse {
[TestMethod]
public void TestShow() {
//Arrange
var foo = Mock.Interface<IFoo>();
Expect.Once.MethodCall(()=> foo.Show(Any<MyObj>.Value.Matching(obj => obj.Attr == 1)));
//Act
var objectUnderTest = new ObjUnderTest(foo);
//Assert
AssertExpectations.IsMetFor(foo);
}
}
I needed to break up a WCF service contract that had a massive interface and ClientBase class into smaller classes. All of the smaller classes are similar but have different operation contracts. I want to be able to expose the operation contract methods of all the new sub-classes through a single class for backwards compatibility. Ideally it would look something like this:
public class MainClient {
public MainClient() {
Sub1 = new Sub1Client();
Sub2 = new Sub2Client();
}
public static Sub1Client Sub1;
public static Sub2Client Sub2;
}
I would then want to be able to call methods from Sub1 and Sub2 as if those methods were defined in MainClient. So instead of calling (new MainClient()).Sub1.Method1() I would call (new MainClient()).Method1() where Method1 still exists in the Sub1Client class.
Is this possible?
I'm not sure I clearly understand your question, but check this solution:
public interface IFirst
{
void Method1(string a);
}
public interface ISecond
{
double Method2(int b, bool a);
}
public interface IComplex : IFirst, ISecond
{
}
public class MyException : Exception
{
public MyException(string message) : base(message)
{
}
}
public class Sub1Client : IFirst
{
public void Method1(string a)
{
Console.WriteLine("IFirst.Method1");
Console.WriteLine(a);
}
}
public class Sub2Client : ISecond
{
public double Method2(int b, bool a)
{
Console.WriteLine("ISecond.Method2");
return a ? b : -b;
}
}
public class MainClient : IComplex
{
public MainClient()
{
Sub1 = new Sub1Client();
Sub2 = new Sub2Client();
}
public static Sub1Client Sub1;
public static Sub2Client Sub2;
private T FindAndInvoke<T>(string methodName, params object[] args)
{
foreach(var field in this.GetType().GetFields(BindingFlags.Public | BindingFlags.Static))
{
var method = field.FieldType.GetMethod(methodName);
if(method != null)
return (T)method.Invoke(field.GetValue(this), args);
}
throw new MyException("Method was not found!");
}
public void Method1(string a)
{
FindAndInvoke<object>(MethodBase.GetCurrentMethod().Name, a);
}
public double Method2(int b, bool a)
{
return FindAndInvoke<double>(MethodBase.GetCurrentMethod().Name, b, a);
}
}
public static void Main()
{
var test = new MainClient();
test.Method1("test");
Console.WriteLine(test.Method2(2, true));
}