Does anyone know of a way to intercept dynamic method calls (particularly those that are going to raise RuntimeBinderExceptions) with a RealProxy? I was hoping to catch the exception and implement 'method missing' on top of that, but it appears to be thrown before the interceptor gets a look-in.
My test just looks like:
dynamic hello = MethodMissingInterceptor<DynamicObject>.Create();
Assert.AreEqual("World", hello.World());
Where World isn't actually implemented on DynamicObject. The interceptor is pretty straightforward - I was hoping to check IMethodReturnMessage.Exception for RuntimeBinderException and forward on to something like:
public IMessage MethodMissing(IMethodCallMessage call)
{
return new ReturnMessage(call.MethodBase.Name, new object[0], 0, call.LogicalCallContext, call);
}
Unfortunately, all I see in my interceptor are some calls to GetType, and not the non-existent World method.
Failing that - does anyone know if there's a DynamicProxy version running happily on .NET 4.0 yet that might have tackled the problem?
I'll start with the long answer. Every bind of a dynamic operation in C# does approximately these three things in this order:
Ask the object to bind itself if it implements IDynamicMetaObjectProvider or is a COM object, and if that fails, then...
Bind the operation to an operation on a plain-old-clr-object using reflection, and if that fails, then...
Return a DynamicMetaObject that represents a total failure to bind.
You're seeing the GetType calls because in step 2, the C# runtime binder is reflecting over you to try to figure out if you have a "World" method that is appropriate to call, and this is happening because the IDynamicMetaObjectProvider implementation of hello, if there is one, couldn't come up with anything special to do.
Unfortunately for you, by the time the RuntimeBinderException is thrown, we are no longer binding. The exception comes out of the execution phase of the dynamic operation, in response to the meta object returned due to step 3. The only opportunity for you to catch it is at the actual call site.
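To make that concrete, here's a minimal illustration (my own snippet, not from the question) of the only place the exception can be handled:

using System.Dynamic;
using Microsoft.CSharp.RuntimeBinder;

dynamic hello = new ExpandoObject();
try
{
    hello.World();   // no such member: the bind fails and the failure executes here
}
catch (RuntimeBinderException)
{
    // By now binding is over; all you can do is react, not re-bind.
}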
So that strategy isn't going to work out for you if you want to implement method_missing in C#. You do have some options though.
One easy option is to implement IDynamicMetaObjectProvider in your MethodMissingInterceptor, and defer to the IDMOP implementation of the wrapped object. In case of failure on the part of the inner IDMOP, you can bind to whatever you want (perhaps a call to a method_missing delegate stored in the interceptor). The downside here is that this only works for objects that are known to be dynamic objects, e.g. those that implement IDMOP to begin with. This is because you are basically inserting yourself between steps 1 and 2.
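A rough sketch of that first option (the MethodMissingWrapper name and the methodMissing delegate are entirely my own, not an existing API) might look like this:

using System;
using System.Dynamic;

public class MethodMissingWrapper : DynamicObject
{
    private readonly DynamicObject inner;
    private readonly Func<string, object[], object> methodMissing;

    public MethodMissingWrapper(DynamicObject inner, Func<string, object[], object> methodMissing)
    {
        this.inner = inner;
        this.methodMissing = methodMissing;
    }

    public override bool TryInvokeMember(InvokeMemberBinder binder, object[] args, out object result)
    {
        // First, let the wrapped dynamic object try to bind the call itself.
        if (inner != null && inner.TryInvokeMember(binder, args, out result))
            return true;

        // The inner IDMOP had nothing to offer, so route to method_missing
        // instead of letting the binder produce a RuntimeBinderException.
        result = methodMissing(binder.Name, args);
        return true;
    }
}

With something like that in place, dynamic hello = new MethodMissingWrapper(someDynamicObject, (name, args) => name); would make hello.World() return "World" instead of throwing.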
Another alternative I can think of is to implement IDynamicMetaObjectProvider, and in it, respond positively to every bind, returning a call to a method that (a) produces the same code the C# compiler would have produced to bind in the first place, and (b) catches RuntimeBinderException to call a method_missing method. The downside here is that it would be quite complicated--you'd need to generate arbitrary delegate types and the IL that uses them, against the public types in the C# runtime binder assembly which, frankly, are not meant for public consumption. But at least you'd get method missing against all operations.
I am sure there are other strategies I have not thought of, such as you seem to be hinting at about using remoting proxies. I can't imagine what they look like though and I can't say if they'd be successful.
The crux of the problem here is that C# 4.0 does not have a design that anticipates your desire to do this. Specifically, you cannot easily insert yourself between steps 2 and 3. That brings me to the short answer, which is sorry, C# 4.0 does not have method_missing.
Often, a developer on my team writes code in a loop that makes a relatively slow call (e.g. database access, a web service call, or some other slow method). This is a super common mistake.
Yes, we practice code reviews, and we try to catch these and fix them before merging. However, failing early is better, right?
So is there a way to catch this mistake via the compiler?
Example:
Imagine this method
public ReturnObject SlowMethod(Something thing)
{
// method work
}
Below, the method is called in a loop, which is a mistake.
public void Call(IEnumerable<Something> things)
{
    foreach (var thing in things)
        SlowMethod(thing); // Should throw compiler error or warning in a loop
}
Is there any way to decorate the above SlowMethod() with an attribute or compiler statement so that it would complain if used in a loop?
No, there is nothing in regular C# to prevent a method being used in a loop.
Your options:
discourage usage in a loop by providing easier-to-use alternatives. A second (or only) method that deals with whole collections will likely discourage callers from writing the loop themselves, enough that it is no longer a major concern.
try to write your own code analysis rule (starter tutorial: https://learn.microsoft.com/en-us/dotnet/csharp/roslyn-sdk/tutorials/how-to-write-csharp-analyzer-code-fix); a rough sketch of that approach follows this list.
add run-time protection to the method if it is called more often than you'd like.
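Here's a rough sketch of the analyzer route, assuming you define a marker attribute (I'm calling it [Slow] here; that attribute is made up) and apply it to SlowMethod. It only looks for invocations lexically inside loop statements, so treat it as a starting point rather than a complete solution:

using System.Collections.Immutable;
using System.Linq;
using Microsoft.CodeAnalysis;
using Microsoft.CodeAnalysis.CSharp;
using Microsoft.CodeAnalysis.CSharp.Syntax;
using Microsoft.CodeAnalysis.Diagnostics;

[DiagnosticAnalyzer(LanguageNames.CSharp)]
public class SlowCallInLoopAnalyzer : DiagnosticAnalyzer
{
    private static readonly DiagnosticDescriptor Rule = new DiagnosticDescriptor(
        id: "SLOW001",
        title: "Slow method called in a loop",
        messageFormat: "'{0}' is marked [Slow] and is being called inside a loop",
        category: "Performance",
        defaultSeverity: DiagnosticSeverity.Warning,
        isEnabledByDefault: true);

    public override ImmutableArray<DiagnosticDescriptor> SupportedDiagnostics
        => ImmutableArray.Create(Rule);

    public override void Initialize(AnalysisContext context)
    {
        context.ConfigureGeneratedCodeAnalysis(GeneratedCodeAnalysisFlags.None);
        context.EnableConcurrentExecution();
        context.RegisterSyntaxNodeAction(AnalyzeInvocation, SyntaxKind.InvocationExpression);
    }

    private static void AnalyzeInvocation(SyntaxNodeAnalysisContext context)
    {
        var invocation = (InvocationExpressionSyntax)context.Node;

        // Is the invocation lexically inside a loop?
        bool inLoop = invocation.Ancestors().Any(a =>
            a is ForStatementSyntax || a is ForEachStatementSyntax ||
            a is WhileStatementSyntax || a is DoStatementSyntax);
        if (!inLoop)
            return;

        // Does the target method carry the (hypothetical) [Slow] attribute?
        if (context.SemanticModel.GetSymbolInfo(invocation).Symbol is IMethodSymbol method &&
            method.GetAttributes().Any(a => a.AttributeClass?.Name == "SlowAttribute"))
        {
            context.ReportDiagnostic(Diagnostic.Create(Rule, invocation.GetLocation(), method.Name));
        }
    }
}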
Apparently it does make sense to your developers to invoke those slow methods in a loop - you're trying to put work into preventing that, but that's putting work into something fundamentally negative. Why not do something positive instead? Clearly, you've provided an API that's convenient to use in a loop. So, provide some alternatives that are easier to use correctly where an incorrect use in a loop would formerly take place, like:
an iterable-based API that would make the loop implicit, removing some of the latency since you'd have a full view of what will be iterated and can hide the latency appropriately (a brief sketch follows this list),
an async API that won't block the thread, with example code showing how to use it in the typical situations you've encountered thus far; remember that an API that's too hard to use correctly won't get used!
a lowest-common-denominator API: split the methods into a requester and a result provider, so that there'd naturally be two loops: one to submit all the requests, another to collect and process the results (I dislike this approach, since it doesn't make the code any nicer)
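For instance, here's a sketch of the first two bullets combined. The WorkService class name is mine, Something and ReturnObject come from the question, and Task.Run is only a placeholder for a genuinely asynchronous implementation:

using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;

public class WorkService
{
    // The existing synchronous method from the question, assumed to live on this class.
    public ReturnObject SlowMethod(Something thing)
    {
        // ... slow work (database, web service, ...) ...
        return null;
    }

    // Placeholder single-item async version; a real implementation would be
    // genuinely asynchronous (an async database or HTTP call) rather than Task.Run.
    public Task<ReturnObject> SlowMethodAsync(Something thing)
        => Task.Run(() => SlowMethod(thing));

    // Collection-based overload: the loop now lives inside the library, where the
    // requests can be issued concurrently and the latency overlapped.
    public async Task<IReadOnlyList<ReturnObject>> SlowMethodAsync(IEnumerable<Something> things)
    {
        var tasks = things.Select(thing => SlowMethodAsync(thing)).ToList(); // start everything
        return await Task.WhenAll(tasks);                                    // then await it all
    }
}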
I am a student and I am currently preparing for my OOP Basics Exam.
When, in the controller, you have some methods that return a value and others that are void, how do you invoke them without using an if-else statement?
In my code "status" is the only one which should return a string to be printed to the Console - the others are void. So I put an if-else and 2 methods in the CommandHandler.
Since I know "if-else" is a code smell, is there a higher-quality approach to deal with the situation?
if (commandName == "status")
{
this.Writer.WriteLine(this.CommandHandler.ExecuteStatusCommand(commandName));
}
else
{
this.CommandHandler.ExecuteCommand(commandName, commandParameters);
}
This is the project.
Thank you very much.
First, don't worry about if/else. If anybody tells you if/else is a code smell, put it through the Translator: What comes out is he's telling you he's too crazy, clueless, and/or fanatical to be taken seriously.
If by ill chance you get an instructor who requires you to say the Earth is flat to get an A, sure, tell him the Earth is flat. But if you're planning on a career or even a hobby as a navigator, don't ever forget that it's actually round.
So. It sounds to me like CommandHandler.ExecuteStatusCommand() executes the named command, which is implemented as a method somewhere. If the command method is void, ExecuteStatusCommand() returns null. Otherwise, the command method may return a string, in which case you want to write it to what looks like a stream.
OK, so one approach here is to say "A command is implemented via a method that takes a parameter and returns either null or a string representing a status. If it returns anything but null, write that to the stream".
This is standard stuff: You're defining a "contract". It's not at all inappropriate for command methods which actually return nothing to have a String return type, because they're fulfilling the terms of contract. "Return a string" is an option that's open to all commands; some take advantage, some don't.
This allows knowledge of the command's internals to be limited to the command method itself, which is a huge advantage. You don't need to worry about special cases at the point where you call the methods. The code below doesn't need to know which commands return a status and which don't. The commands themselves are given a means to communicate that information back to the caller, so only they need to know. It's incredibly beneficial to have a design which allows different parts of your code not to care about the details of other parts. Clean "interfaces" like this make that possible. The calling code gets simpler and stays simpler. Less code, with less need to change it over time, means less effort and fewer bugs.
As you noted, if you've got a "status" command that prints a result, and then later on you add a "print" command that also prints a result, you've got to not only implement the print command itself, but you've also got to remember to return to this part of your code and add a special case branch to the if/else.
That kind of tedious error-prone PITA is exactly the kind of nonsense OOP is meant to eliminate. If a new feature can be added without making a single edit to existing code, that's a sort of Platonic ideal of OOP.
So if ExecuteCommand() returns void, we'll want to be calling ExecuteStatusCommand() instead. I'm guessing at some things here. It would have been helpful if you had sketched out the semantics of those two methods.
var result = this.CommandHandler.ExecuteCommand(commandName, commandParameters);
if (result != null)
{
this.Writer.WriteLine(result);
}
If my assumptions about your design are accurate, that's the whole deal. commandParameters, like the status result, are an optional part of the contract. There's nothing inherently wrong with if/else, but sometimes you don't need one.
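On the CommandHandler side, one possible shape of that contract is sketched below. The dictionary dispatch and the BuildStatusReport/Restart methods are my own stand-ins, not taken from the linked project:

using System;
using System.Collections.Generic;

public class CommandHandler
{
    private readonly Dictionary<string, Func<string[], string>> commands;

    public CommandHandler()
    {
        // Every command has the same shape: take the parameters and return a
        // status string to print, or null when there is nothing to say.
        commands = new Dictionary<string, Func<string[], string>>
        {
            ["status"] = parameters => BuildStatusReport(),
            ["restart"] = parameters => { Restart(parameters); return null; }
        };
    }

    public string ExecuteCommand(string commandName, string[] commandParameters)
    {
        return commands[commandName](commandParameters);
    }

    // Made-up command methods standing in for the real ones in the project.
    private string BuildStatusReport() { return "All systems go"; }
    private void Restart(string[] parameters) { /* ... */ }
}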
I'm writing a library that has several public classes and methods, as well as several private or internal classes and methods that the library itself uses.
In the public methods I have a null check and a throw like this:
public int DoSomething(int number)
{
if (number == null)
{
throw new ArgumentNullException(nameof(number));
}
}
But then this got me thinking, to what level should I be adding parameter null checks to methods? Do I also start adding them to private methods? Should I only do it for public methods?
Ultimately, there isn't a uniform consensus on this. So instead of giving a yes or no answer, I'll try to list the considerations for making this decision:
Null checks bloat your code. If your procedures are concise, the null guards at the beginning of them may form a significant part of the overall size of the procedure, without expressing the purpose or behaviour of that procedure.
Null checks expressively state a precondition. If a method is going to fail when one of the values is null, having a null check at the top is a good way to demonstrate this to a casual reader without them having to hunt for where it's dereferenced. To improve on this, people often use helper methods with names like Guard.AgainstNull instead of writing the check out each time (a minimal sketch of such a helper follows this list).
Checks in private methods are untestable. By introducing a branch in your code which you have no way of fully traversing, you make it impossible to fully test that method. This conflicts with the point of view that tests document the behaviour of a class, and that that class's code exists to provide that behaviour.
The severity of letting a null through depends on the situation. Often, if a null does get into the method, it'll be dereferenced a few lines later and you'll get a NullReferenceException. This really isn't much less clear than throwing an ArgumentNullException. On the other hand, if that reference is passed around quite a bit before being dereferenced, or if throwing an NRE will leave things in a messy state, then throwing early is much more important.
Some libraries, like .NET's Code Contracts, allow a degree of static analysis, which can add an extra benefit to your checks.
If you're working on a project with others, there may be existing team or project standards covering this.
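For reference, the Guard.AgainstNull helper mentioned above is usually nothing more than this; the exact name and shape vary between codebases, this is just the conventional pattern:

using System;

public static class Guard
{
    // Throws if the value is null; otherwise does nothing.
    public static void AgainstNull(object value, string parameterName)
    {
        if (value == null)
        {
            throw new ArgumentNullException(parameterName);
        }
    }
}

// Usage at the top of a method:
// Guard.AgainstNull(customer, nameof(customer));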
If you're not a library developer, don't be defensive in your code
Write unit tests instead
In fact, even if you're developing a library, throwing is most of the time: BAD
1. Testing an int for null must never be done in C#:
It raises warning CS4072, because the comparison is always false.
2. Throwing an Exception means it's exceptional: abnormal and rare.
It should never be raised in production code, especially because traversing an exception stack trace can be a CPU-intensive task, and you'll never be sure where the exception will be caught, whether it's caught and logged or simply silently ignored (after killing one of your background threads), because you don't control the user code. There are no "checked exceptions" in C# (unlike Java), which means you never know, unless it's well documented, which exceptions a given method could raise. By the way, that kind of documentation must be kept in sync with the code, which is not always easy to do (it increases maintenance costs).
3. Exceptions increase maintenance costs.
As exceptions are thrown at runtime and only under certain conditions, they can be detected really late in the development process. As you may already know, the later an error is detected in the development process, the more expensive the fix will be. I've even seen exception-raising code make its way to production and not raise for a week, only to start raising every day thereafter (killing production; oops!).
4. Throwing on invalid input means you don't control input.
It's the case for public methods of libraries. However, if you can check it at compile time with another type (for example, a non-nullable type like int), then that's the way to go. And of course, as they are public, it's their responsibility to check the input.
Imagine a user who passes what he thinks is valid data and then, through a side effect, a method deep in the call stack throws an ArgumentNullException.
What will be his reaction?
How can he cope with that?
Will it be easy for you to provide an explanatory message?
5. Private and internal methods should never ever throw exceptions related to their input.
You may throw exceptions in your code because an external component (maybe a database, a file, or something else) is misbehaving and you can't guarantee that your library will continue to run correctly in its current state.
Making a method public doesn't mean that it should (only that it can) be called from outside of your library (look at Public versus Published from Martin Fowler). Use IoC, interfaces, and factories, and publish only what's needed by the user, while making the whole library's classes available for unit testing. (Or you can use the InternalsVisibleTo mechanism.)
6. Throwing exceptions without any explanation message is making fun of the user
No need to recall the feelings one has when a tool is broken without any clue on how to fix it. Yes, I know: you come to SO and ask a question...
7. Invalid input means it breaks your code
If your code can produce a valid output with the value then it's not invalid and your code should manage it. Add a unit test to test this value.
8. Think in user terms:
Do you like it when a library you use throws exceptions in your face? Like: "Hey, it's invalid; you should have known that!"
Even if, from your point of view, with your knowledge of the library internals, the input is invalid, how can you explain it to the user (be kind and polite):
Clear documentation (XML doc comments and an architecture summary may help).
Publish the XML doc with the library.
Clear error explanation in the exception if any.
Give the user a choice:
Look at the Dictionary class: which do you prefer? Which call do you think is faster? Which call can raise an exception?
Dictionary<string, string> dictionary = new Dictionary<string, string>();
string res;
dictionary.TryGetValue("key", out res);
or
var other = dictionary["key"];
9. Why not use Code Contracts?
It's an elegant way to avoid the ugly if-then-throw and to isolate the contract from the implementation, permitting you to reuse the contract across different implementations at the same time. You can even publish the contract to your library's users to further explain how to use the library.
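For example, a minimal sketch with System.Diagnostics.Contracts; the Repository/Save names are made up, and note that Contract.Requires<TException> only throws the specified exception if the ccrewrite binary rewriter is enabled in the build:

using System;
using System.Diagnostics.Contracts;

public class Repository
{
    public void Save(object entity)
    {
        // The precondition is stated once, separately from the implementation.
        Contract.Requires<ArgumentNullException>(entity != null);

        // ... actual save logic ...
    }
}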
As a conclusion: even though you can easily use throw, and even though you see exceptions raised when you use the .NET Framework, that doesn't mean throwing should be used without caution.
Here are my opinions:
General Cases
Generally speaking, it is better to check for any invalid inputs before you process them in a method, for robustness reasons - be it a private, protected, internal, protected internal, or public method. Although there is some performance cost paid for this approach, in most cases it is worth doing rather than spending more time debugging and patching the code later.
Strictly Speaking, however...
Strictly speaking, however, it is not always necessary to do so. Some methods, usually private ones, can be left without any input checking provided you can fully guarantee that there isn't a single call to the method with invalid inputs. This may give you some performance benefit, especially if the method is called frequently to do some basic computation or action. In such cases, checking input validity may impair performance significantly.
Public Methods
Now, public methods are trickier. This is because, more strictly speaking, although the access modifier alone can tell who can use the methods, it cannot tell who will use them. Moreover, it also cannot tell how the methods are going to be used (that is, whether they are going to be called with invalid inputs in the given scopes or not).
The Ultimate Determining Factor
Although access modifiers for methods in the code can hint at how to use the methods, ultimately it is humans who will use them, and it is up to the humans how they are going to use them and with what inputs. Thus, in some rare cases, it is possible to have a public method which is only called in some private scope, and in that private scope the inputs for the public method are guaranteed to be valid before it is called.
In such cases, even though the access modifier is public, there isn't any real need to check for invalid inputs, except for robust design reasons. And why is this so? Because there are humans who know completely when and how the method shall be called!
Here we can see there is no guarantee that public methods always require checking for invalid inputs. And if this is true for public methods, it must also be true for protected, internal, protected internal, and private methods as well.
Conclusions
So, in conclusion, we can say a couple of things to help us make decisions:
Generally, it is better to have checks for any invalid inputs, for robustness reasons, provided that performance is not at stake. This is true for any type of access modifier.
The invalid-input check can be skipped if the resulting performance gain is significant, provided it can also be guaranteed that the scope from which the method is called always gives it valid inputs.
Private methods are usually where we skip such checking, but there is no guarantee that we cannot do the same for public methods as well.
Humans are the ones who ultimately use the methods. Regardless of how the access modifiers hint at how the methods should be used, how the methods are actually used and called depends on the coders. Thus, we can only speak of general/good practice, without claiming it is the only way of doing things.
The public interface of your library deserves tight checking of preconditions, because you should expect the users of your library to make mistakes and violate the preconditions by accident. Help them understand what is going on in your library.
The private methods in your library do not require such runtime checking because you call them yourself. You are in full control of what you are passing. If you want to add checks because you are afraid to mess up, then use asserts. They will catch your own mistakes, but do not impede performance during runtime.
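For instance, a minimal sketch of that assert-based approach; the OrderProcessor/ProcessOrder names are made up:

using System.Diagnostics;

internal class OrderProcessor
{
    // Private helper: we own every call site, so an assert documents the
    // assumption without adding a runtime check to Release builds.
    private void ProcessOrder(object order)
    {
        Debug.Assert(order != null, "Callers must pass a non-null order.");

        // ... work with order ...
    }
}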
Though you tagged this language-agnostic, it seems to me that there probably isn't a general answer.
Notably, in your example you type-hinted the argument: with a language that enforces type hints, an error fires as soon as the function is entered, before you can take any action.
In such a case, the only solution is to have checked the argument before calling your function... but since you're writing a library, that makes no sense!
On the other hand, with no hinting, it remains realistic to check inside the function.
So at this point in the reasoning, I'd already suggest giving up hinting.
Now let's go back to your precise question: to what level should it be checked?
For a given piece of data, the check would happen only at the highest level where it can "enter" (there may be several such entry points for the same data), so logically it would concern only public methods.
That's the theory. But maybe you're planning a huge, complex library, so it might not be easy to be certain you've identified all the "entry points".
In that case, I'd suggest the opposite: consider simply applying your checks everywhere, then omit them only where you clearly see they're duplicated.
Hope this helps.
In my opinion you should ALWAYS check for "invalid" data - independently of whether it is a private or public method.
Looked at from the other way: why should you be able to work with something invalid just because the method is private? That doesn't make sense, right? Always try to use defensive programming and you will be happier in life ;-)
This is a question of preference. But consider instead why you are checking for null, or rather, checking for valid input. It's probably because you want to let the consumer of your library know when he/she is using it incorrectly.
Let's imagine that we have implemented a class PersonList in a library. This list can only contain objects of the type Person. We have also implemented some operations on our PersonList, and therefore we do not want it to contain any null values.
Consider the two following implementations of the Add method for this list:
Implementation 1
public void Add(Person item)
{
if(_size == _items.Length)
{
EnsureCapacity(_size + 1);
}
_items[_size++] = item;
}
Implementation 2
public void Add(Person item)
{
if(item == null)
{
throw new ArgumentNullException(nameof(item), "Cannot add null to PersonList");
}
if(_size == _items.Length)
{
EnsureCapacity(_size + 1);
}
_items[_size++] = item;
}
Let's say we go with implementation 1
Null values can now be added to the list
All operations implemented on the list will have to handle these null values
If we check for nulls and throw an exception in our operations, the consumer will be notified about the exception when he/she calls one of the operations, and at that point it will be very unclear what he/she has done wrong (it just wouldn't make any sense to go with this approach).
If we instead choose to go with implementation 2, we make sure input to our library has the quality that we require for our class to operate on it. This means we only need to handle this here and then we can forget about it while we are implementing our other operations.
It will also become clearer to the consumer that he/she is using the library in the wrong way when he/she gets an ArgumentNullException on .Add instead of in .Sort or similar.
To sum it up, my preference is to check for valid arguments when they are being supplied by the consumer and not being handled by the private/internal methods of the library. This basically means we have to check arguments in constructors/methods that are public and take parameters. Our private/internal methods can only be called from our public ones, and those have already checked the input, which means we are good to go!
Using Code Contracts should also be considered when verifying input.
I acknowledge that they can be useful, but I'm trying to wrap my head around when I would actually want to have a func as a parameter of a method.
public void WeirdMethod(int myNumber, Func<int> op);
In terms of design and functionality, could someone explain to me some circumstances where I would want to consider this? Theories of "reusability" aren't going to help me much. Real world scenarios would be best. Help me think like you lol.
Here's about all I know:
This would allow me to pass a delegate
This would allow me to use a lambda expression.
Yeap...
NOTE:
I know this thread will get closed since there's no "right" answer. But I think what clicked it for me just now was "delayed calculation".
Deferring operations until a later time. A very practical example is deferring change tracking until an object tree is fully populated. Each type or repository can tell you what it wants done, and the caller can decide when to actually do it.
Composition of logic (as Justin Niessner mentioned).
Abstraction, e.g. "here's a contract that has inputs and outputs, but I don't care what its implementation is as long as it fulfills the contract." For example, you could pass a "statusWriter" Func to a method which might write to a console, debug window, log file, database, or do nothing at all. All the consuming method knows is that it consumes a type and invokes it when desired.
Along the same lines, passing a Func to a method allows an abstracted and simple way of letting a where-style predicate be defined by the caller. I use this paradigm frequently to support a strongly-typed filter applied to a result (not talking about LINQ to SQL, just filtering a list of information as the caller sees fit); a small illustration follows this list.
Elegant functional paradigms, such as this example which demonstrates recursion using anonymous functions. These constructs would be verbose/impossible without the ability to pass one function to another (especially in an abbreviated form).
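As a small illustration of that caller-supplied-predicate item (the Filter method here is made up; in practice LINQ's Where already does exactly this):

using System;
using System.Collections.Generic;

public static class Filtering
{
    // The method owns the iteration; the caller owns the criteria.
    public static IEnumerable<string> Filter(IEnumerable<string> items, Func<string, bool> predicate)
    {
        foreach (var item in items)
        {
            if (predicate(item))
            {
                yield return item;
            }
        }
    }
}

// Usage: var shortNames = Filtering.Filter(names, name => name.Length <= 5);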
A general scenario is when you must pass a delayed calculation to your method. This is useful when calculating something is expensive, for example, when you cache something.
public Guid GetFromCache(string key, Func<Guid> make) {
Guid res;
if (!cache.TryGetValue(key, out res)) {
res = make();
cache.Add(key, res);
}
return res;
}
Now you can call this method as follows:
Guid guid = GetFromCache(myKey, () => database.MakeNewGuid());
If you had something asynchronous and you wanted to give it a callback method?
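For example (DownloadAsync here is a made-up helper, just to show the shape of a callback parameter):

using System;
using System.Net.Http;
using System.Threading.Tasks;

// Hypothetical helper: the caller hands over "what to do when you're done" as a delegate.
static async Task DownloadAsync(Uri address, Action<string> onCompleted)
{
    using (var client = new HttpClient())
    {
        string content = await client.GetStringAsync(address);
        onCompleted(content);   // the supplied callback runs once the work is finished
    }
}

// Usage:
// await DownloadAsync(new Uri("https://example.com"), content => Console.WriteLine(content.Length));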
They enable you to curry functions as well as use functional composition.
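A tiny illustration of both (nothing here is tied to any particular library):

using System;

Func<int, Func<int, int>> add = x => y => x + y;   // curried: one argument at a time
Func<int, int> addFive = add(5);                   // partially applied

Func<int, int> timesTwo = x => x * 2;
Func<int, int> addFiveThenDouble = x => timesTwo(addFive(x));  // composition

Console.WriteLine(addFiveThenDouble(3));           // prints 16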
While you've almost certainly used delegates before (since that's what events are), LINQ is a prime example for when passing a delegate as a function parameter is useful.
Think about Where. You're supplying a piece of logic (specifically a definition of what meets your criteria--whatever they are) to a function that uses it as part of its execution.
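For instance, the lambda you hand to Where is exactly such a caller-supplied piece of logic:

using System.Linq;

var numbers = new[] { 1, 5, 8, 12, 3 };
// The lambda is the caller's criterion; Where only knows how to apply it.
var bigOnes = numbers.Where(n => n > 4).ToList();   // 5, 8, 12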
In a thread resolved yesterday, #hvd showed me how to get "control" over exception handling by .Invoke when dealing with delegates of unknown type (an issue seen in libraries like Isis2, where the end-user provides polymorphic event handlers and the library type-matches to decide which to call). Hvd's suggestion revolved around knowing how many arguments the upcall handler received and then using that information to construct a generic of the right type, which allowed him to construct a dynamic object and invoke it. The sequence yielded full control over exception handling.
The core of his suggestion was that Isis2 might consider doing upcalls this way:
MethodInfo mi = typeof(Program).GetMethod("Foo", BindingFlags.Static | BindingFlags.NonPublic);
Delegate del = Delegate.CreateDelegate(typeof(Action<,>).MakeGenericType(mi.GetParameters().Select(p => p.ParameterType).ToArray()), mi);
((dynamic)del).Invoke(arg0, arg1);
Here's my question: Can anyone suggest a way to do this same thing that works for an arbitrary number of arguments? Clearly I can do a switch statement and write code for the case of 1 arg, 2, etc. But is there a way to do it where mi.GetParameters().Length tells us how many arguments?
As a capsule summary for those who don't want to click the link, the core issue is this: when doing these kinds of dynamic upcalls, the end-user (who registered the method being called) may throw an exception due to bugs. Turns out that when not running under Visual Studio -- when running directly in the CLR -- the C# .Invoke will catch and rethrow exceptions, packaging them as inner exceptions inside a TargetInvocationException. This unwinds the stack and causes the user to perceive the bug as having been some kind of problem with the code that called .Invoke (e.g. with MY code). This is why the C# reference manual argues that catch/rethrow is poor coding practice: one should only catch exceptions that one plans to handle...
hvd explained that this was basically because .Invoke had no clue as to the number or types of the arguments, and in that mode it apparently catches and rethrows exceptions for some reason. His workaround essentially pins down the number of arguments (the generic in the example: Action<,>), and this apparently is enough so that .Invoke doesn't do a "universal catch". But to use his example for arbitrary code, I need a case for each possible number of parameters. Doable (after all, who would ever want more than 16?) but ugly!
Hence today's challenge: Improve that code so that with a similar 3 line snippet of C# it works no matter how many parameters. Of course the resulting delegate needs to be callable too, presumably with a vector of objects, one per argument...
PS: One reason for pessimism: Action itself comes in 16 forms, with 1 to 16 arguments. So to me this suggests that the C# developers didn't see a more general way to do it, and ended up with the version that would correspond to me using a switch statement (and I guess the switch would have cases for 0 to 16 arguments, since I would need an Action<...> with N type arguments to handle N user-supplied arguments!)
I don't want to leave this open forever, so I've done what I could to understand the core issue, including downloading the code for .Invoke in Mono. As far as I can tell, the original problem is simply due to an optimization that favors faster invocations at the cost of catching exceptions this way when a dynamic Invoke is done on an object with an argument vector. The code for a dynamic delegate created using the generic template simply doesn't have this catch in it.
Not a great answer but without access to the .NET implementation of Invoke, it apparently won't be possible to give a better one.
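That said, for the narrower problem of building the strongly typed delegate without a per-arity switch, one possibility is Expression.GetDelegateType, which constructs the matching Action<...>/Func<...> type from an array of parameter (and return) types. This sketch only covers delegate creation; the dynamic call site still has to spell out the arguments, as in hvd's original snippet:

using System;
using System.Linq;
using System.Linq.Expressions;
using System.Reflection;

static Delegate MakeTypedDelegate(MethodInfo mi)
{
    // Parameter types followed by the return type (typeof(void) yields an Action<...>).
    Type[] signature = mi.GetParameters()
                         .Select(p => p.ParameterType)
                         .Concat(new[] { mi.ReturnType })
                         .ToArray();

    Type delegateType = Expression.GetDelegateType(signature);
    return Delegate.CreateDelegate(delegateType, mi);
}

// As before: ((dynamic)MakeTypedDelegate(mi)).Invoke(arg0, arg1 /*, ... */);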