Syntax alternatives to casting of dynamic objects - c#

I have an implementation of DynamicDictionary where all of the entries in the dictionary are of a known type:
public class FooClass
{
public void SomeMethod()
{
}
}
dynamic dictionary = new DynamicDictionary<FooClass>();
dictionary.foo = new FooClass();
dictionary.foo2 = new FooClass();
dictionary.foo3 = DateTime.Now; <--throws exception since DateTime is not FooClass
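For reference, a minimal DynamicDictionary<T> with this behaviour could be built on DynamicObject; the sketch below is hypothetical and not necessarily the implementation in question:
using System;
using System.Collections.Generic;
using System.Dynamic;

public class DynamicDictionary<T> : DynamicObject
{
    private readonly Dictionary<string, T> _items = new Dictionary<string, T>();

    public override bool TrySetMember(SetMemberBinder binder, object value)
    {
        // Reject anything that is not a T, mirroring the behaviour described above.
        if (!(value is T typed))
            throw new ArgumentException("Value must be of type " + typeof(T).Name);
        _items[binder.Name] = typed;
        return true;
    }

    public override bool TryGetMember(GetMemberBinder binder, out object result)
    {
        T value;
        bool found = _items.TryGetValue(binder.Name, out value);
        result = value;
        return found;
    }
}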
What I'd like is to be able to have Visual Studio Intellisense work when referencing a method of one of the dictionary entries:
dictionary.foo.SomeMethod() <--would like SomeMethod to pop up in intellisense
The only way I've found to do this is:
((FooClass)dictionary.foo).SomeMethod()
Can anyone recommend a more elegant syntax? I'm comfortable writing a custom implementation of DynamicDictionary with IDynamicMetaObjectProvider.
UPDATE:
Some have asked why dynamics and what my specific problem is. I have a system that lets me do something like this:
UI.Map<Foo>().Action<int, object>(x => x.SomeMethodWithParameters).Validate((parameters) =>
{
//do some method validation on the parameters
return true; //return true for now
}).WithMessage("The parameters are not valid");
In this case the method SomeMethodWithParameters has the signature
public void SomeMethodWithParameters(int index, object target)
{
}
What I have right now for registering validation for individual parameters looks like this:
UI.Map<Foo>().Action<int, object>(x => x.SomeMethodWithParameters).GetParameter("index").Validate((val) =>
{
return true; //valid
}).WithMessage("index is not valid");
What I'd like it to be is:
UI.Map<Foo>().Action<int, object>(x => x.SomeMethodWithParameters).index.Validate((val) =>
{
return true;
}).WithMessage("index is not valid");
This works using dynamics, but you lose intellisense after the reference to index - which is fine for now. The question is: is there a clever syntactical way (other than the ones mentioned above) to get Visual Studio to recognize the type somehow? So far it sounds like the answer is "no".
It seems to me that if there was a generic version of IDynamicMetaObjectProvider,
IDynamicMetaObjectProvider<T>
this could be made to work. But there isn't, hence the question.

In order to get intellisense, you're going to have to cast something to a value that is not dynamic at some point. If you find yourself doing this a lot, you can use helper methods to ease the pain somewhat:
GetFoo(dictionary.Foo).SomeMethod();
But that isn't much of an improvement over what you've got already. The only other way to get intellisense is to cast the value back to a non-dynamic type, or to avoid dynamic in the first place, which is usually the better choice:
typedDictionary["foo"].SomeMethod();
Your example makes it seem likely that you have specific expectations about the structure of your dynamic object. Consider whether there's a way to create a static class structure that would fulfill your needs.
Update
In response to your update: If you don't want to drastically change your syntax, I'd suggest using an indexer so that your syntax can look like this:
UI.Map<Foo>().Action<int, object>(x => x.SomeMethodWithParameters)["index"].Validate((val) => {...});
Here's my reasoning:
You only add four characters (and subtract one) compared to the dynamic approach.
Let's face it: you are using a "magic string." By requiring an actual string, this fact will be immediately obvious to programmers who look at this code. Using the dynamic approach, there's nothing to indicate that "index" is not a known value from the compiler's perspective.
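A rough sketch of what that indexer could look like on the object returned by Action<...>() follows; every type and member name here is invented for illustration and is not taken from the actual framework:
using System;
using System.Collections.Generic;

// Hypothetical fluent types; names are made up for this sketch.
public class ActionMapping
{
    private readonly Dictionary<string, ParameterValidation> _parameters =
        new Dictionary<string, ParameterValidation>();

    // ["index"] in the fluent chain resolves to this indexer.
    public ParameterValidation this[string parameterName]
    {
        get { return _parameters[parameterName]; }
    }
}

public class ParameterValidation
{
    private Func<object, bool> _validator;

    public ParameterValidation Validate(Func<object, bool> validator)
    {
        _validator = validator;   // stored and applied when the parameter value arrives
        return this;
    }

    public ParameterValidation WithMessage(string message)
    {
        // a real implementation would keep the message alongside the validator
        return this;
    }
}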
If you're willing to change things around quite a bit, you may want to investigate the way Moq plays with expressions in their syntax, particularly the It.IsAny<T>() method. It seems like you might be able to do something more along these lines:
UI.Map<Foo>().Action(
(x, v) => x.SomeMethodWithParameters(
v.Validate<int>(index => {return index > 1;})
.WithMessage("index is not valid"),
v.AlwaysValid<object>()));
Unlike your current solution:
This wouldn't break if you ended up changing the names of the parameters in the method signature: Just like the compiler, the framework would pay more attention to the location and types of the parameters than to their names.
Any changes to the method signature would cause an immediate flag from the compiler, rather than a runtime exception when the code runs.
Another syntax that's probably slightly easier to accomplish (since it wouldn't require parsing expression trees) might be:
UI.Map<Foo>().Action((x, v) => x.SomeMethodWithParameters)
.Validate(v => new{
index = v.ByMethod<int>(i => {return i > 1;}),
target = v.IsNotNull()});
This doesn't give you the advantages listed above, but it still gives you type safety (and therefore intellisense). Pick your poison.

Aside from Explicit Cast,
((FooClass)dictionary.foo).SomeMethod();
or Safe Cast,
(dictionary.foo as FooClass).SomeMethod();
the only other way to switch back to static invocation (which will allow intellisense to work) is to do an Implicit Cast:
FooClass foo = dictionary.foo;
foo.SomeMethod();
Declared casting is your only option; you can't use helper methods, because they will be dynamically invoked, giving you the same problem.
Update:
Not sure if this is more elegant but doesn't involve casting a bunch and gets intellisense outside of the lambda:
public class DynamicDictionary<T> : IDynamicMetaObjectProvider {
    ...
    public T Get(Func<dynamic, dynamic> arg) {
        return arg(this);
    }
    public void Set(Action<dynamic> arg) {
        arg(this);
    }
}
...
var dictionary = new DynamicDictionary<FooClass>();
dictionary.Set(d=>d.Foo = new FooClass());
dictionary.Get(d=>d.Foo).SomeMethod();

As has already been said (in the question and StriplingWarrior answer) the C# 4 dynamic type does not provide intellisense support. This answer is provided merely to provide an explanation why (based on my understanding).
To the C# compiler, dynamic is essentially nothing more than object, about which it has only limited compile-time knowledge of which members it supports. The difference is that, at run-time, dynamic attempts to resolve members called on its instances against the actual run-time type of the instance (providing a form of late binding).
Consider the following:
dynamic v = 0;
v += 1;
Console.WriteLine("First: {0}", v);
// ---
v = "Hello";
v += " World";
Console.WriteLine("Second: {0}", v);
In this snippet, v represents an instance of Int32 in the first section of code and an instance of String in the second. The use of the += operator actually differs between the two calls because the types involved are resolved at run-time (meaning the compiler doesn't understand or infer usage of the types at compile-time).
Now consider a slight variation:
dynamic v;
if (DateTime.Now.Second % 2 == 0)
v = 0;
else
v = "Hello";
v += 1;
Console.WriteLine("{0}", v);
In this example, v could potentially be either an Int32 or a String depending on the time at which the code is run. An extreme example, I know, though it clearly illustrates the problem.
Considering that a single dynamic variable could potentially represent any number of types at run-time, it would be nearly impossible for the compiler or IDE to make assumptions about the types it represents prior to its execution, so design-time or compile-time resolution of a dynamic variable's potential members is unreasonable (if not impossible).

Related

String.IsNullOrEmpty Monad

I have lately been dipping my feet into the fascinating world of functional programming, largely due to gaining experience in FP platforms like React and reading up on blogs the likes of https://blog.ploeh.dk/. As a primarily imperative programmer, this has been an interesting transition, but I am still trying to get my wet feet under me.
I am getting a little tired of using string.IsNullOrEmpty as such. So much of the time I find myself littering my code with expressions such as
_ = string.IsNullOrEmpty(str) ? "default text here" : str;
which isn't so bad as it goes, but say I wanted to chain a bunch of options past that null, e.g.
_ = string.IsNullOrEmpty(str) ? (
util.TryGrabbingMeAnother() ??
"default text here") : str;
Yuck. I'd much rather have something like this --
_ = monad.NonEmptyOrNull(str) ??
util.TryGrabbingMeAnother() ??
"default text here";
As the sample indicates, I am using a function that I am referring to as a monad to help reduce string.IsNullOrEmpty to a null-chainable operation:
public string NonEmptyOrNull(string source) =>
string.IsNullOrEmpty(source) ? null : source;
My question is, is this proper terminology? I know Nullable<T> can be considered a monad (see Can Nullable be used as a functor in C#? and Monad in plain English? (For the OOP programmer with no FP background)). These materials are good references, but I still don't have quite enough an intuitive grasp of the subject to know if I'm not just being confusing or inconsistent here. For example, I know monads are supposed to enable function chaining like I have above, but they are also "type amplifiers" -- so my little example seems to behave like a monad for enabling chaining, but it seems like converting null/empty to just null is a reduction rather than an amplification, so I question whether this actually is a monad. So for this particular application, could someone who has a little more experience with FP tell me whether or not it is accurate to call NonEmptyOrNull a monad, and why or why not?
A monad is a triple consisting of:
A single-argument type constructor M
A function unit of type a -> M a
A function join of type M (M a) -> M a
which satisfies the monad laws.
A type constructor is a type-level function which takes a number of type arguments and returns a type. C# doesn't have this feature directly, but when encoding monads you need a single-argument generic type, e.g. List<T>, Task<T>, etc. For some generic type M you therefore need two functions: one which constructs an instance of the generic type from a single value, and one which 'flattens' a nested instance of the type. For example, for List<T>:
public static List<T> unit<T>(T value) { return new List<T> { value }; }
public static List<T> join<T>(List<List<T>> l) { return l.SelectMany(x => x).ToList(); }
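For instance, assuming the two helpers above are in scope (and System.Linq is imported for SelectMany):
var single = unit(42);                       // [42]
var nested = new List<List<int>> { new List<int> { 1, 2 }, new List<int> { 3 } };
var flat = join(nested);                     // [1, 2, 3]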
From this definition you can see that a single function cannot satisfy the definition of a monad, so your example is not an example of a monad.
By this definition, Nullable<T> also does not have a monad instance since the nested type Nullable<Nullable<T>> cannot be constructed, so join cannot be implemented.
This is more like a filter operation. In C#, you'd idiomatically call it Where. It may be easier to see if we make the distinction between absent and populated values more explicit, which we can do with the Maybe container:
public static Maybe<T> Where<T>(
this Maybe<T> source,
Func<T, bool> predicate)
{
return source.SelectMany(x => predicate(x) ? x.ToMaybe() : Maybe.Empty<T>());
}
There are only a few containers that support filtering. The two most common ones are Maybe (AKA Option) and various collections (i.e. IEnumerable<T>).
In Haskell (which has a more powerful type system than C#) this is enabled via a class named MonadPlus, but I think that the type class Alternative actually ought to be enough to implement filtering. Alternative is described as a monoid on applicative functors. I'm not sure that that's particularly helpful, though.
With the above Where method, you could thread Maybe values through checks like IsNullOrEmpty like this:
var m = "foo".ToMaybe();
var inspected = m.Where(s => !string.IsNullOrEmpty(s));
This will let m pass through unchanged, while the following will not:
var m = "".ToMaybe();
var inspected = m.Where(s => !string.IsNullOrEmpty(s));
You could do the same with Nullable<T>, but I'll leave that as an exercise 😉
It's also possible that you could do it with the new nullable reference types language feature of C# 8, but I haven't tried yet.
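For what it's worth, a minimal sketch of that idea with nullable reference types might look like the following; this is my own illustration, not something from the Maybe library used above:
using System;

public static class NullableFilterExtensions
{
    // Keeps the value when the predicate holds; otherwise collapses to null,
    // which makes the result chainable with the ?? operator.
    public static T? Where<T>(this T? source, Func<T, bool> predicate)
        where T : class
    {
        return source != null && predicate(source) ? source : null;
    }
}

// Usage (assuming the helpers from the question):
// string? result = str.Where(s => !string.IsNullOrEmpty(s))
//     ?? util.TryGrabbingMeAnother()
//     ?? "default text here";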
I believe this is usually solved in the FP paradigm a step before validating null: the str value must never be null in the first place. Instead, the original method should return an empty collection. This way the chained methods do not have to validate null; the next operation simply will not execute, since there are no elements to operate on.
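For example, a producing method can return an empty sequence instead of null so that callers can chain operations freely; the names below are purely illustrative:
using System.Collections.Generic;
using System.Linq;

public static class Descriptions
{
    // Returns an empty sequence rather than null when nothing is found.
    public static IEnumerable<string> GetDescription(int id)
    {
        return Enumerable.Empty<string>();
    }
}

// Callers keep chaining; an empty sequence simply yields no elements:
// var text = Descriptions.GetDescription(42)
//     .Where(s => !string.IsNullOrEmpty(s))
//     .DefaultIfEmpty("default text here")
//     .First();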
There are multiple references related to this that you can find on the internet; https://www.informit.com/articles/article.aspx?p=2133373&seqNum=5 is one I could quickly grab.
I learned this from a Zoran Horvat course on Pluralsight; if you have access, please check it out. The course is "Tactical Design Patterns in .NET: Control Flow" and the module is "Null Object and Special Case Patterns".
Speaking of interest in FP, Zoran Horvat also has other courses that help convert OO code or make it more functional. I'm quite excited to respond here because lately I've been looking into FP as well. Good luck!

Should an expression of type ‘dynamic’ behave the same way at run-time as a non-dynamic one of the same run-time type?

Consider the following example program:
using System;
public delegate string MyDelegateType(int integer);
partial class Program
{
static string MyMethod(int integer) { return integer.ToString(); }
static void Main()
{
Func<int, string> func = MyMethod;
// Scenario 1: works
var newDelegate1 = new MyDelegateType(func);
newDelegate1(47);
// Scenario 2: doesn’t work
dynamic dyn = func;
var newDelegate2 = new MyDelegateType(dyn);
newDelegate2(47);
}
}
The first one works as expected — the conversion to MyDelegateType succeeds. The second one, however, throws a RuntimeBinderException with the error message:
Cannot implicitly convert type 'System.Func<int,string>' to 'MyDelegateType'
Is there anything in the C# specification that allows for this behaviour, or is this a bug in Microsoft’s C# compiler?
Good catch, Timwi.
Our support for dynamic method groups is weak. For example, consider this simpler case:
class C
{
public void M() {}
}
class P
{
static void Main()
{
dynamic d = new C();
C c = new C();
Action a1 = c.M; // works
Action a2 = d.M; // fails at runtime
}
}
The d.M is interpreted as a property get (or field access) by the dynamic runtime, and when it resolves as a method group, it fails at runtime.
The same thing is happening in your case, it is just a bit more obscure. When you say MyDelegate x = new MyDelegate(someOtherDelegate); that is treated by the compiler just as if you'd said MyDelegate x = someOtherDelegate.Invoke;. The dynamic runtime piece does not know to do that transformation, and even if it did, it couldn't handle resolving the method group that is the result of the .Invoke portion of the expression.
Is there anything in the C# specification that allows for this behaviour, or is this a bug in Microsoft’s C# compiler?
The spec does not call out that this should be a runtime error, and does imply that it should be handled correctly at runtime; clearly the implementation does not do so. Though it is a shortcoming of the implementation, I wouldn't call this a "bug", because we deliberately chose the behaviour you've discovered. We did not have the resources to make these kinds of expressions work exactly right, so we left them as errors. If we ever get a good way to represent method groups in the dynamic runtime, we might implement it.
Similarly there is no way in dynamic code to represent the notion of "this dynamic thing is a lambda expression where the types of the parameters are to be determined at runtime". If we have a good way to represent those in the future, we might do the work.
Sam talked a bit about this back in 2008; see his article on it:
http://blogs.msdn.com/b/samng/archive/2008/11/02/dynamic-in-c-ii-basics.aspx
I've run into this limitation too. Although I couldn't answer the why better than Eric Lippert, there is a straightforward workaround.
var newDelegate2 = new MyDelegateType(x => dyn(x));
It implicitly gets the static signature from the delegate and the dynamic invocation works without any more info. This works for delegates and, as a bonus, dynamic callable objects.
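Applied to the question's example, Scenario 2 then becomes something like this:
dynamic dyn = func;
var newDelegate2 = new MyDelegateType(x => dyn(x)); // the lambda supplies the static signature
newDelegate2(47); // the dynamic invocation happens inside the lambda when the delegate is called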

Type inference in C#

I know MSDN should probably be the first place to go, and it will be after I get the scoop here. What MSDN would not really provide as part of the technical specification is what I am about to ask now:
1) How exactly is the subject useful in the day-to-day development process?
2) Does it have a correlation in any shape or form with anonymous types inside the CLR?
3) What does it allow that otherwise could not have been done without it?
4) Which .NET features depend upon the subject and could not have existed without it as part of the framework?
To bring a note of specifics to the question, it would be really interesting to know (in pseudo code) how the compiler can actually determine the needed type when a method is called using lambdas and type inference.
I am looking to see the compiler's logical flow for locating that type.
Type inference occurs in many places in C#, at least the following:
The var keyword, which tells the compiler to infer (deduce) the correct type for the variable from what you initialize it with
The ability to leave type parameters out of a generic method call as long as they can be deduced from the parameters
The ability to leave out types from lambda expression arguments, as long as they can be deduced
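A minimal illustration of all three (my own example, not from any particular documentation):
using System;
using System.Collections.Generic;

class TypeInferenceExamples
{
    // A generic method; T can be inferred from the argument.
    static T Identity<T>(T value) { return value; }

    static void Demo()
    {
        // 1) var: the compiler infers Dictionary<string, int> from the initializer.
        var counts = new Dictionary<string, int>();

        // 2) Generic method call: T is inferred as string, so no explicit
        //    Identity<string>("hello") is needed.
        var s = Identity("hello");

        // 3) Lambda parameters: n is inferred as int from the Func<int, int> target type.
        Func<int, int> square = n => n * n;
    }
}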
And to answer your questions:
1) It saves a lot of typing, especially when using the so-called "LINQ methods". Compare for example
List<string> myList = new List<string>();
// ...
IEnumerable<string> result = myList.Where<string>((string s) => s.Length > 0)
.Select<string, string>((string s) => s.ToLower());
versus
var myList = new List<string>();
// ...
var result = myList.Where(s => s.Length > 0).Select(s => s.ToLower());
2) I don't know what you mean by "correlation", but without the var keyword you couldn't have variables refer to anonymous types in a type-safe way (you could always use object or dynamic), which makes it pretty important when using anonymous types.
3) Nothing as far as I can think of. It's only a convenience feature. Of course its absence would make, for instance, the aforementioned anonymous types less useful, but they're mostly a convenience feature as well.
4) I think 3) answers this as well.
1) It is syntactic sugar.
2) Not that I know about.
3) It greatly simplifies the programmer's job.
4) LINQ.

How to do a static cast in C#?

Given a couple types like this:
interface I {}
class C : I {}
How can I do a static type cast? By this I mean: how can I change its type in a way that gets checked at compile time?
In C++ you can do static_cast<I*>(c). In C# the best I can do is create a temporary variable of the alternate type and try to assign it:
var c = new C();
I i = c; // statically checked
But this prevents fluent programming. I have to create a new variable just to do the type check. So I've settled on something like this:
class C : I
{
public I I { get { return this; } }
}
Now I can statically convert C to I by just calling c.I.
Is there a better way to do this in C#?
(In case anyone's wondering why I want to do this, it's because I use explicit interface implementations, and calling one of those from within another member function requires a cast to the interface type first, otherwise the compiler can't find the method.)
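To make that last point concrete, here's a small invented variation of the I and C above, with a method added:
interface I
{
    void DoIt();
}

class C : I
{
    // Explicit interface implementation: only reachable through the interface type.
    void I.DoIt() { }

    public void Other()
    {
        // DoIt();            // does not compile: DoIt is not visible on C itself
        ((I)this).DoIt();     // a cast to the interface is required first
    }
}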
UPDATE
Another option I came up with is an object extension:
public static class ObjectExtensions
{
[DebuggerStepThrough]
public static T StaticTo<T>(this T o)
{
return o;
}
}
So ((I)c).Doit() could also be c.StaticTo<I>().Doit(). Hmm...probably will still stick with the simple cast. Figured I'd post this other option anyway.
Simply cast it:
(I)c
Edit Example:
var c = new C();
((I)c).MethodOnI();
Write an extension method that uses the trick you mentioned in your UPDATE:
public static class ObjectExtensions
{
public static T StaticCast<T>(this T o) => o;
}
To use:
things.StaticCast<IEnumerable>().GetEnumerator();
If things is, e.g., IEnumerable<object>, this compiles. If things is object, it fails.
// Compiles (because IEnumerable<char> is known at compiletime
// to be IEnumerable too).
"adsf".StaticCast<IEnumerable>().GetEnumerator();
// error CS1929: 'object' does not contain a definition for 'StaticCast'
// and the best extension method overload
// 'ObjectExtensions.StaticCast<IEnumerable>(IEnumerable)'
// requires a receiver of type 'IEnumerable'
new object().StaticCast<IEnumerable>().GetEnumerator();
Why Use a Static Cast?
One common practice during refactoring is to go ahead and make your changes and then verify that your changes have not caused any regressions. You can detect regressions in various ways and at various stages. For example, some types of refactoring may result in API changes/breakage and require refactoring other parts of the codebase.
If one part of your code expects to receive a type (ClassA) that should be known at compiletime to implement an interface (IInterfaceA) and that code wants to access interface members directly, it may have to cast down to the interface type to, e.g., access explicitly implemented interface members. If, after refactoring, ClassA no longer implements IInterfaceA, you get different types of errors depending on how you cast down to the interface:
C-style cast: ((IInterfaceA)MethodReturningClassA()).Act(); would suddenly become a runtime cast and throw a runtime error.
Assigning to an explicitly-typed variable: IInterfaceA a = MethodReturningClassA(); a.Act(); would raise a compiletime error.
Using the static_cast<T>-like extension method: MethodReturningClassA().StaticCast<IInterfaceA>().Act(); would raise a compiletime error.
If you expected your cast to be a downcast and to be verifiable at compiletime, then you should use a casting method that forces compiletime verification. This makes it clear that the code's original developer intended to write typesafe code. And writing typesafe code has the benefit of being more verifiable at compiletime. By doing a little bit of work to clarify your intention to opt into typesafety to other developers, to yourself, and to the compiler, you magically get the compiler's help in verifying your code and can catch the repercussions of refactoring earlier (at compiletime) rather than later (such as a runtime crash if your code didn't happen to have full test coverage).
var c = new C();
I i = c; // statically checked
is equivalent to
I i = new C();
If you're really just looking for a way to see if an object implements a specific type, you should use as.
I i = whatever as I;
if (i == null) // It wasn't
Otherwise, you just cast it. (There aren't really multiple types of casting in .NET like there are in C++ -- unless you get deeper than most people need to, but then it's more about WeakReference and such things.)
I i = (I)c;
If you're just looking for a convenient way to turn anything implementing I into an I, then you could use an extension method or something similar.
public static I ToI(this I @this)
{
return @this;
}

What advantages does using var have over the explicit type in C#? [duplicate]

This question already has answers here:
Closed 12 years ago.
Possible Duplicates:
What’s the point of the var keyword?
Use of var keyword in C#
I understand how IEnumerable<...> for a datatype can make the code a little less readable or how nested generics can seem a little daunting. But aside from code readability, are there advantages to using var instead of the explicit type? It seems like by using the explicit type, you'd better convey what the variable is capable of because you know what it is.
If it's a workplace coding standard, I use it for the sake of teamwork. In my own projects, however, I prefer to avoid the use of var.
The point of var is to allow anonymous types, without it they would not be possible and that is the reason it exists. All other uses I consider to be lazy coding.
Using var as the iterator variable for a foreach block is more type safe than explicit type names. For example
class Item {
public string Name;
}
foreach ( Item x in col ) {
Console.WriteLine(x.Name);
}
This code could compile without warnings and still cause a runtime casting error. This is because the foreach loop works with both IEnumerable and IEnumerable<T>. The former returns values typed as object and the C# compiler just does the casting to Item under the hood for you. Hence it's unsafe and can lead to runtime errors because an IEnumerable can contain objects of any type.
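For example, with a non-generic collection (my own illustration, reusing the Item class above and System.Collections):
// ArrayList only implements the non-generic IEnumerable, so its items come out as object.
ArrayList col = new ArrayList { new Item { Name = "a" }, "not an Item" };

// Compiles without warnings, but throws InvalidCastException at runtime on the second
// element, because the compiler silently casts each object to Item behind the scenes.
foreach ( Item x in col ) {
Console.WriteLine(x.Name);
}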
On the other hand the following code will only do one of the following
Not compile because x is typed to object or another type which does not have a Name field / property
Compile and be guaranteed to not have a runtime cast error while enumerating.
The type of 'x' will be object in the case of IEnumerable and T in the case of IEnumerable<T>. No casting is done by the compiler.
foreach ( var x in col ) {
Console.WriteLine(x.Name);
}
I like it, especially in unit tests, because as the code evolves I only have to fix up the right-hand side of the declaration/assignment. Obviously I also have to update the code that uses it to reflect the changes, but at the point of declaration I only have to make one change.
It produces no meaningful change in the emitted IL. It is merely a code style preference.
I, for one, like it, especially when dealing with types that have long, generic, almost unreadable names such as Dictionary<string, IQueryable<TValue1, TValue2>>[].
The var is just a syntactic sugar. It is always known at compile time what type the variable is. There are no other advantages of using the var keyword.
There aren't any real differences. Some people suggest using the explicit type because it can make maintaining the code easier. However, people that push for var have the stance that "if we use var, we are forced to use good naming conventions".
Of course if you use vars with the intention of having good naming conventions and that breaks down, it's more painful down the road. (IMO)
public interface IAwesome { string Whatever { get; } }
public class SoCool : IAwesome { public string Whatever { get; } }
public class HeyHey
{
public SoCool GetSoCool() { return new SoCool(); }
public void Processy()
{
var blech = GetSoCool();
IAwesome ohYeah = GetSoCool();
// blech is statically typed as SoCool, while ohYeah is statically typed as IAwesome.
}
}
Besides the readability aspect you mentioned, 'var' also has the benefit of reducing the probability that a trivial code change will break other parts of your code. If you rename a type, for example, or if you switch to a different type that is mostly compatible with the former type (e.g. changing from Foo[] to IEnumerable<Foo>), you have much less work to do to get your code back to a compilable state.
You can abstract away the mental complexity of the technicalities to focus purely on the problem domain in your model. You do have to make sure your variables are named meaningfully, though.
