Looking through System.Linq.Enumerable in DotPeek I notice that some methods are flavoured with a [__DynamicallyInvokable] attribute.
What role does this attribute play? Is it something added by DotPeek or does it play another role, perhaps informing the compiler on how best to optimise the methods?
It is undocumented, but it looks like one of the optimizations in .NET 4.5. It appears to be used to prime the reflection type info cache, making subsequent reflection code on common framework types run faster. There's a comment about it in the Reference Source for System.Reflection.Assembly.cs, RuntimeAssembly.Flags property:
// Each blessed API will be annotated with a "__DynamicallyInvokableAttribute".
// This "__DynamicallyInvokableAttribute" is a type defined in its own assembly.
// So the ctor is always a MethodDef and the type a TypeDef.
// We cache this ctor MethodDef token for faster custom attribute lookup.
// If this attribute type doesn't exist in the assembly, it means the assembly
// doesn't contain any blessed APIs.
Type invocableAttribute = GetType("__DynamicallyInvokableAttribute", false);
if (invocableAttribute != null)
{
    Contract.Assert(((MetadataToken)invocableAttribute.MetadataToken).IsTypeDef);
    ConstructorInfo ctor = invocableAttribute.GetConstructor(Type.EmptyTypes);
    Contract.Assert(ctor != null);
    int token = ctor.MetadataToken;
    Contract.Assert(((MetadataToken)token).IsMethodDef);
    flags |= (ASSEMBLY_FLAGS)token & ASSEMBLY_FLAGS.ASSEMBLY_FLAGS_TOKEN_MASK;
}
There are no further hints as to what a "blessed API" might be, although it is clear from the context that this only works on types in the framework itself. There ought to be additional code somewhere that checks for the attribute applied to types and methods. No idea where that is located, but given that it would need a view of all .NET types to have a shot at caching, I can only think of Ngen.exe.
I found that it's used in the Runtime*Info.IsNonW8PFrameworkAPI() suite of internal methods. Having this attribute placed on a member makes IsNonW8PFrameworkAPI() return false for it, which makes the member available in WinRT applications and suppresses the "The API '...' cannot be used on the current platform." exception.
Profiler writers should place this attribute on members emitted by their profiler into framework assemblies, if they want to access them under WinRT.
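One way to see which members carry the attribute from user code is a quick reflection query. This is only a sketch (assuming .NET 4.5, where CustomAttributeData.AttributeType is available); it matches the attribute by name, since the attribute type itself is internal to the framework assembly:

using System;
using System.Linq;
using System.Reflection;

class Program
{
    static void Main()
    {
        // The attribute type is internal, so match it by name via
        // GetCustomAttributesData() instead of typeof().
        var decorated = typeof(Enumerable).GetMethods()
            .Where(m => m.GetCustomAttributesData().Any(a =>
                a.AttributeType.Name == "__DynamicallyInvokableAttribute"))
            .Select(m => m.Name)
            .Distinct()
            .OrderBy(n => n);

        foreach (var name in decorated)
            Console.WriteLine(name);
    }
}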
Sorry if I answer late. This attribute is used to call unmanaged code from managed code. It is used to separate the managed language from the unmanaged language, a barrier between two worlds. It is also used for security reasons, to make unmanaged code inaccessible to reflection.
My understanding so far is that Reflection is used to retrieve the names of Types and their members from an assembly via its metadata. I've come across a few examples of Reflection in action identical to the following example.
class Person
{
    public string Name { get; set; }
}

static void Main(string[] args)
{
    Person person = new Person();
    person.Name = "John";
    Console.WriteLine(person.GetType().GetProperty("Name").GetValue(person)); // John
    person.Name = "Mary";
    Console.WriteLine(person.GetType().GetProperty("Name").GetValue(person)); // Mary
}
I understand how Reflection can get names of Types, members, etc. as this is what's stored in an assembly's metadata, but how does Reflection retrieve the value associated with it? This is dynamic data that changes during a program's execution (as shown in the example), which an assembly's metadata doesn't contain (right?).
Would be grateful for clarification on how Reflection retrieves actual values and whether this is actually the case. Please correct anything I've said that's not correct!
The short answer is that reflection is a feature of the runtime, so it has access to runtime information. Were it a separate library with no runtime "hooks", you're right, it wouldn't be able to get the values of properties at runtime, or make calls, or anything else that wouldn't be essentially observable from the assembly file on disk.
Long answer where I prove this to myself:
Microsoft makes available a reference version of the C# source code used to write the base class libraries for .NET Framework. If we look at the PropertyInfo.GetValue(object) method you use in your example, it's defined here. Following the trail of calls we eventually get to an abstract method of the same name but with different parameters. Further down in the source file is the implementing class, RuntimePropertyInfo, and in its override of GetValue we see that it is implemented by calling the property's get accessor (under the hood, properties are just collections of methods with certain signature conventions; GetGetMethod is a funny name meaning "get me the method defined as get for the current property"):
MethodInfo m = GetGetMethod(true);
if (m == null)
    throw new ArgumentException(System.Environment.GetResourceString("Arg_GetMethNotFnd"));
return m.Invoke(obj, invokeAttr, binder, index, null);
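You can see this equivalence from user code. Here is a minimal sketch (reusing the Person class from the question) showing that PropertyInfo.GetValue and invoking the property's get accessor produce the same result:

using System;
using System.Reflection;

class Person
{
    public string Name { get; set; }
}

class Program
{
    static void Main()
    {
        var person = new Person { Name = "John" };
        PropertyInfo prop = typeof(Person).GetProperty("Name");

        // The convenient call...
        Console.WriteLine(prop.GetValue(person));        // John

        // ...and what it boils down to: invoking the get accessor method.
        MethodInfo getter = prop.GetGetMethod();
        Console.WriteLine(getter.Invoke(person, null));  // John
    }
}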
If we do a similar spelunking journey on MethodInfo.Invoke, we eventually reach RuntimeMethodHandle.InvokeMethod, which is declared:
[MethodImplAttribute(MethodImplOptions.InternalCall)]
internal extern static object InvokeMethod(object target, object[] arguments, Signature sig, bool constructor);
The extern keyword on a method means "I don't have the body for this method in C#; look elsewhere for it". For most users of C# this means they're using DllImport to reference native code, but this method has a different attribute: MethodImpl, with the MethodImplOptions.InternalCall enum value. This is the C# way of telling the compiler, "I don't have the body, but the actual runtime itself does".
So at the end of our journey we reach the point where the Reflection API relies on the runtime itself. Of course, the runtime and the base class libraries have to be developed in tandem to ensure these sync-up points exist.
Interestingly, the actual standard for .NET - ECMA-335 - makes the Reflection API optional, when the implementation (meaning the runtime + base class libraries) adheres to the bare minimum "Kernel Profile" (cite: 6th Ed., §IV.4.1.3). So actually there are implementations of .NET where you're not explicitly allowed to inspect the runtime like this, but given the reliance some kinds of applications have on reflection, all big implementations (original .NET Framework, .NET Core / the new .NET, and Mono) provide it.
The TL;DR is that the CLR (Common Language Runtime) keeps track of all your objects in memory, and when you call GetValue it retrieves the value for you.
Note that this post is a little old, but it gives a rough idea of what an object looks like in memory. Please refer to the part that says Object Instance. An object laid out in memory contains the following four items:
Syncblk
TypeHandle
Instance Fields
String Literals
When you call GetValue, using an OBJECTREF to the object instance, it finds the starting location where your instance fields are stored. From there it can figure out which field to retrieve based on the type.
I believe the actual call is FieldDesc::GetInstanceField.
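As a small illustration of the instance-fields part, here is a sketch (it relies on the C# compiler's "<Name>k__BackingField" naming convention for auto-properties, which is an implementation detail, not a guarantee): the value of an auto-property can be read straight out of the object's field area with FieldInfo.GetValue:

using System;
using System.Reflection;

class Person
{
    public string Name { get; set; }   // the compiler generates a hidden backing field
}

class Program
{
    static void Main()
    {
        var person = new Person { Name = "Mary" };

        // "<Name>k__BackingField" is the compiler-generated instance field
        // that actually holds the property's value inside the object.
        FieldInfo backing = typeof(Person).GetField(
            "<Name>k__BackingField",
            BindingFlags.Instance | BindingFlags.NonPublic);

        Console.WriteLine(backing.GetValue(person));   // Mary
    }
}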
I am possibly asking the impossible but I shall ask anyway. From the following:
public SomeClassConstructor(SomeOtherClass someOtherClass, string someString){
...
}
Is it possible to access the arguments so that they can be iterated (for example some sort of reflection to access an IEnumerable<object> that contains the arguments)?
Note: The params[] collection is not an answer in this situation. Other restrictions are that this is to be used in a WinForms environment; .Net 4.5 is acceptable.
EDIT: In response to DavidG's comment, I am after the objects themselves (i.e. values and names). The reason is that I have a requirement to log (serialized) the arguments when a form is opened (when a UAT flag is set in app.config!). Large objects have their serialization overridden to return simple strings so as not to bloat memory.
No, you can't access parameter values via reflection. You can get the names, types, attributes etc - but not the values. That's true of methods, constructors, property setters, indexers etc.
You could potentially do it in a debugger API, but that's almost never the right approach.
For logging, you should either just do this manually:
Log("someOtherClass={0}, someString={1}", someOtherClass, someString);
or look into AOP to inject calls automatically - look at PostSharp for example.
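If you want to avoid hard-coding the names in the log call, a middle ground is possible: reflection can supply the parameter names of the current constructor, while the values are still listed by hand. A sketch, reusing the types from the question:

using System;
using System.Linq;
using System.Reflection;

public class SomeOtherClass { }

public class SomeClass
{
    public SomeClass(SomeOtherClass someOtherClass, string someString)
    {
        // Reflection gives us the parameter names of the current constructor...
        string[] names = MethodBase.GetCurrentMethod()
                                   .GetParameters()
                                   .Select(p => p.Name)
                                   .ToArray();

        // ...but the values still have to be listed manually, in the same order.
        object[] values = { someOtherClass, someString };

        Console.WriteLine(string.Join(", ",
            names.Zip(values, (n, v) => n + "=" + v)));
    }
}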
Does anyone know how to add Code Contracts Ensures in ReSharper ExternalAnnotations? It's not there in the latest v7.1.3, nor in the latest v8 EAP, nor in any of the custom XMLs floating around.
Specifically it should detect if a method does not return a null: Contract.Ensures(Contract.Result<T>() != null);
If you're attempting to simply appease the analysis engine, the simplest thing to use is [NotNull] in front of the method declaration. The Contract Annotations feature to which you posted a link above is a more powerful mechanism for defining relationships between input parameters and the return value, e.g., [ContractAnnotation("null => null")].
However, explicitly analyzing for a Contract.Ensures statement is an entirely different proposition, as no automatic analysis can be defined for this statement via [ContractAnnotation] or any other ReSharper annotation attribute.
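For the simple case, here is a sketch of what that can look like with the JetBrains.Annotations attributes (assuming the annotations assembly, or the attribute source ReSharper can generate, is referenced in the project):

using JetBrains.Annotations;

public class Repository
{
    // Plays the role of Contract.Ensures(Contract.Result<string>() != null)
    // for ReSharper's analysis: the result is never null.
    [NotNull]
    public string GetName()
    {
        return "name";
    }

    // The more expressive form: relates the input to the return value.
    [ContractAnnotation("null => null; notnull => notnull")]
    public string NormalizeOrNull(string input)
    {
        return input == null ? null : input.Trim();
    }
}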
The end-to-end test of my application includes loading the release DLLs by hand.
During testing I always have the following loaded:
- the NUnit shadow copies of n debug assemblies
- the post-build-event copies of n release assemblies
Even though I am sure that the two copies are from the same build generation (version), casting between my reflection-loaded types fails.
To give a little bit of context, here is some pseudocode:
private HookingHelper globalhooker;
private Tools.ISomething globalmockery;

[TestFixtureSetUp]
public void TestFixtureSetUp()
{
    globalhooker = new HookingHelper();
    globalhooker.Loadfrom(@"c:\postbuildcopy.dll");
    globalmockery = MockRepository.GenerateMock<Tools.ISomething>();
    globalhooker.SetViaReflection<Tools.ISomething>("nameofsomething", globalmockery);
}
I have a helper class which uses Loadfrom to get at a static inside an assembly. Before I call anything I have to inject a mock.
This mock is created against the shadow copy of a Tools library in its debug version, since NUnit creates it.
The loaded library is the release version, which is important to me since I want to test as close to the real environment as possible.
When I try to inject using reflection I have to use FieldInfo.SetValue(...); the call looks something like this:
public static void ReplaceFieldPublicStatic<T>(Type type, string fieldname, T obj)
{
    FieldInfo field = AssemblyHelper.GetFieldInfoPublicStatic(type, fieldname);
    field.SetValue((T)obj, obj);
}
Sometimes the reflection works and sometimes my types cannot be cast into each other.
The error is an ArgumentException generated by FieldInfo.SetValue(...).
When I intercept the exception and investigate why field.FieldType != typeof(T), only the GetHashCode() call gives a different value.
I think there is a little bit of randomness involved.
Can I force the type cast? Is that even wise?
Is there something I need to do while building my projects that I am missing?
Even though I am sure that the two copies are from the same build generation (version), casting between my reflection-loaded types fails.
Yes - if two types have come from two different Assembly objects, they are different types as far as the CLR is concerned. The assemblies could have been loaded from the exact same byte sequences, but they're still distinct assemblies.
Basically you'll need to pick one Assembly to use for each type.
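A small sketch of what that means in practice (the paths and the Tools.ISomething name are hypothetical, taken from the pseudocode above):

using System;
using System.Reflection;

class Program
{
    static void Main()
    {
        // Two physical copies of "the same" assembly...
        Assembly debugCopy = Assembly.LoadFrom(@"c:\shadowcopy\Tools.dll");
        Assembly releaseCopy = Assembly.LoadFrom(@"c:\postbuildcopy\Tools.dll");

        Type t1 = debugCopy.GetType("Tools.ISomething");
        Type t2 = releaseCopy.GetType("Tools.ISomething");

        Console.WriteLine(t1.FullName == t2.FullName);  // True  - same name
        Console.WriteLine(t1 == t2);                    // False - different Assembly => different Type
        Console.WriteLine(t1.IsAssignableFrom(t2));     // False - so casts and FieldInfo.SetValue fail
    }
}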
I am designing a helper method that does lazy loading of certain objects for me; calling it looks like this:
public override EDC2_ORM.Customer Customer {
    get { return LazyLoader.Get<EDC2_ORM.Customer>(
        CustomerId, _customerDao, () => base.Customer, (x) => Customer = x); }
    set { base.Customer = value; }
}
When I compile this code I get the following warning:
Warning 5: Access to member 'EDC2_ORM.Billing.Contract.Site' through a 'base' keyword from an anonymous method, lambda expression, query expression, or iterator results in unverifiable code. Consider moving the access into a helper method on the containing type.
What exactly is the complaint here and why is what I'm doing bad?
"base.Foo" for a virtual method will make a non-virtual call on the parent definition of the method "Foo". Starting with CLR 2.0, the CLR decided that a non-virtual call on a virtual method can be a potential security hole and restricted the scenarios in which in can be used. They limited it to making non-virtual calls to virtual methods within the same class hierarchy.
Lambda expressions put a kink in the process. Lambda expressions often generate a closure under the hood which is a completely separate class. So the code "base.Foo" will eventually become an expression in an entirely new class. This creates a verification exception with the CLR. Hence C# issues a warning.
Side Note: The equivalent code will work in VB. In VB for non-virtual calls to a virtual method, a method stub will be generated in the original class. The non-virtual call will be performed in this method. The "base.Foo" will be redirected into "StubBaseFoo" (generated name is different).
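In C# you can write that stub yourself, which is exactly what the warning suggests ("Consider moving the access into a helper method on the containing type"). A sketch based on the property from the question, with its identifiers reused:

public override EDC2_ORM.Customer Customer {
    get {
        // The lambda calls an ordinary instance method instead of touching
        // base.Customer directly, so the generated closure class never
        // contains a 'base' access.
        return LazyLoader.Get<EDC2_ORM.Customer>(
            CustomerId, _customerDao, () => GetBaseCustomer(), (x) => Customer = x);
    }
    set { base.Customer = value; }
}

// Helper on the containing type: the non-virtual call to the base accessor
// happens here, inside the class itself, which keeps the code verifiable.
private EDC2_ORM.Customer GetBaseCustomer() {
    return base.Customer;
}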
I suspect the problem is that you're basically saying, "I don't want to use the most derived implementation of Customer - I want to use this particular one" - which you wouldn't be able to do normally. You're allowed to do it within a derived class, and for good reasons, but from other types you'd be violating encapsulation.
Now, when you use an anonymous method, lambda expression, query expression (which basically uses lambda expressions) or iterator block, sometimes the compiler has to create a new class for you behind the scenes. Sometimes it can get away with creating a new method in the same type for lambda expressions, but it depends on the context. Basically if any local variables are captured in the lambda expression, that needs a new class (or indeed multiple classes, depending on scope - it can get nasty). If the lambda expression only captures the this reference, a new instance method can be created for the lambda expression logic. If nothing is captured, a static method is fine.
So, although the C# compiler knows that really you're not violating encapsulation, the CLR doesn't - so it treats the code with some suspicion. If you're running under full trust, that's probably not an issue, but under other trust levels (I don't know the details offhand) your code won't be allowed to run.
Does that help?
copy/pasting from here:
Codesta: C#/CLR has 2 kinds of code, safe and unsafe. What is it trying to provide and how did this affect the virtual machine?
Peter Hallam: For C# the terms are safe and unsafe. The CLR uses the terms verifiable and unverifiable.
When running verifiable code the CLR can enforce security policies; the CLR can prevent verifiable code from doing things that it doesn't have permission to do. When running potentially malicious code, code that was downloaded from the internet for example, the CLR will only run verifiable code, and will ensure that the untrusted code doesn't access anything that it doesn't have permission to access.
The use of standard C style pointers creates unverifiable code. The CLR supports C style pointers natively. Once you've got a C style pointer you can read or write to any byte of memory in the process, so the runtime cannot enforce security policy. Actually it could but the performance penalty would make it impractical.
Now, that does not fully answer your question (i.e. WHY is this now unverifiable code), but at least it explains that "unverifiable" is the CLR-term for "unsafe". I assume that anonymous methods and base classes result in some funky pointer-magic internally.
By the way: I think the code snippet does not match the warning message. The code is talking about a Customer, but the warning is about EDC2_ORM.Billing.Contract.Site. Is it possible to post the actual code the warning is generated for? Maybe you have something else in that code that would better explain why you get the warning.