Detecting Missing Method - C#

Assume I have 2 dlls maintained by 2 different teams:
Team1.dll (v1.0)
public class Foo
{
    public int GetValue() { return 3; }
}
Team2.dll (v1.0)
public class Bar
{
    public int IncFooValue(Foo foo) { return foo.GetValue() + 1; }
}
When Team1.dll (v1.0) and Team2.dll (v1.0) are executed together, everything is fine. But assume Team1.dll is changed, the method Foo.GetValue() is removed (v1.1), and the new dll is dropped next to Team2.dll (all without rebuilding Team2.dll). If executed, you would get a MissingMethodException.
Question: How could I detect if Team1.dll is no longer compatible with Team2.dll without executing them?
For example, something like:
Foreach Class in Team2.dll
    Foreach Method in Class
        Foreach Instruction in Method
            If Instruction not exists in Team1.dll
                Throw "Does not exist"

It is possible to detect missing methods by forcing JIT compilation of your methods.
For that, you just need to enumerate all methods in your assemblies using reflection and then call RuntimeHelpers.PrepareMethod(method.MethodHandle) for each of them.
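A minimal sketch of the non-generic case, with filters that are my own assumptions, might look like this:
using System;
using System.Reflection;
using System.Runtime.CompilerServices;

static void PreJitAssembly(Assembly assembly)
{
    foreach (Type type in assembly.GetTypes())
    foreach (MethodInfo method in type.GetMethods(
        BindingFlags.DeclaredOnly | BindingFlags.Instance | BindingFlags.Static |
        BindingFlags.Public | BindingFlags.NonPublic))
    {
        // Abstract methods have no body to jit; open generic methods/types
        // need concrete type arguments (see below), so skip both here.
        if (method.IsAbstract || method.ContainsGenericParameters) continue;

        // Forces JIT compilation without running the method; a removed
        // dependency surfaces as an exception here instead of at first call.
        RuntimeHelpers.PrepareMethod(method.MethodHandle);
    }
}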
The tricky part here is dealing with generics. If your method or class contains generics, you also need to specify the concrete types for which your generic method will be jitted:
Type[] classGenericArgs = ...;
Type[] methodGenericArgs = ...;
Type[] allGenericArgs = classGenericArgs.Concat(methodGenericArgs).ToArray();
RuntimeHelpers.PrepareMethod(method.MethodHandle, allGenericArgs.Select(p => p.TypeHandle).ToArray());
If your generic arguments also have constraints, you need to make sure that the types you choose satisfy these constraints, otherwise, jitting will fail.
Satisfying constraints can be difficult, so I wrote a Jitter class to automate this. It forces loading of all libraries referenced by your application into memory and runs jitting, automatically substituting appropriate concrete types for your generic parameters (where possible). The usage is simple, just specify which assemblies you want jitted:
Jitter.RunJitting(asm => asm.FullName.StartsWith("My.Company.Namespace"));
Make sure your .NET app references all assemblies you want to verify.
You can find the source code for Jitter here. It is far from perfect, but is able to jit 99.8% of all methods in our codebase and detect broken package dependencies for us.

Check whether the type contains a definition of the method:
if (foo.GetType().GetMethod("GetValue") != null)
{
    return foo.GetValue() + 1;
}
As others have suggested, you may be better off pursuing another strategy (better versioning/backwards compatibility, or handling the exception on the calling end).

Related

Delegate does not contain a definition for 'CreateDelegate'

Using Unity 2018 and 2017 with the same problem when building for .NET:
error CS0117: 'Delegate' does not contain a definition for 'CreateDelegate'
This is the method:
private V CreateDelegate<V>(MethodInfo method, Object target) where V : class
{
    var ret = (Delegate.CreateDelegate(typeof(V), target, method) as V);
    if (ret == null)
    {
        throw new ArgumentException("Unable to create delegate for method called " + method.Name);
    }
    return ret;
}
Building for UWP.
Using System.Linq.
I tried with "MethodInfo" but maybe some parameters are wrong.
Is this method not available?
Which platform/runtime are you targeting? I don't know about Mono, but .NET Standard 1.x doesn't support Delegate.CreateDelegate. Always keep in mind that you're writing your code against a limited subset of the .NET Framework. Also keep in mind that your code will inevitably be AOT-compiled on some platforms (il2cpp, iOS, etc.), so some reflection/emit features will be unavailable.
Note: AOT means ahead-of-time, meaning your code is compiled to machine instructions rather than an intermediate language. Reflection is when you use the code itself as data, so for example you can get a list of the properties a class defines. Emit means generating code at runtime. If you don't understand what those are, you should probably do some studying. It's well worth the effort in the long run.
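As a tiny illustration of "code as data" (unrelated to the build error itself), reflection lets you do things like this:
using System;
using System.Reflection;

// List the public properties a type defines, at runtime.
foreach (PropertyInfo p in typeof(DateTime).GetProperties())
    Console.WriteLine(p.Name); // Day, Month, Year, ...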
1. Your return type is a class, not a delegate.
where V : class
So this method doesn't even make sense. You're going to get an invalid cast exception.
2. CreateDelegate takes 2 parameters, not 3.
I'm not sure what purpose target serves here, so I can't even guess what you're trying to do.

How are C# Generics implemented?

I had thought that Generics in C# were implemented such that a new class/method/what-have-you was generated, either at run-time or compile-time, when a new generic type was used, similar to C++ templates (which I've never actually looked into, so I could very well be wrong and would gladly accept correction).
But in my coding I came up with an exact counterexample:
static class Program {
    static void Main()
    {
        Test testVar = new Test();
        GenericTest<Test> genericTest = new GenericTest<Test>();
        int gen = genericTest.Get(testVar);
        RegularTest regTest = new RegularTest();
        int reg = regTest.Get(testVar);
        if (gen == ((object)testVar).GetHashCode())
        {
            Console.WriteLine("Got Object's hashcode from GenericTest!");
        }
        if (reg == testVar.GetHashCode())
        {
            Console.WriteLine("Got Test's hashcode from RegularTest!");
        }
    }

    class Test
    {
        public new int GetHashCode()
        {
            return 0;
        }
    }

    class GenericTest<T>
    {
        public int Get(T obj)
        {
            return obj.GetHashCode();
        }
    }

    class RegularTest
    {
        public int Get(Test obj)
        {
            return obj.GetHashCode();
        }
    }
}
Both of those console lines print.
I know that the actual reason this happens is that the virtual call to Object.GetHashCode() doesn't resolve to Test.GetHashCode() because the method in Test is marked as new rather than override. Therefore, I know if I used "override" rather than "new" on Test.GetHashCode() then the return of 0 would polymorphically override the method GetHashCode in object and this wouldn't be true, but according to my (previous) understanding of C# generics it wouldn't have mattered because every instance of T would have been replaced with Test, and thus the method call would have statically (or at generic resolution time) been resolved to the "new" method.
So my question is this: How are generics implemented in C#? I don't know CIL bytecode, but I do know Java bytecode so I understand how Object-oriented CLI languages work at a low level. Feel free to explain at that level.
As an aside, I thought C# generics were implemented that way because everyone always calls the generic system in C# "True Generics," compared to the type-erasure system of Java.
In GenericTest<T>.Get(T), the C# compiler has already picked that object.GetHashCode should be called (virtually). There's no way this will resolve to the "new" GetHashCode method at runtime (which will have its own slot in the method-table, rather than overriding the slot for object.GetHashCode).
From Eric Lippert's What's the difference, part one: Generics are not templates, the issue is explained (the setup used is slightly different, but the lessons translate well to your scenario):
This illustrates that generics in C# are not like templates in C++. You can think of templates as a fancy-pants search-and-replace mechanism. [...] That's not how generic types work; generic types are, well, generic. We do the overload resolution once and bake in the result. [...] The IL we've generated for the generic type already has the method it's going to call picked out. The jitter does not say "well, I happen to know that if we asked the C# compiler to execute right now with this additional information then it would have picked a different overload. Let me rewrite the generated code to ignore the code that the C# compiler originally generated..." The jitter knows nothing about the rules of C#.
And a workaround for your desired semantics:
Now, if you do want overload resolution to be re-executed at runtime based on the runtime types of the arguments, we can do that for you; that's what the new "dynamic" feature does in C# 4.0. Just replace "object" with "dynamic" and when you make a call involving that object, we'll run the overload resolution algorithm at runtime and dynamically spit code that calls the method that the compiler would have picked, had it known all the runtime types at compile time.
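Applied to the question's code, a hedged sketch of that workaround (my adaptation, not the answer's code; it assumes Test is accessible to this class, e.g. both are nested in Program as in the question, and that the Microsoft.CSharp runtime binder is referenced) would be:
class DynamicTest<T>
{
    public int Get(T obj)
    {
        dynamic d = obj;        // member lookup is deferred to runtime
        return d.GetHashCode(); // binds against the runtime type Test, so the
                                // hiding GetHashCode() is chosen and 0 comes back
    }
}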

Failing to cast namespaceA.objectA to namespaceA.objectA

The end-to-end test of my application includes loading the release dlls by hand.
During testing I always have the following loaded:
- NUnit shadow copy of n debug assemblies
- Post-build-event copy of n release assemblies
Even if I am sure that the two copies are from the same build generation (version), casting of my reflection-loaded types fails.
To give a little bit of context, here is some pseudo code:
private HookingHelper globalhooker;
private Tools.ISomething globalmockery;

TestFixtureSetUp()
{
    globalhooker = new HookingHelper();
    globalhooker.LoadFrom(@"c:\postbuildcopy.dll");
    globalmockery = MockRepository.Generate<Tools.ISomething>();
    globalhooker.SetViaReflection<Tools.ISomething>("nameofsomething", globalmockery);
}
I have a helper class which uses LoadFrom to get at a static field inside an assembly. Before I call anything, I have to inject a mock.
This mock is created using the shadow copy of a tools library in its debug version, since NUnit creates it.
The loaded library is the release version, which is important to me since I want to test as close to the real environment as possible.
When I inject using reflection I have to use FieldInfo.SetValue(...); the call looks something like this:
public static void ReplaceFieldPublicStatic<T>(Type type, string fieldname, T obj)
{
    FieldInfo field = AssemblyHelper.GetFieldInfoPublicStatic(type, fieldname);
    field.SetValue((T)obj, obj);
}
Sometimes the reflection works and sometimes my types cannot be cast into each other.
The error is an ArgumentException thrown by FieldInfo.SetValue(...).
When I intercept the exception and investigate why field.FieldType != typeof(T), only the GetHashCode() call gives a different value.
I think there is a little bit of randomness involved.
Can I force the typecast? Is that even wise?
Is there something I need to do while building my projects that I am missing?
Even if I am sure that the two copies are from the same build generation (version), casting of my reflection-loaded types fails.
Yes - if two types have come from two different Assembly objects, they are different types as far as the CLR is concerned. The assemblies could have been loaded from the exact same byte sequences, but they're still distinct assemblies.
Basically you'll need to pick one Assembly to use for each type.
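Before calling SetValue, you can make the situation explicit by comparing the Type objects themselves rather than their names; a diagnostic sketch (names invented) under that assumption:
using System;

static void AssertSameRuntimeType(Type fieldType, Type valueType)
{
    if (fieldType == valueType) return; // same runtime type: SetValue will succeed

    // Same full name but a different Type object means the assembly was
    // loaded twice (e.g. once via NUnit's shadow copy, once via LoadFrom).
    Console.WriteLine("Names equal: " + (fieldType.FullName == valueType.FullName));
    Console.WriteLine("Field type assembly: " + fieldType.Assembly.Location);
    Console.WriteLine("Value type assembly: " + valueType.Assembly.Location);
    throw new InvalidOperationException(
        "Types come from different Assembly instances; load the type from one place only.");
}
Called as AssertSameRuntimeType(field.FieldType, typeof(T)) just before field.SetValue(...), this turns the intermittent ArgumentException into an explicit explanation.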

'Delegate 'System.Action' does not take 0 arguments.' Is this a C# compiler bug (lambdas + two projects)?

Consider the code below. Looks like perfectly valid C# code, right?
//Project B
using System;
public delegate void ActionSurrogate(Action addEvent);
//public delegate void ActionSurrogate2();
// Using ActionSurrogate2 instead of System.Action results in the same error
// Using a dummy parameter (Action<double, int>) results in the same error
// Project A
public static class Class1 {
    public static void ThisWontCompile() {
        ActionSurrogate b = (a) =>
        {
            a(); // Error given here
        };
    }
}
I get a compiler error 'Delegate 'Action' does not take 0 arguments.' at the indicated position using the (Microsoft) C# 4.0 compiler. Note that you have to declare ActionSurrogate in a different project for this error to manifest.
It gets more interesting:
// Project A, File 1
public static class Class1 {
    public static void ThisWontCompile() {
        ActionSurrogate b = (a) => { a(); /* Error given here */ };
        ActionSurrogate c = (a) => { a(); /* Error given here too */ };
        Action d = () => { };
        ActionSurrogate e = (a) => { a(); /* No error is given here */ };
    }
}
Did I stumble upon a C# compiler bug here?
Note that this is a pretty annoying bug for someone who likes using lambdas a lot and is trying to create a data structures library for future use... (me)
EDIT: removed erroneous case.
I copied and stripped my original project down to the minimum to make this happen. This is literally all the code in my new project.
FINAL UPDATE:
The bug has been fixed in C# 5. Apologies again for the inconvenience, and thanks for the report.
Original analysis:
I can reproduce the problem with the command-line compiler. It certainly looks like a bug. It's probably my fault; sorry about that. (I wrote all of the lambda-to-delegate conversion checking code.)
I'm in a coffee shop right now and I don't have access to the compiler sources from here. I'll try to find some time to reproduce this in the debug build tomorrow and see if I can work out what's going on. If I don't find the time, I'll be out of the office until after Christmas.
Your observation that introducing a variable of type Action causes the problem to disappear is extremely interesting. The compiler maintains many caches for both performance reasons and for analysis required by the language specification. Lambdas and local variables in particular have lots of complex caching logic. I'd be willing to bet as much as a dollar that some cache is being initialized or filled in wrong here, and that the use of the local variable fills in the right value in the cache.
Thanks for the report!
UPDATE: I am now on the bus and it just came to me; I think I know exactly what is wrong. The compiler is lazy, particularly when dealing with types that came from metadata. The reason is that there could be hundreds of thousands of types in the referenced assemblies and there is no need to load information about all of them. You're going to use far less than 1% of them probably, so let's not waste a lot of time and memory loading stuff you're never going to use. In fact the laziness goes deeper than that; a type passes through several "stages" before it can be used. First its name is known, then its base type, then whether its base type hierarchy is well-founded (acyclic, etc), then its type parameter constraints, then its members, then whether the members are well-founded (that overrides override something of the same signature, and so on.) I'll bet that the conversion logic is failing to call the method that says "make sure the types of all the delegate parameters have their members known", before it checks the signature of the delegate invoke for compatibility. But the code that makes a local variable probably does do that. I think that during the conversion checking, the Action type might not even have an invoke method as far as the compiler is concerned.
We'll find out shortly.
UPDATE: My psychic powers are strong this morning. When overload resolution attempts to determine if there is an "Invoke" method of the delegate type that takes zero arguments, it finds zero Invoke methods to choose from. We should be ensuring that the delegate type metadata is fully loaded before we do overload resolution. How strange that this has gone unnoticed this long; it repros in C# 3.0. Of course it does not repro in C# 2.0 simply because there were no lambdas; anonymous methods in C# 2.0 require you to state the type explicitly, which creates a local, which we know loads the metadata. But I would imagine that the root cause of the bug - that overload resolution does not force loading metadata for the invoke - goes back to C# 1.0.
Anyway, fascinating bug, thanks for the report. Obviously you've got a workaround. I'll have QA track it from here and we'll try to get it fixed for C# 5. (We have missed the window for Service Pack 1, which is already in beta.)
This is probably a problem with type inference; apparently the compiler infers a as an Action<T> instead of Action (it might think a is ActionSurrogate, which would fit the Action<Action> signature). Try specifying the type of a explicitly:
ActionSurrogate b = (Action a) =>
{
    a();
};
If this is not the case, check around your project for any self-defined Action delegates taking one parameter.
public static void ThisWontCompile()
{
    ActionSurrogate b = (Action a) =>
    {
        a();
    };
}
This will compile. Due to some glitch, the compiler is unable to find the parameterless Action delegate; that's why you are getting the error.
public delegate void Action();
public delegate void Action<in T>(T obj);
public delegate void Action<in T1, in T2>(T1 arg1, T2 arg2);
public delegate void Action<in T1, in T2, in T3>(T1 arg1, T2 arg2, T3 arg3);
public delegate void Action<in T1, in T2, in T3, in T4>(T1 arg1, T2 arg2, T3 arg3, T4 arg4);

access private method in a different assembly c#

This may be a daft question, as I can see the security reason for it to happen the way it does...
I have a licensing C# project. It has a class with a method which generates my license keys. I have made this method private, as I do not want anybody else to be able to call it, for obvious reasons.
The next thing I want is for my user interface, which is in another C# project referencing the licensing dll, to be the only other 'thing' which can access this method from outside. Is this possible, or do I need to move it into the same project so that it all compiles to the same dll and I can access its members?
LicensingProject
-LicensingClass
--Private MethodX (GeneratesLicenseKeys)
LicensingProject.UI
-LicensingUiClass
--I want to be able to be the only class to be able to access MethodX
There is a reason why the license key generator is not just in the UI: the licensing works by generating a hash of itself and comparing it to the one generated by the license generator.
I would prefer not to compile everything into one dll, as my end users do not need the UI code.
I know that, by common sense, a private method is just that. I am stumped.
You could make it an internal method, and use InternalsVisibleToAttribute to give LicensingProject.UI extra access to LicensingProject.
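A sketch of how that looks, using the project names from the question (assembly names assumed to match project names; member names are illustrative):
// In LicensingProject, e.g. in AssemblyInfo.cs. If LicensingProject.UI is
// strong-named, the full public key must be appended to the string.
[assembly: System.Runtime.CompilerServices.InternalsVisibleTo("LicensingProject.UI")]

public class LicensingClass
{
    // internal: callable inside LicensingProject and, thanks to the attribute
    // above, from LicensingProject.UI - but not from any other assembly.
    internal string GenerateLicenseKey(string customerId)
    {
        // ... key generation ...
        return "hypothetical-key";
    }
}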
Mehrdad's point about enforcement is right and wrong at the same time. If you don't have ReflectionPermission, the CLR will stop you from calling things you shouldn't - but if you're using reflection from a fully trusted assembly, you can call anything. You should assume that a potential hacker is able to run a fully trusted assembly on his own machine :)
None of this will stop someone from using Reflector to decompile your code. In other words, making it private isn't really adding a significant amount of security to your licensing scheme. If anyone actually puts any effort into breaking it, they'll probably be able to.
This is really a comment, in response to Mehrdad's point about the runtime not performing access checks; here, you can see the JIT (it transpires) performing the access check - not reflection, and not the C# compiler.
To fix the code, make Foo.Bar public. Interestingly, it also verifies that Foo is accessible - so make Foo internal to see more fireworks:
using System;
using System.Reflection;
using System.Reflection.Emit;

static class Program {
    static void Main() {
        MethodInfo bar = typeof(Foo).GetMethod("Bar",
            BindingFlags.Instance | BindingFlags.Public | BindingFlags.NonPublic);
        var method = new DynamicMethod("FooBar", null, new[] { typeof(Foo) });
        var il = method.GetILGenerator();
        il.Emit(OpCodes.Ldarg_0);
        il.EmitCall(OpCodes.Callvirt, bar, null);
        il.Emit(OpCodes.Ret);
        Action<Foo> action = (Action<Foo>)method.CreateDelegate(typeof(Action<Foo>));
        Foo foo = new Foo();
        Console.WriteLine("Created method etc");
        action(foo); // MethodAccessException
    }
}
public class Foo {
    private void Bar() {
        Console.WriteLine("hi");
    }
}
public, private, and similar modifiers are just enforced by the compiler. You can use reflection to access such members pretty easily (assuming the code has the required permissions, which is a reasonable assumption as the attacker has complete control of his own machine). Don't rely on access modifiers assuming nobody can call the method.
Foo.Bar may stay private...
To fix the code above, add one parameter at the end of the DynamicMethod constructor:
var method = new DynamicMethod("FooBar", null, new[] { typeof(Foo) }, true);
Passing true skips JIT visibility checks on the types and members accessed by the MSIL of the dynamic method.
