How to inject code into C# method calls from a separate app

I was curious whether anyone knew of a way of monitoring a .NET application's runtime behaviour (which method is being called and so on)
and injecting extra code to be run on certain methods, from a separate running process.
Say I have two applications:
app1.exe, which for simplicity's sake could be
class Program
{
    static void Main(string[] args)
    {
        while (true)
        {
            Somefunc();
        }
    }

    static void Somefunc()
    {
        Console.WriteLine("Hello World");
    }
}
and a second application that I wish to be able to detect when Somefunc() in application 1 is running, and inject its own code:
class Program
{
    static void Main(string[] args)
    {
        while (true)
        {
            if (App1.SomeFuncIsCalled)
                InjectCode();
        }
    }

    static void InjectCode()
    {
        App1.Console.WriteLine("Hello World Injected");
    }
}
So the result would be that application one shows
Hello World
Hello World Injected
I understand it's not going to be this simple (by a long shot),
but I have no idea if it's even possible, and if it is, where to even start.
Any suggestions?
I've seen something similar done in Java, but never in C#.
EDIT:
To clarify, the usage of this would be to add a plugin system to a .NET-based game that I do not have access to the source code of.

It might be worth looking into Mono.Cecil for code injection. It won't work live against a running process the way you described in your question, but I believe it may do what you want offline (find a given method, add code to it, and write out the modified assembly).
http://www.codeproject.com/KB/dotnet/MonoCecilChapter1.aspx
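For example, an offline rewrite with Mono.Cecil could look roughly like this. This is only a sketch against the sample code in the question: the file names are placeholders, and ImportReference assumes a reasonably recent Cecil version (older versions call it Import).
using System;
using System.Linq;
using Mono.Cecil;
using Mono.Cecil.Cil;

class Patcher
{
    static void Main()
    {
        // Load the target assembly from disk (offline, not a running process).
        var assembly = AssemblyDefinition.ReadAssembly("app1.exe");
        var module = assembly.MainModule;

        // Find Program.Somefunc (names taken from the question's sample).
        var type = module.GetType("Program");
        var method = type.Methods.First(m => m.Name == "Somefunc");

        // Inject Console.WriteLine("Hello World Injected") at the start of the method.
        var writeLine = module.ImportReference(
            typeof(Console).GetMethod("WriteLine", new[] { typeof(string) }));
        var il = method.Body.GetILProcessor();
        var first = method.Body.Instructions[0];
        il.InsertBefore(first, il.Create(OpCodes.Ldstr, "Hello World Injected"));
        il.InsertBefore(first, il.Create(OpCodes.Call, writeLine));

        // Write out a patched copy instead of overwriting the original.
        assembly.Write("app1.patched.exe");
    }
}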

Well, to do that without App1's permission is difficult. But assuming you actually want to create an extension point in App1, it's relatively straightforward to do what you're suggesting with some kind of extensibility framework. I suggest:
SharpDevelop's add-in architecture
Mono.Addins
Microsoft's Managed Extensibility Framework (MEF)
Since I am more familiar with MEF, here's how it would look:
class Program
{
    [ImportMany("AddinContractName", typeof(IRunMe))]
    public IEnumerable<IRunMe> ThingsToRun { get; set; }

    void SomeFunc()
    {
        foreach (IRunMe thing in ThingsToRun)
        {
            thing.Run();
        }
        /* do whatever else ... */
    }
}
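To make that work, the host also has to compose the parts from somewhere. A minimal sketch of the wiring, assuming add-in DLLs are dropped into an "Addins" folder next to the host (the folder name and HelloAddin are just illustrative):
using System.Collections.Generic;
using System.ComponentModel.Composition;
using System.ComponentModel.Composition.Hosting;

public interface IRunMe
{
    void Run();
}

// In an add-in assembly dropped into the Addins folder:
[Export("AddinContractName", typeof(IRunMe))]
public class HelloAddin : IRunMe
{
    public void Run() { System.Console.WriteLine("Hello from an add-in"); }
}

// In the host, typically at startup:
public class Host
{
    [ImportMany("AddinContractName", typeof(IRunMe))]
    public IEnumerable<IRunMe> ThingsToRun { get; set; }

    public void Compose()
    {
        var catalog = new DirectoryCatalog("Addins");
        var container = new CompositionContainer(catalog);
        container.ComposeParts(this);   // fills ThingsToRun with every matching export
    }
}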

You need to use the Profiling API to make the second program profile the first one. You can then be notified of any method calls.

Another idea is to write an app that will change the exe you want to monitor. It would do things similar to what profiling tools do when they "instrument" your app. Basically, you use reflection to browse the app, then you re-create the exe (with a different file name) using the Emit features of .NET and insert your code at the same time.
Of course, if the app attempted to do things securely, this new version may not be allowed to communicate with its other assemblies.

With the clarifications you made in a comment, it seems to me you would be better off disassembling and reassembling using ildasm and ilasm.

Depending on how the author packaged/structured his application, you might be able to add a reference to his EXE file from your own project and just execute his code from within your own application. If that works, it's just a matter of using Reflection at that point to get at useful interfaces/events/etc that you can plug into (if they are marked private...otherwise just use them directly :). You can add a reference to any EXE built with the default settings for debug or release builds, and use it just like a DLL.
If that doesn't work there are advanced tricks you can use as well to swap out method calls and such, but that is beyond the scope of this reply...I suggest trying the reference thing first.
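A rough sketch of the reference-plus-reflection approach (the type and method names here are made up, since they depend entirely on how the game's EXE is actually structured):
using System;
using System.Reflection;

class Loader
{
    static void Main()
    {
        // Load the game's EXE just like a class library.
        Assembly game = Assembly.LoadFrom("Game.exe");

        // "GameNamespace.World" and "SpawnEntity" are hypothetical names;
        // use Assembly.GetTypes() or a decompiler to discover the real ones.
        Type world = game.GetType("GameNamespace.World");
        MethodInfo spawn = world.GetMethod(
            "SpawnEntity",
            BindingFlags.Instance | BindingFlags.Public | BindingFlags.NonPublic);

        object instance = Activator.CreateInstance(world);
        spawn.Invoke(instance, new object[] { "Goblin" });
    }
}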

Injecting code into a running app is possible. This is a working example, but it involves injection into an unmanaged DLL.

Inject code in the assembly with Mono Cecil

You can try out CInject to inject your C# or VB.NET code into assemblies.

Related

C# Redirecting Method calls from other Projects to own Methods - How?

I am using Unity and want users to be able to create their own programs that will be executed in the running project.
I want to create some kind of API that does the work within the project (like spawning an entity):
class Foo
{
    void SpawnEntity()
    {
        // Code to spawn entity
    }
}
Users should be able to submit their own code as .dll files with one method that will be executed
class Userclass
{
    void MyCode()
    {
        // Their code would have to use some kind of API to do something in the game
        Foo.SpawnEntity();
    }
}
I thought about having something like
namespace APIS
{
    class Foo
    {
        void SpawnEntity()
        {
            // This method does not contain actual code
        }
    }
}
in the user submitted .dll file, which would result in
class Userclass
{
    void MyCode()
    {
        // Their code would have to use some kind of API to do something in the game
        APIS.Foo.SpawnEntity();
    }
}
My question is whether I could overwrite the method APIS.Foo.SpawnEntity() with one that is already compiled into the game's code, like
namespace UnityEngine
{
    class Foo
    {
        void SpawnEntity()
        {
            // The actual code that does something in the game
        }
    }
}
So far I only found examples of calling methods in external .dll files and getting their return value, but no examples of actually overwriting methods in those .dll files to change things in the calling program (the Unity game in this case).
Is this even possible with C#?
Thank you in advance
Edit: probably impossible in C#; using a scripting language instead.
Standard approach: embed a scripting language.
There are multiple plugins available that already provide this for Unity, in various languages from Lua to JS, and even C# (although C# is tricky, because Apple has banned this kind of scripting on iOS); a minimal Lua-based sketch follows the list below.
An external scripting language has several major benefits:
Your audience may not know the language you're using (C#); you can give them multiple options
They will only have access to the classes and methods you explicitly pass to the Scripting Language subsystem; this greatly reduces the chances of them messing up, and makes it easier for them (they have fewer API docs to read, etc)
If the scripting system crashes, you can keep the game running, and even reset the scripting system automatically while the game runs.
You can give the user performance info on their scripts, by measuring the scripting system from frame to frame.
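As an illustration of that approach, here is a minimal sketch using MoonSharp, one of the Lua interpreters available for Unity; the SpawnEntity API and the sample user script are just placeholders:
using System;
using MoonSharp.Interpreter;

public class UserScriptRunner
{
    public void RunUserScript(string userCode)
    {
        var script = new Script();

        // Expose only the API you want user scripts to reach.
        script.Globals["SpawnEntity"] = (Action<string>)SpawnEntity;

        script.DoString(userCode);   // e.g. "SpawnEntity('Goblin')"
    }

    void SpawnEntity(string name)
    {
        // The actual game code (spawning an entity, etc.) goes here.
        Console.WriteLine("Spawning " + name);
    }
}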

Is it possible to execute an arbitrary class in C# with a Main method, similar to that in Java?

I've recently had to migrate to the C# world. Coming from the Java land, I could add a public static void main(String[] args) method to any class and select to run that class from Eclipse/Netbeans for any code/logic that I wanted to quickly test.
Is there an equivalent of the same capability in C#.Net/Visual Studio? I've tried doing that and the best I can do is to execute it from the command prompt via csc.exe. However, for some reason, it complains about not finding the relevant DLLs - it seems to expect to run that class in complete isolation without any dependency on "external code" (i.e., code residing in that VS project/solution where the class resides).
Reason for this capability: All project files are marked as class libraries and sometimes I just wish to check if a particular set of methods/data/logic will work as expected with the current code base. In Java, I'd quickly write it in the main method and execute that class to see how it goes prior to committing it to version control. However, there seems to be no easy way to trigger the execution of "my class" with all dependencies correctly handled by csc.exe
Current Solution: Add this testing code to the unit test project and select to execute that particular "test" so as to check if the idea seems to work fine (it may fire DB calls or webservice class etc., and not be purely a logical flow of computation). This seems to work fine and is my current way of doing things. I was wondering if the Main method was even possible/recommended.
Question: Is this even possible with C#/VS or not recommended?
Update: I can't add a console project just to achieve this since the addition of projects is tightly controlled by the source control team. Hence the question of the Main method 'hack' for quick and dirty checks/tests.
Your project type needs to be Console Application for it to "recognize" a Program.Main method, not Class Library. The intent is for a Class Library to be an encapsulated grouping of functionality that can only be accessed by a project that is set up to allow for user input. Those can be a Console Application, Web project (MVC/API), or Desktop (WPF).
If you just want to execute a test against the code within a Class Library project, you can also create a Unit Test project, add a reference and execute very explicit tests against the functionality you're looking to achieve.
You can find out the differences between the different project types by examining the .csproj files in your favorite text editor.
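For instance, a throwaway "scratch" test might look like the sketch below, assuming MSTest; MyLibraryClass and DoSomething are hypothetical stand-ins for whatever you want to try out:
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class ScratchPad
{
    [TestMethod]
    public void TryOutSomeLogic()
    {
        // Exercise the class library code you want to check.
        var result = new MyLibraryClass().DoSomething();

        // Run just this test from Test Explorer to see how the code behaves.
        Assert.IsNotNull(result);
    }
}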
In Visual Studio go to New -> Project, then select Console Application (under Windows\Classic Desktop in VS2015). This gives you a basic console application with...
static void Main(string[] args)
{
}
set up and ready to go. However, for simply trying out code you may find this cumbersome (creating a new project and folder just to test code), and for testing code that doesn't rely on existing libraries you could use something like .NET Fiddle...
https://dotnetfiddle.net/
Where you can quickly create and test code there and run it via the browser.

Basic implementation of an AOP-like attribute using the standard .NET Framework [duplicate]

This question already has answers here:
Closed 10 years ago.
Possible Duplicate:
C# wrap method via attributes
I'd like to achieve such functionality:
[Atomic]
public void Foo()
{
    /* foo logic */
}
Where [Atomic] attribute is an attribute, which wraps function logic within a transaction scope:
using (var scope = new TransactionScope())
{
    /* foo logic */
    scope.Complete();
}
How to write such an attribute?
I've asked basically the same question before. I know this can be done using AOP, but I didn't mention that I'm looking for the simplest proof-of-concept implementation, or for helpful articles that can help me write this using the pure .NET Framework (I suppose using the RealProxy and MarshalByRefObject types, which I've read about while browsing related questions).
I need to solve exactly this shown example. It seems like a basic thing so I want to learn how to do it starting from scratch. It doesn't need to be safe and flexible for now.
It seems like a basic thing...
It's one of the (many) things whose concept is simple to understand, but which are not at all simple to implement.
As per Oded's answer, Attributes in .NET don't do anything. They only exist so that other code (or developers) can look at them later on. Think of it as a fancy comment.
With that in mind, you can write your attribute like this
public class AtomicAttribute : Attribute { }
Now the hard part, you have to write some code to scan for that attribute, and change the behaviour of the code.
Given that C# is a compiled language, and given the rules of the .NET CLR, there are theoretically three ways to do this:
Hook into the C# compiler, and make it output different code when it sees that attribute.
This seems like it would be nice, but it is simply not possible right now. Perhaps the Roslyn project might allow this in future, but for now, you can't do it.
Write something which will scan the .NET assembly after the C# compiler has converted it to MSIL, and change the MSIL.
This is basically what PostSharp does. Scanning and rewriting MSIL is hard. There are libraries such as Mono.Cecil which can help, but it's still a hugely difficult problem. It may also interfere with the debugger, etc.
Use the .NET Profiling API's to monitor the program while it is running, and every time you see a function call with that attribute, redirect it to some other wrapper function.
This is perhaps the simplest option (although it's still very difficult), but the drawback is that your program now must be run under the profiler. This may be fine on your development PC, but it will cause a huge problem if you try deploy it. Also, there is likely to be a large performance hit using this approach.
In my opinion, your best bet is to create a wrapper function which sets up the transaction, and then pass it a lambda which does the actual work. Like this:
using System;
using System.Transactions;

public static class Ext
{
    public static void Atomic(Action action)
    {
        using (var scope = new TransactionScope())
        {
            action();
            scope.Complete();   // TransactionScope has Complete(), not Commit()
        }
    }
}
.....
using static Ext; // as of VS2015 / C# 6
public void Foo()
{
    Atomic(() =>
    {
        // foo logic
    });
}
The fancy computer science term for this is higher-order programming.
Attributes are meta data - that's all they are.
There are many tools that can take advantage of such metadata, but such tooling needs to be aware of the attribute.
AOP tools like PostSharp read such metadata in order to know what and where to weave aspects into code.
In short - just writing an AtomicAttribute will give you nothing - you will need to pass the compiled assembly through a tool that knows about this attribute and do "something" to it in order to achieve AOP.
It is not a basic thing at all. No extra code is run just because a method has an attribute, so there is nowhere to put your TransactionScope code.
What you would need to do is at application start-up use reflection to iterate over every method on every class in your assembly and find the methods that are marked with AtomicAttribute, then write a custom proxy around that object. Then somehow get everything else to call your proxy instead of the real implementation, perhaps using a dependency injection framework.
Most AOP frameworks do this at build time. PostSharp for example runs after VisualStudio builds your assembly. It scans your assembly and rewrites the IL code to include the proxies and AOP interceptors. This way the assembly is all set to go when it is run, but the IL has changed from what you originally wrote.
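If you do want to experiment with the RealProxy/MarshalByRefObject route mentioned in the question, a rough proof-of-concept might look like the sketch below. Note this is .NET Framework only (RealProxy lives in System.Runtime.Remoting), the target class has to derive from MarshalByRefObject, and every caller has to go through the proxy:
using System;
using System.Reflection;
using System.Runtime.Remoting.Messaging;
using System.Runtime.Remoting.Proxies;
using System.Transactions;

[AttributeUsage(AttributeTargets.Method)]
public class AtomicAttribute : Attribute { }

public class Worker : MarshalByRefObject
{
    [Atomic]
    public void Foo()
    {
        /* foo logic */
    }
}

public class AtomicProxy<T> : RealProxy where T : MarshalByRefObject
{
    private readonly T _target;

    public AtomicProxy(T target) : base(typeof(T))
    {
        _target = target;
    }

    public override IMessage Invoke(IMessage msg)
    {
        var call = (IMethodCallMessage)msg;
        var method = (MethodInfo)call.MethodBase;

        if (method.IsDefined(typeof(AtomicAttribute), true))
        {
            // Wrap the real call in a transaction scope.
            using (var scope = new TransactionScope())
            {
                object result = method.Invoke(_target, call.Args);
                scope.Complete();
                return new ReturnMessage(result, null, 0, call.LogicalCallContext, call);
            }
        }

        object plain = method.Invoke(_target, call.Args);
        return new ReturnMessage(plain, null, 0, call.LogicalCallContext, call);
    }

    public T GetProxy()
    {
        return (T)GetTransparentProxy();
    }
}

// Usage:
// var worker = new AtomicProxy<Worker>(new Worker()).GetProxy();
// worker.Foo();   // now runs inside a TransactionScope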
Maybe resolve all objects using an IoC container?
You could configure interceptors for your types and, in them, check whether the called method is decorated with that attribute. You could cache that information so that you don't have to use reflection on every method call.
So when you do this:
var something = IoC.Resolve<ISomething>();
something is not the object you implemented but a proxy. In that proxy you can do whatever you want before and after the method call.
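The answer above doesn't name a specific container, but as an illustration, such an interceptor could look roughly like this with Castle DynamicProxy (which is what Castle Windsor uses for interception under the hood); ISomething and Something stand in for your own types:
using System.Transactions;
using Castle.DynamicProxy;

public class AtomicInterceptor : IInterceptor
{
    public void Intercept(IInvocation invocation)
    {
        // Check whether the target method is decorated with [Atomic];
        // in real code, cache this lookup instead of reflecting on every call.
        bool isAtomic = invocation.MethodInvocationTarget
                                  .IsDefined(typeof(AtomicAttribute), true);

        if (!isAtomic)
        {
            invocation.Proceed();
            return;
        }

        using (var scope = new TransactionScope())
        {
            invocation.Proceed();
            scope.Complete();
        }
    }
}

// Hand-rolled wiring (a container would normally do this for you):
// var generator = new ProxyGenerator();
// ISomething something = generator.CreateInterfaceProxyWithTarget<ISomething>(
//     new Something(), new AtomicInterceptor());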

Interface change between versions - how to manage?

Here's a rather unpleasant pickle that we got into on a client site. The client has about 100 workstations, on which we deployed version 1.0.0 of our product "MyApp".
Now, one of the things the product does is load an add-in (call it "MyPlugIn"). It first looks on a central server to see if there's a newer version; if there is, it copies that file locally, then loads the add-in using Assembly.Load and invokes a certain known interface. This has been working well for several months.
Then the client wanted to install v1.0.1 of our product on some machines (but not all). That came with a new and updated version of MyPlugIn.
But then came the problem. There's a shared DLL, which is referenced by both MyApp and MyPlugIn, called MyDLL, which has a method MyClass.MyMethod. Between v1.0.0 and v1.0.1, the signature of MyClass.MyMethod changed (a parameter was added). And now the new version of MyPlugIn causes the v1.0.0 client apps to crash:
Method not found: MyClass.MyMethod(System.String)
The client pointedly does not want to deploy v1.0.1 on all client stations, being that the fix that was included in v1.0.1 was necessary only for a few workstations, and there is no need to roll it out to all clients. Sadly, we are not (yet) using ClickOnce or other mass-deployment utilities, so rolling out v1.0.1 will be a painful and otherwise unnecessary exercise.
Is there some way of writing the code in MyPlugin so that it will work equally well, irrespective of whether it's dealing with MyDLL v1.0.0 or v1.0.1? Perhaps there's some way of probing for an expected interface using reflection to see if it exists, before actually calling it?
EDIT: I should also mention - we have some pretty tight QA procedures. Since v1.0.1 has been officially released by QA, we are not allowed to make any changes to MyApp or MyDLL. The only freedom of movement we have is to change MyPlugin, which is custom code written specifically for this customer.
The thing is that the changes you make have to be additions, not changes to the existing contract. So if you want your deployment to stay backward compatible (as far as I understand the current deployment strategy, this is your only option), you should never change the interface, but only add new methods to it, and avoid tightly linking your plugin to the shared DLL; load it dynamically instead. In this case
you add new functionality without disturbing the old one
you are able to choose which version of the DLL to load at runtime.
I have extracted this code from an application I wrote some time ago and removed some parts.
Many things are assumed here:
Location of MyDll.dll is the current directory
The full type name to reflect on is "MyDll.MyClass"
The class has a constructor without parameters.
You don't expect a return value
using System;
using System.IO;
using System.Reflection;

private void CallPluginMethod(string param)
{
    // Is MyDLL.dll in the current directory?
    // It's probably better to use Assembly.GetExecutingAssembly().Location, but...
    string libToCheck = Path.Combine(Environment.CurrentDirectory, "MyDLL.dll");
    Assembly a = Assembly.LoadFile(libToCheck);
    string typeAssembly = "MyDll.MyClass"; // Is this the correct full type name?
    Type c = a.GetType(typeAssembly);

    // Get all public, non-static methods declared on the type
    MethodInfo[] miList = c.GetMethods(BindingFlags.Public | BindingFlags.Instance | BindingFlags.DeclaredOnly);

    // Search for the one required (could be optimized with LINQ)
    foreach (MethodInfo mi in miList)
    {
        if (mi.Name == "MyMethod")
        {
            // Create a MyClass object, assuming it has a parameterless constructor
            ConstructorInfo clsConstructor = c.GetConstructor(Type.EmptyTypes);
            object myClass = clsConstructor.Invoke(new object[] { });

            // Check how many parameters are required
            if (mi.GetParameters().Length == 1)
                // call the new interface
                mi.Invoke(myClass, new object[] { param });
            else
                // call the old interface, or throw an exception
                mi.Invoke(myClass, null);
            break;
        }
    }
}
What we do here:
Dynamically load the library and extract the type of MyClass.
Using the type, ask the reflection subsystem for the list of MethodInfos present in that type.
Check every method name to find the required one.
When the method is found, build an instance of the type.
Get the number of parameters expected by the method.
Depending on the number of parameters, call the right version using Invoke.
My team has made the same mistake you have more than once. We have a similar plugin architecture and the best advice I can give you in the long run is to change this architecture as soon as possible. This is a maintainability nightmare. The backwards-compatibility matrix grows non-linearly with each release. Strict code reviews can provide some relief, but the problem is you always need to know when methods were added or changed to call them in the appropriate way. Unless both the developer and reviewer know exactly when a method was last changed, you run the risk of a runtime exception when the method is not found. You can NEVER safely call a new method in MyDLL from the plugin, because you may be running on an older client that does not have the newest MyDLL version with that method.
For the time being, you can do something like this in MyPlugin:
static class MyClassWrapper
{
internal static void MyMethodWrapper(string name)
{
try
{
MyMethodWrapperImpl(name);
}
catch (MissingMethodException)
{
// do whatever you need to to make it work without the method.
// this may go as far as re-implementing my method.
}
}
private static void MyMethodWrapperImpl(string name)
{
MyClass.MyMethod(name);
}
}
If MyMethod is not static you can make a similar non-static wrapper.
As for long term changes, one thing you can do on your end is to give your plugins interfaces to communicate through. You cannot change the interfaces after release, but you can define new interfaces that the later versions of the plugin will use. Also, you cannot call static methods in MyDLL from MyPlugIn. If you can change things at the server level (I realize this may be outside your control), another option is to provide some sort of versioning support so that a new plugin can declare it doesn't work with an old client. Then the old client will only download the old version from the server, while newer clients download the new version.
Actually, it sounds like a bad idea to change the contract between releases. Being in an object-oriented environment, you should rather create a new contract, possibly inheriting from the old one.
public interface MyServiceV1 { }
public interface MyServiceV2 { }
Internally, you make your engine use the new interface, and you provide an adapter to translate old objects to the new interface.
public class V1ToV2Adapter : MyServiceV2
{
    public V1ToV2Adapter(MyServiceV1 inner) { ... }
}
Upon loading an assembly, you scan it and:
when you find a class implementing the new interface, you use it directly
when you find a class implementing the old interface, you use the adapter over it
Using hacks (like testing the interface) will sooner or later bite you or anyone else using the contract - details of the hack have to be known to anyone relying on the interface which sounds terrible from the object-oriented perspective.
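The scanning step could look roughly like this (a sketch only; PluginLoader is a made-up helper, and it assumes both V1 and V2 plugin classes have parameterless constructors):
using System;
using System.Collections.Generic;
using System.Reflection;

public static class PluginLoader
{
    public static List<MyServiceV2> LoadServices(Assembly pluginAssembly)
    {
        var services = new List<MyServiceV2>();

        foreach (Type t in pluginAssembly.GetTypes())
        {
            if (t.IsAbstract || t.IsInterface)
                continue;

            if (typeof(MyServiceV2).IsAssignableFrom(t))
            {
                // New-style plugin: use it directly.
                services.Add((MyServiceV2)Activator.CreateInstance(t));
            }
            else if (typeof(MyServiceV1).IsAssignableFrom(t))
            {
                // Old-style plugin: wrap it in the adapter.
                services.Add(new V1ToV2Adapter((MyServiceV1)Activator.CreateInstance(t)));
            }
        }

        return services;
    }
}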
In MyDLL 1.0.1, deprecate the old MyClass.MyMethod(System.String) and overload it with the new version.
Could you overload MyMethod to accept MyMethod(string) (v1.0.0 compatible) and MyMethod(string, string) (the v1.0.1 version)?
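A sketch of what keeping both overloads in MyDLL would look like (signatures guessed from the error message in the question; note that this requires shipping a changed MyDLL, which the QA constraints mentioned in the edit may not allow):
using System;

public class MyClass
{
    // v1.0.0-compatible signature, kept so old callers still bind at runtime.
    [Obsolete("Use the overload that takes the extra parameter.")]
    public static void MyMethod(string text)
    {
        MyMethod(text, null);
    }

    // v1.0.1 signature with the added parameter.
    public static void MyMethod(string text, string extra)
    {
        // actual implementation
    }
}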
Given the circumstances, I think the only thing you can really do is have two versions of MyDLL running side by side,
which means something like what Tigran suggested: loading MyDLL dynamically. As a side example, not directly related but possibly helpful, take a look at the RedemptionLoader http://www.dimastr.com/redemption/security.htm#redemptionloader (that's for Outlook plugins, which often crash each other by referencing different versions of a helper DLL; it's a bit more complex because COM is involved, but the idea is the same).
Load the DLL dynamically by its location and name; you can hard-code that location, read it from config, or fall back to it when you detect that MyDLL is not the right version.
Then wrap the objects and calls from the dynamically loaded DLL to match what you normally have (you'd have to wrap something or fork the implementation) so that everything works in both cases.
Also, to add to the 'no-nos' and your QA sorrows :),
they should not have broken backward compatibility from 1.0.0 to 1.0.1 - those are (usually) minor changes and fixes, not breaking changes; a major version bump is needed for that.

if __name__ == "__main__": equivalent in C#

With Python, I can use if __name__ == "__main__": to make a module usable both as a library and as a program.
Can I mimic this feature in C#?
I see a class in C# can have a 'static void Main()', but I'm not sure if every class can have a Main() without a problem.
ADDED
/m:CLASS_NAME (the short form of csc's /main option) is a way to specify the class whose Main() should be used as the entry point.
You can put a Main method in as many classes as you like, although only one can be an entry point for an application. (For talks, I often have a main method in every class, and use a helper library to present all of those pseudo-entry-points when I run the project.)
Likewise you can definitely add a reference to a .exe assembly and treat it like a library. For example, you could make a unit testing assembly work like a class library in most cases, but also write a main method so that you could just run it to execute the tests without a GUI or whatever.
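For example (a minimal sketch; the class names are made up, and the entry point is chosen with the compiler's /main switch or the project's StartupObject setting):
using System;

class LibraryDemo
{
    static void Main()
    {
        Console.WriteLine("Running LibraryDemo.Main");
    }
}

class OtherDemo
{
    static void Main()
    {
        Console.WriteLine("Running OtherDemo.Main");
    }
}

// csc /main:LibraryDemo Demos.cs   -> the EXE runs LibraryDemo.Main
// csc /main:OtherDemo Demos.cs     -> the EXE runs OtherDemo.Main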
You can compile a C# project as a program (executable) with a Main() method, and you'd still be able to use it as a library. No special syntax required.
You could add a Main() method to every class, but I doubt it's useful.
.NET applications usually have different structures than Python ones; trying to fit the same programming model is unlikely to get you good results.
C# project files specify a startup object when multiple entry points are available.
See this article for more info.
