Using C# to call a function from another process

I'm creating a memory modifying program for my own learning purposes. A friend of mine pointed out a function in another program that I want to trigger.
The function is at 0x004B459C in the other program. I know how to read and write memory, but how can I trigger this function from my program? I do not have the source to this other program.
My question is: do I need to inject code to call the function at that address, or can I just write something to memory to trigger it?

Think a bit about what you really want. You want the other process to execute this function, but processes don't execute code; threads do. If you want the other process to call this function as part of its normal operations, you will have to figure out the inputs etc. that will make one of the other process's threads call it. Generally speaking, any other way you will be running the risk of corrupting the other process. It is possible to inject a thread into another process and have it call the function you're interested in (see CreateRemoteThread). If this function is intended to be called on the message-pump thread, you could inject a message hook into the other process, send it a special message, and call the function from your hook. There are a few more ways (APC, for example), but they are more complicated for little gain.
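For illustration, a rough sketch of the CreateRemoteThread route from C# via P/Invoke is below. Treat it as an assumption-laden sketch, not a safe recipe: it only behaves if the target function happens to match the thread-start signature (one pointer-sized argument, stdcall), and a wrong address or signature will corrupt the target process:

using System;
using System.Runtime.InteropServices;

class RemoteCall
{
    [DllImport("kernel32.dll", SetLastError = true)]
    static extern IntPtr OpenProcess(uint desiredAccess, bool inheritHandle, int processId);

    [DllImport("kernel32.dll", SetLastError = true)]
    static extern IntPtr CreateRemoteThread(IntPtr hProcess, IntPtr threadAttributes,
        uint stackSize, IntPtr startAddress, IntPtr parameter, uint creationFlags, out uint threadId);

    [DllImport("kernel32.dll", SetLastError = true)]
    static extern uint WaitForSingleObject(IntPtr handle, uint milliseconds);

    [DllImport("kernel32.dll", SetLastError = true)]
    static extern bool CloseHandle(IntPtr handle);

    const uint PROCESS_ALL_ACCESS = 0x001F0FFF;

    // 0x004B459C is the address from the question; pid is the target process id.
    static void CallRemoteFunction(int pid)
    {
        IntPtr process = OpenProcess(PROCESS_ALL_ACCESS, false, pid);
        if (process == IntPtr.Zero) throw new InvalidOperationException("OpenProcess failed");
        try
        {
            uint threadId;
            IntPtr thread = CreateRemoteThread(process, IntPtr.Zero, 0,
                new IntPtr(0x004B459C), IntPtr.Zero, 0, out threadId);
            if (thread == IntPtr.Zero) throw new InvalidOperationException("CreateRemoteThread failed");
            WaitForSingleObject(thread, 5000); // give the remote call five seconds to finish
            CloseHandle(thread);
        }
        finally { CloseHandle(process); }
    }
}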

You are missing some basic architecture fundamentals :-) You cannot simply call a function in another process just because you know its address! Think about it: that would mean any program could reach into the memory of any other program and execute code there, which would be a mess and a completely insecure environment. First, some basics:
1) Windows guarantees that you only see the memory of your own process. One of the most important principles of an OS (even Windows) is to isolate processes, including their memory.
2) Have you thought about permissions? Any code that runs must run under a user account, and another process may be running under a different account.
The answer is simple: if your program is .NET/C#, check what the .NET Framework provides for communication between processes. Every platform, whether Java, native Windows, or .NET, provides an official way for processes to communicate with each other. It is called interprocess communication (IPC), and that is the term to search for in the .NET Framework documentation.

Related

How to approach removing needless instantiation when calling native method from WebAPI

I have a dotnet core WebAPI web server that needs to execute a native method written in Win32 C++. The problem is that each time this method is called, it needs to instantiate a bunch of things before it can do its actual work, which adds delay to the request. (It's currently using DllImport to access the C++ method in the compiled DLL.)
What I would like to do is have some sort of long-running process start when the server starts, handle the initialization once, and then have my WebAPI service call a method inside that process which immediately executes the code I actually need to run, without initializing its dependencies each time. Since this is a web server, the process will need to handle multiple requests at once.
What is the recommended approach for this? I have full access to the C++ code and the WebAPI server code so I'm free to do whatever needs to be done to accomplish this.
You may set up some IPC infrastructure between the two.
One way to do it would be to make your DLL COM-compatible, i.e. have the DLL be a COM server for some COM class. The server process would then CreateInstance the class, which will automatically launch your native process. A call would then be just a normal function call; COM will handle the RPC.
Another, simpler way is a named memory-mapped file. Both processes open a handle to it, and in it you can store a queue or some other data structure. The server process pushes while the native process pops. You can use Windows events to synchronize this. You can write it yourself or use something like boost::interprocess for the C++ part; there are likely other IPC libraries you could use as well. A rough sketch of the C# side is shown below.
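Here is that memory-mapped-file idea from the C# side; the mapping and event names are made up for the example, and the C++ process would open the same named objects via the Win32 or boost::interprocess APIs:

using System;
using System.IO.MemoryMappedFiles;
using System.Threading;

class MmfProducer
{
    static void Main()
    {
        // shared 4 KB region plus a named auto-reset event for signalling
        using (var mmf = MemoryMappedFile.CreateOrOpen("MyWorkQueue", 4096))
        using (var view = mmf.CreateViewAccessor())
        using (var signal = new EventWaitHandle(false, EventResetMode.AutoReset, "MyWorkSignal"))
        {
            view.Write(0, 42);   // push a request id at offset 0
            signal.Set();        // wake the native process, which pops and does the work
        }
    }
}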
You can also use a pipe; C# has some easy ways to handle Windows pipes. Pipes do not need extra synchronization, but to handle a number of concurrent requests efficiently you may need several threads on the native side reading from the pipe. A minimal sketch of the C# client side follows.
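This sketch assumes the native process creates the server end of a pipe named "MyNativeWorkerPipe" with CreateNamedPipe/ConnectNamedPipe; the name and the request/response format are invented for the example:

using System;
using System.IO.Pipes;
using System.Text;
using System.Threading.Tasks;

class PipeCaller
{
    // one request/response round trip per call; a WebAPI action can await this
    static async Task<string> CallNativeAsync(string request)
    {
        using (var pipe = new NamedPipeClientStream(".", "MyNativeWorkerPipe",
            PipeDirection.InOut, PipeOptions.Asynchronous))
        {
            await pipe.ConnectAsync(5000);
            byte[] payload = Encoding.UTF8.GetBytes(request);
            await pipe.WriteAsync(payload, 0, payload.Length);

            byte[] buffer = new byte[4096];
            int read = await pipe.ReadAsync(buffer, 0, buffer.Length);
            return Encoding.UTF8.GetString(buffer, 0, read);
        }
    }
}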
Personally, I'd go with COM if that is possible, as it hides the low-level IPC details that can be a pitfall. It takes a bit longer to set up, though.

C#: What exactly is an application domain?

I understand that an application domain forms "an isolation boundary for security, versioning, reliability, and unloading of managed code", but so does a process.
Can someone please help me understand the practical benefits of an application domain?
I assumed an app domain provides a container to load one version of an assembly, but recently I discovered that multiple versions of a strong-named assembly can be loaded into a single app domain.
My concept of application domains is still not clear, and I am struggling to understand why the concept was introduced when processes already exist.
Thank you.
I can't tell if you are asking about isolation in general or specifically about .NET's AppDomain.
I am going to assume .NET's AppDomain and why it can be really useful when you need that isolation inside of a single process.
For instance:
Say you are dealing with a library that has certain worker classes; you have no choice but to use those workers, and you can't modify the code. Your job is to build a Windows service that manages those workers, makes sure they all stay up and running, and lets them work in parallel.
Easy enough, right? Well, so you hoped. It turns out your worker library is prone to throwing exceptions, uses a static configuration, and is generally just a real PITA.
You could launch each worker in its own process, but to monitor them you'll need to implement named pipes or try to thoughtfully parse the STDIN and STDOUT of each process.
What else can you do? This is exactly what AppDomain solves. I can spawn an AppDomain for each worker and give each its own configuration. They can't screw each other up by changing static properties, because they are isolated, and on top of that, if the library bombs out and I fail to catch the exception, it doesn't bother the workers in their own domains. And during all of this, I can still communicate with those workers easily.
Sadly, I have had to do this before.
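For concreteness, a minimal sketch of that pattern with the classic .NET Framework AppDomain API (Worker is a hypothetical stand-in for the library's worker class; the key detail is that it derives from MarshalByRefObject, so calls cross the domain boundary through a proxy):

using System;

public class Worker : MarshalByRefObject
{
    public void Run() { /* the library's problematic code runs in here */ }
}

class Host
{
    static void Main()
    {
        var setup = new AppDomainSetup
        {
            ApplicationBase = AppDomain.CurrentDomain.BaseDirectory,
            ConfigurationFile = "Worker1.config" // each worker gets its own config
        };
        AppDomain domain = AppDomain.CreateDomain("Worker1", null, setup);
        var worker = (Worker)domain.CreateInstanceAndUnwrap(
            typeof(Worker).Assembly.FullName, typeof(Worker).FullName);
        try
        {
            worker.Run(); // a crash in here is isolated to Worker1's domain
        }
        catch (Exception)
        {
            AppDomain.Unload(domain); // tear down just this worker, then start a fresh one
        }
    }
}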
EDIT: Started to write this as a comment response, but got too large
Individual processes work great in many scenarios; however, there are times when they become a pain. I am not saying one should use an AppDomain instead of another process in general. I think it's uncommon to need either a separate process or an AppDomain, but once you need it, you'll definitely know.
The main problem I see with processes in the scenario above is that processes have their own downfalls that are easier to mitigate with an AppDomain.
A process can go rogue, become unresponsive, and crash or be killed at any point.
If you're managing processes, you need to keep track of each process ID and monitor its status. IPC is great, but it takes time to get proper communication going back and forth as needed.
As an example, let's say your process just dies. What do you do? Depending on the mechanism you chose for monitoring, maybe the communication thread died, or perhaps the work finished and you still show it as "processing".
Now what happens when you have 20 processes and your management app dies? You don't have any real information; all you have is 20 instances of "myprocess.exe", and you may now have to start parsing the command-line arguments they were started with to see which workers you actually have. Obviously, with an AppDomain all 20 would have died too, but did you really gain anything with the processes? You still have to code the ability to recover, except now you also have to code the recovery of the processes themselves instead of just firing the workers back up.
As with anything in programming, there's 1,000 different ways to achieve the same goal. It's up to you to decide which solution you feel is most appropriate.
Some practical benefits of using app domains:
Multiple app domains can run in a single process, and you can stop an individual app domain without stopping the entire process. This alone drastically increases server scalability.
App domain life cycles are managed programmatically by runtime hosts (you can override this as well). For processes and threads, you have to manage the life cycle explicitly; initialization, execution, termination, and inter-process or multithreaded communication are complex, which is why it's easier to defer that management to the CLR.
Source: https://learn.microsoft.com/en-us/dotnet/framework/app-domains/application-domains

How to make a system-call counter using Deviare

I would like to hook all function calls of all running processes. I can hook a particular function ("ws2_32.dll!recv") in all processes using Deviare by:
CreateSpyMgr(out mgr);                                 // mgr: the Deviare spy manager
hook = mgr.CreateHook("ws2_32.dll!recv");              // hook a single exported API
hook.Attach(mgr.get_Processes(0));                     // attach to every running process
mgr.set_ReportProcessCreation(DeviareCommonLib.ReportMethod._create_process_hook_and_polling, 0);
hook.set_HookNewProcesses(0, 1);                       // also hook processes started later
hook.OnFunctionCalled += new DHookEvents_OnFunctionCalledEventHandler(hook_OnFunctionCalled);
hook.Hook();
How can I hook all function calls instead of just one? Is that possible?
Or should I create a hooks collection (of all functions, which is much harder) using INktSpyMgr::CreateHooksCollection, add hooks to it, then call the Hook method and pass the INktHooksEnum object as the parameter? Is that the only way to do this?
My aim is to make a tool that counts all system calls for each running process. Feel free to give any suggestions.
First a word of advice: be very very careful about which APIs you hook. If anything you do within your hook method results in a call to one of the APIs you are hooking then you are creating an infinite recursion that could potentially wreck your computer. Bear that in mind. You'll probably want to filter out the API calls for your own process as well, otherwise you'll end up logging entries about the disk access caused by logging entries, and before you know it your memory is full and the hard drive is fully occupied with logging about logging.
There appears to be nothing in the Deviare API that allows you to create hooks on multiple methods at once - no wildcards or 'hook everything' calls - so you'll have to enumerate the APIs (see INktModule.ExportedFunctions for some ideas) and hook them individually. I'd suggest that you use a hook collection (see INktSpyMgr.CreateHooksCollection and INktHooksEnum) so that you can set up all your hooks and then attach and detach them in one operation.
As for the logging aspect, give some thought to using a queue of some sort - ConcurrentQueue<T> by preference - to pass the actual logging operations off to another thread. That way you spend a minimum of time in the actual hook function as well as reducing the chances of your hooks causing recursion. You'll have to experiment with filtering in the logging thread vs the hook functions to find out which has the smaller performance impact on the system.
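A rough sketch of that hand-off pattern, with the Deviare event reduced to a (process id, function name) pair and all names hypothetical:

using System;
using System.Collections.Concurrent;
using System.Collections.Generic;
using System.Threading;

class CallCounter
{
    static readonly ConcurrentQueue<KeyValuePair<int, string>> queue =
        new ConcurrentQueue<KeyValuePair<int, string>>();
    static readonly Dictionary<KeyValuePair<int, string>, long> counts =
        new Dictionary<KeyValuePair<int, string>, long>();

    // called from the hook callback: enqueue and return as fast as possible
    public static void Record(int pid, string function)
    {
        queue.Enqueue(new KeyValuePair<int, string>(pid, function));
    }

    // run on a dedicated background thread: drain the queue and tally counts
    public static void ConsumeLoop()
    {
        while (true)
        {
            KeyValuePair<int, string> item;
            while (queue.TryDequeue(out item))
            {
                long current;
                counts.TryGetValue(item, out current);
                counts[item] = current + 1;
            }
            Thread.Sleep(10); // crude back-off; a BlockingCollection would avoid polling
        }
    }
}

Note that the counts dictionary is touched only by the consumer thread, so it needs no locking; only the queue is shared between the hook callbacks and the consumer.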
Always make sure you know how much data your program is dealing with and have a plan in place for dealing with the volume of data. You're going to have to do some serious profiling to find the pain points, then put in plenty of work on reducing the overheads so that your program doesn't mess up the system too badly.
Personally I'd start with a small subset of the APIs you ultimately want to monitor, write code that works as well as you can make it, then move up to the full set of APIs. Less chance that you'll kill your computer that way.

Efficiently streaming data across process boundaries in .NET

I've been working on an internal developer tool on and off for a few weeks now, but I'm running into an ugly stumbling block I haven't managed to find a good solution for. I'm hoping someone can offer some ideas or guidance on the best ways to use the existing frameworks in .NET.
Background: the purpose of this tool is to load multiple different types of log files (Windows Event Log, IIS, SQL trace, etc.) to the same database table so they can be sorted and examined together. My personal goal is to make the entire thing streamlined so that we only make a single pass and do not cache the entire log either in memory or to disk. This is important when log files reach hundreds of MB or into the GB range. Fast performance is good, but slow and unobtrusive (allowing you to work on something else in the meantime) is better than running faster but monopolizing the system in the process, so I've focused on minimizing RAM and disk usage.
I've iterated through a few different designs so far trying to boil it down to something simple. I want the core of the log parser--the part that has to interact with any outside library or file to actually read the data--to be as simple as possible and conform to a standard interface, so that adding support for a new format is as easy as possible. Currently, the parse method returns an IEnumerable<Item> where Item is a custom struct, and I use yield return to minimize the amount of buffering.
However, we quickly run into some ugly constraints: the libraries provided (generally by Microsoft) to process these file formats. The biggest and ugliest problem: one of these libraries only works in 64-bit. Another one (Microsoft.SqlServer.Management.Trace TraceFile for SSMS logs) only works in 32-bit. As we all know, you can't mix and match 32- and 64-bit code. Since the entire point of this exercise is to have one utility that can handle any format, we need to have a separate child process (which in this case is handling the 32-bit-only portion).
The end result is that I need the 64-bit main process to start up a 32-bit child, provide it with the information needed to parse the log file, and stream the data back in some way that doesn't require buffering the entire contents to memory or disk. At first I tried using stdout, but that fell apart with any significant amount of data. I've tried using WCF, but it's really not designed to handle the "service" being a child of the "client", and it's difficult to get them synchronized backwards from how they want to work, plus I don't know if I can actually make them stream data correctly. I don't want to use a mechanism that opens up unsecured network ports or that could accidentally crosstalk if someone runs more than one instance (I want that scenario to work normally--each 64-bit main process would spawn and run its own child). Ideally, I want the core of the parser running in the 32-bit child to look the same as the core of a parser running in the 64-bit parent, but I don't know if it's even possible to continue using yield return, even with some wrapper in place to help manage the IPC. Is there any existing framework in .NET that makes this relatively easy?
WCF does have a P2P mode; however, if all your processes are on the local machine you are better off with IPC such as named pipes, since pipes run in kernel mode and do not have the messaging overhead of WCF.
Failing that, you could try COM, which should have no problem talking between 32- and 64-bit processes.
In case anyone stumbles across this, I'll post the solution we eventually settled on. The key was to redefine the inter-process WCF service interface to be different from the intra-process IEnumerable interface. Instead of attempting to yield return across process boundaries, we stuck a proxy layer in between that uses an enumerator, so we can call a "give me an item" method over and over again. This likely has more performance overhead than a true streaming solution, since there's a method call for every item, but it does get the job done, and it doesn't leak or accumulate memory.
We did follow Micky's suggestion of using named pipes, but still within WCF. We're also using named semaphores to coordinate the two processes, so we don't attempt to make service calls until the "child service" has finished starting up.
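To make the shape of that proxy layer concrete, here is a stripped-down sketch; the contract and type names are invented for illustration, with the service hosted in the 32-bit child over WCF's named-pipe binding as described above:

using System.Collections.Generic;
using System.Runtime.Serialization;
using System.ServiceModel;

[DataContract]
public class Item
{
    [DataMember] public string Message; // placeholder field for the example
}

[ServiceContract]
public interface ILogReader
{
    [OperationContract]
    bool MoveNext();   // advances the parser's enumerator inside the 32-bit child

    [OperationContract]
    Item Current();    // returns the item the enumerator is positioned on
}

public static class LogReaderProxy
{
    // 64-bit side: wrap the chatty one-item-at-a-time RPC back into the
    // same IEnumerable<Item> shape the in-process parsers expose
    public static IEnumerable<Item> ReadRemote(ILogReader reader)
    {
        while (reader.MoveNext())
            yield return reader.Current();
    }
}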

Looking to write my own application whitelisting tool, something like Bit9?

Playing around with project ideas that I might actually use, I figured I might try to write my own simple version of Bit9 Parity, in either C# or Python. My question is: what is the best way to go about doing this? I've googled .NET functionality for preventing processes from executing, but I haven't really found what I'm looking for. What I'd like to do is monitor the system as a whole and deny any process or application from starting unless it is specifically identified in a list. ProcessWatcher caught my eye, but isn't that for a specific process ID? How do I block ALL other processes from starting? Is this possible in .NET? What about Python?
This blog post (Using WMI to monitor process creation, deletion and modification in .NET) shows how to do that. With a few changes, you should be able to do exactly what you want.
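For reference, a minimal sketch of the WMI approach; note that this observes and reacts after a process starts rather than truly blocking it, which matches the "hook the OS" caveat in the next answer, and Win32_ProcessStartTrace generally requires administrator rights:

using System;
using System.Diagnostics;
using System.Management; // reference System.Management.dll

class ProcessWatcher
{
    static void Main()
    {
        var watcher = new ManagementEventWatcher(
            new WqlEventQuery("SELECT * FROM Win32_ProcessStartTrace"));
        watcher.EventArrived += (sender, e) =>
        {
            string name = (string)e.NewEvent.Properties["ProcessName"].Value;
            uint pid = (uint)e.NewEvent.Properties["ProcessID"].Value;
            Console.WriteLine("{0} started (pid {1})", name, pid);
            // a whitelisting tool would check 'name' against its allow-list here
            // and, if not approved, kill it: Process.GetProcessById((int)pid).Kill();
        };
        watcher.Start();
        Console.ReadLine(); // keep watching until Enter is pressed
        watcher.Stop();
    }
}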
How do I block ALL other processes from starting?
Deep, mysterious OS API magic. After all, you're interfering with how the OS works. You must, therefore, patch or hook into the OS itself.
Is this possible in .Net? What about python?
It doesn't involve time-travel, anti-gravity or perpetual motion. It can be done.
It's a matter of figuring out (1) which OS API calls are required to put your new hook into the OS, which is really hard, and (2) implementing a call from the OS to your code, which is really easy.
