FatalExecutionEngineError when calling a function from another assembly - C#

My question is why the described solution to the problem below works.
Intro: I have a small C++ server application that does its business logic in C#. I use DllExport for the (reverse) P/Invoke. All is peachy: everything is fast and stable, with no leaks or other issues.
The C# side is composed of several assemblies (all .NET 4.7.2, x64, as is the native side), but I keep the exports in only one of them.
Problem: One part of development led me to call a trivial public static utility function from another assembly that contains only a single file with a single static class of about 15 helper functions, plus a single interface:
public interface ISerializable
{
    BinaryWriter serial(BinaryWriter w);
    //ISerializable serial(BinaryReader w); // itself
}
When the debugger hits the function, it immediately throws a NullReferenceException, and when I hit Continue, it throws a FatalExecutionEngineError. The exception happens as the function is called, before anything else executes; in the disassembly view, it happens on the CALL instruction. Again, there were no stack issues or anything else untoward.
Things I tried:
Being paranoid about eliminating mistakes on my part, I created a clean P/Invoke export that internally only called a simple do-nothing test function from the same class.
It crashed immediately.
When I moved this test function to a new static class in the same assembly, it worked...
What worked: Out of desperation, I moved the interface to a separate static class and everything started to work.
Obviously this is cargo-cult coding, and I would like to understand how to debug the issue. How, in theory, could DllExport/P/Invoke cause this bug? Has anyone seen anything similar?

Related

How does C# know when to run a static constructor?

I don't believe the generated code checks whether the class has been initialized every time it accesses a static member (which includes methods); checking on every access would be inefficient. I looked at §17.11 in ECMA-334, and it says:
The execution of a static constructor is triggered by the first of
the following events to occur within an application domain:
An instance of the class is created.
Any of the static members of the class are referenced.
It looks like exactly how to detect that 'first' occurrence is left undefined. I can't think of any way to do it other than checking every time. How might it be done?
When you have a problem to solve, a good technique is to solve an even harder problem, such that the solution to the harder problem also solves your small problem.
The CLR has a much harder problem to solve: it has to run the jitter exactly once on every method right before the method is called for the first time. If the CLR can solve that problem, then it can obviously solve the comparatively trivial sub-problem of detecting when a static ctor needs to run.
Perhaps your question should then be "how does the jitter know when to jit a method for the first time?"
When you are generating code at runtime, you have lots of options. You can call a NULL function pointer, catch the access violation, run the static constructor, compile the property getter, update the function pointer, and continue. Or have the property getter call a helper function that runs the static constructor and rewrites the getter code without the helper function call. Or insert a check on every static member access, that when hit recompiles the calling function with the check removed.
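You can observe the lazy trigger directly with a type that declares an explicit static constructor. A minimal sketch (with the caveat that types without an explicit static constructor are marked beforefieldinit, and the runtime is then permitted to initialize them earlier than the first access):

using System;

static class WithCctor
{
    public static readonly int Value;

    // Declaring an explicit static constructor removes the beforefieldinit
    // flag, so initialization happens exactly at the first static member access.
    static WithCctor()
    {
        Console.WriteLine("static constructor ran");
        Value = 42;
    }
}

static class Demo
{
    static void Main()
    {
        Console.WriteLine("before first access");
        Console.WriteLine(WithCctor.Value); // triggers the cctor, then prints 42
        Console.WriteLine(WithCctor.Value); // the cctor does not run a second time
    }
}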

What exactly happens during a "managed-to-native transition"?

I understand that the CLR needs to do marshaling in some cases, but let's say I have:
using System.Runtime.InteropServices;
using System.Security;

[SuppressUnmanagedCodeSecurity]
static class Program
{
    [DllImport("kernel32.dll", SetLastError = false)]
    static extern int GetVersion();

    static void Main()
    {
        for (; ; )
            GetVersion();
    }
}
When I break into this program with a debugger, I always see a [Managed to Native Transition] frame in the call stack.
Given that there is no marshaling that needs to be done (right?), could someone please explain what's actually happening in this "managed-to-native transition", and why it is necessary?
First the call stack needs to be set up so that a STDCALL can happen. This is the calling convention for Win32.
Next the runtime will push a so-called execution frame. There are many different types of frames: security asserts, GC-protected regions, native code calls, and so on.
The runtime uses such a frame to track that native code is currently running. This has implications for a potentially concurrent garbage collection, and probably for other things; it also helps the debugger.
So not a lot is actually happening here; it is a pretty slim code path.
Besides the marshaling layer, which is responsible for converting parameters for you and figuring out calling conventions, the runtime needs to do a few other things to keep internal state consistent.
The security context needs to be checked, to make sure the calling code is allowed to access native methods. The current managed stack frame needs to be saved, so that the runtime can do a stack walk back for things like debugging and exception handling (not to mention native code that calls into a managed callback). Internal bits of state need to be set to indicate that we're currently running native code.
Additionally, registers may need to be saved, depending on what needs to be tracked and which are guaranteed to be restored by the calling convention. GC roots that are in registers (locals) might need to be marked in some way so that they don't get garbage collected during the native method.
So mainly it's stack handling and type marshaling, with some security stuff thrown in. Though it's not a huge amount of stuff, it will represent a significant barrier against calling smaller native methods. For example, trying to P/Invoke into an optimized math library rarely results in a performance win, since the overhead is enough to negate any of the potential benefits. Some performance profiling results are discussed here.
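As a rough illustration of that barrier, here is a micro-benchmark sketch. GetTickCount is my choice, not something from the question; it takes no arguments and returns a blittable uint, so the measured cost is essentially the transition itself plus a trivial amount of native work:

using System;
using System.Diagnostics;
using System.Runtime.InteropServices;

static class TransitionCost
{
    [DllImport("kernel32.dll")]
    static extern uint GetTickCount(); // uint maps directly to DWORD, no marshaling

    static void Main()
    {
        const int N = 10_000_000;
        GetTickCount(); // warm-up, so the interop stub is generated before timing

        var sw = Stopwatch.StartNew();
        for (int i = 0; i < N; i++)
            GetTickCount();
        sw.Stop();

        // Typically lands in the low tens of nanoseconds per call.
        Console.WriteLine("{0:F1} ns per P/Invoke call",
                          sw.Elapsed.TotalMilliseconds * 1_000_000.0 / N);
    }
}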
I realise that this has been answered, but I'm surprised no one has suggested showing the external code in the debug window. If you right-click the [Managed to Native Transition] line and tick the Show External Code option, you will see exactly which .NET methods are being called in the transition, which may give you a better idea.
I can't really see much that would need to happen there. I suspect the marker is mainly informative: it indicates that part of your call stack contains native functions, and that the IDE and debugger may behave differently across that transition (managed code is handled very differently in the debugger, and some features you expect may not work).
But I guess you should be able to find out simply by inspecting the disassembly around the transition. See if it does anything unusual.
Since you are calling a DLL, execution needs to leave the managed environment and enter the Windows core. You are crossing the .NET boundary into Windows code that does not run the same way .NET code does.

Weird crash when debugging COM object destructor

My application is a mix of C# and C++ code. The startup module, written in C#, loads a C++ module through COM (Component Object Model) during the initialization phase. Everything functioned correctly until I decided to add a WCF service to the C# part. All WCF service calls are routed to the C++ code using COM. After adding some new methods, I noticed memory leaks in the output window, so I added a breakpoint to the destructor of the C++ class. From that point on, weird things started to happen. After the program reaches the breakpoint, it unexpectedly crashes. The first weird thing is that when I run the program without the breakpoint set, it ends gracefully. The second weird thing is that the program crashes as if it were running without a debugger: after clicking the "Open in debugger" button (or something like it), I get the error message "Program is already opened under debugger." There is no message in the output window that could point me to the source of the error, and no suspicious code.
When I add a message box at the beginning of the destructor, it displays for a fraction of a second and then the whole application closes (without giving the user a chance to read what the message box shows). I am desperately searching for any clue.
P.S. The problem occurs only when a WCF method has been called at least once. It does not depend on whether the program flow in that particular call was routed down to the C++ level or not.
When C# is called from C++, the garbage collector sometimes does not get a chance to run before the program ends. Try forcing a garbage collection at the end of your C# code.
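The standard idiom for that (generic advice, not code from the question) is:

// Run finalizers for dead RCWs so they release their COM references
// before the process tears down the unmanaged side.
GC.Collect();
GC.WaitForPendingFinalizers();
GC.Collect(); // collect anything that the finalizers themselves freed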
Resolved by the following code:
public void Dispose()
{
    Marshal.Release(internal_interface_ptr);
    internal_interface_ptr = IntPtr.Zero;
    // ReleaseComObject decrements the RCW's reference count by one,
    // so it is called twice here because two references were outstanding.
    Marshal.ReleaseComObject(internal_interface);
    Marshal.ReleaseComObject(internal_interface);
    internal_interface = null;
}
Besides this, one other reference was still being held in the C++ code. So, to conclude: the main mistake on my part was forgetting to explicitly release the COM object in the C# code. Even though the garbage collector takes on the task of managing memory, that is not true for modules written in other languages. The COM destructor was called very late, when the particular dynamic link library was about to be unloaded from memory, and this caused the problems. I hope I explained it sufficiently clearly.
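For what it's worth, here is a sketch of the same Dispose using Marshal.FinalReleaseComObject, which sets the RCW's reference count to zero in a single call instead of calling ReleaseComObject once per outstanding reference (field names are taken from the snippet above):

public void Dispose()
{
    if (internal_interface_ptr != IntPtr.Zero)
    {
        Marshal.Release(internal_interface_ptr); // release the raw interface pointer
        internal_interface_ptr = IntPtr.Zero;
    }
    if (internal_interface != null)
    {
        Marshal.FinalReleaseComObject(internal_interface); // drop all RCW references at once
        internal_interface = null;
    }
}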

Diagnose/Debug potential stack corruption .NET application

I think I have a curly one here... I have a WinForms application that crashes fairly regularly, every hour or so, when running as an x64 process. I suspect this is due to stack corruption and would like to know if anyone has seen a similar issue or has advice for diagnosing and detecting it.
The program in question has no visible UI. It's just a message window that sits in the background and acts as a sort of 'middleware' between our other client programs and a server.
It dies in different ways on different machines. Sometimes it's an 'APPCRASH' dialog that reports a fault in ntdll.dll. Sometimes it's an 'APPCRASH' that reports our own dll as the culprit. Sometimes it's just a silent death. Sometimes our unhandled exception hook logs the error, sometimes it doesn't.
In the cases where Windows Error Reporting kicks in, I've examined memory dumps from several different crash scenarios and found the same managed exception in memory each time. This is the same exception reported as unhandled in the cases where it gets logged before the process dies.
I've also been lucky (?) enough to have the application crash while I was actively debugging with Visual Studio - and saw that same exception take down the program.
Now here's the kicker. This particular exception was thrown, caught and swallowed in the first few seconds of the program's life. I have verified this with additional trace logging and I have taken memory dumps of the application a couple of minutes after application startup and verified that exception is still sitting there in the heap somewhere. I've also run a memory profiler over the application and used that to verify that no other .NET object had a reference to it.
The code in question looks a bit like this (vastly simplified, but it preserves the key points of flow control):
public class AClass
{
    public object FindAThing(string key)
    {
        object retVal = null;
        Collection<Place> places = GetPlaces();
        foreach (Place place in places)
        {
            try
            {
                retVal = place.FindThing(key);
                break;
            }
            catch { } // Guaranteed to only be a 'NotFound' exception
        }
        return retVal;
    }
}

public class Place
{
    public object FindThing(string key)
    {
        bool found = InternalContains(key); // <snip> some complex if/else logic
        if (found)
            return InternalFetch(key);
        throw new NotFoundException(/*UsefulInfo*/);
    }
}
The stack trace I see, both in the event log and when looking at the heap with WinDbg, looks a bit like this:
Company.NotFoundException:
    Place.FindThing()
    AClass.FindAThing()
Now... to me that reeks of something like stack corruption. The exception is thrown and caught while the application is starting up. But the pointer to it survives on the stack for an hour or more, like a bullet in the brain, and then suddenly breaches a crucial artery, and the application dies in a puddle.
Extra clues:
The code within 'InternalFetch' uses some Marshal.AllocCoTaskMem/FreeCoTaskMem and P/Invoke code. I have run FxCop over it looking for portability issues and found nothing.
This particular manifestation of the issue only affects x64 code built in release mode (with code optimization on). The code I listed for the 'Place.FindThing' method reflects the optimized .NET code; the unoptimized code returns the found object as the last statement rather than throwing the exception.
We make some COM calls during startup before the above code is run... and in a scenario where the above problem is going to manifest, the very first COM call fails. (Exception is caught and swallowed). I have commented out that particular COM call, and it does not stop the exception sticking around on the heap.
The problem might also affect 32-bit systems, but if it does, it does not manifest in the same spot. I was only sent (typical users!) a few pixels' worth of a screenshot of an 'APPCRASH' dialog, but the one thing I could make out was 'StackHash_2264' in the faulting-module field.
EDIT:
Breakthrough!
I have narrowed down the problem to a particular call to SetTimer.
The P/Invoke looks like this:
[DllImport("user32")]
internal static extern IntPtr SetTimer(IntPtr hwnd, IntPtr nIDEvent, int uElapse, TimerProc CB);
internal delegate void TimerProc(IntPtr hWnd, uint nMsg, IntPtr nIDEvent, int dwTime);
There is a particular class that starts a timer in its constructor. Any timers set before that object is constructed work. Any timers set after that object is constructed work. Any timer set during that constructor causes the application to crash, more often than not. (I have a laptop that crashes maybe 95% of the time, but my desktop only crashes 10% of the time).
Whether the interval is set to 1 hour or 1 second seems to make no difference. The application dies when the timer is due, usually by throwing some previously handled exception as described above; the callback never actually executes. If I set the same timer on the very next line of managed code after the constructor returns, all is fine and happy.
I have had a debugger attached when the bad timer was about to fire, and it caused an access violation in DispatchMessage; the timer callback was never called. I have enabled the MDAs that relate to managed callbacks being garbage collected, and they aren't triggering. I have examined the objects with SOS and verified that the callback still existed in memory and that the address it pointed to was the correct callback function.
If I run '!analyze -v' at this point, it usually (but not always) reports something along the lines of 'ERROR_SXS_CORRUPT_ACTIVATION_STACK'
Replacing the call to SetTimer with Microsoft's System.Windows.Forms.Timer class also stops the crash. I've used Reflector on that class and can see that internally it still calls SetTimer, but it does not register a procedure; instead, it has a native window that receives the callback. Its P/Invoke definition actually looks wrong: it uses 'int' for the event ID where the MSDN documentation says it should be a UIntPtr.
Our own code originally also used 'int' for nIDEvent rather than IntPtr. I changed it during the course of this investigation, but the crash continued both before and after the declaration change. So the only real difference I can see is that we register a callback and the Windows Forms class does not.
So at this stage I can 'fix' the problem by moving one particular call to SetTimer to a slightly different spot. But I am still no closer to understanding what is so special about starting the timer within that constructor, and I would dearly like to understand the root cause of this issue.
Thinking about it briefly, it sounds like an x64 interop issue (calling 32-bit native functions from x64 managed code is fraught with danger). Does the problem go away if you force your application to compile for the x86 platform in the project properties?
You can hear suggestions on forcing x86 compilation during x86/x64 development on .NET Rocks; Richard Campbell's suggestion is that Visual Studio should default to the x86 platform rather than AnyCPU.
http://www.dotnetrocks.com/default.aspx?showNum=341 (transcript).
With regard to advanced debugging, I have not had a chance to debug x64 interop code, but I hear this book is a great resource: Advanced .NET Debugging.
Finally, one thing you might try is forcing Visual Studio to break when an exception is thrown.
Use something like DebugDiag for x64, or WinDbg, to write a dump on kernel32!TerminateProcess and on second-chance .NET exceptions; that should give you the actual exception context record (.ecxr) of the exception that occurred.
This should help you identify the call stack leading to the process termination.
IMO it is most likely caused by the P/Invoke calls. You could use Managed Debugging Assistants (MDAs) to debug these issues; when an MDA is used along with WinDbg, it emits messages that are helpful in debugging.
I have also found the tools from the CLR interop team at http://clrinterop.codeplex.com/ extremely handy when dealing with interop.
EDIT
This should explain why it is not working in 64-bit: Issue with callback method in SetTimer Windows API called from C# code.
This does sound like a corruption issue. I would go through all of your interop calls and ensure that all of the parameters to the DllImport'ed functions have the correct types. For example, using an int in place of an IntPtr will work in 32-bit code but can crash in 64-bit code.
I would use a site like PInvoke.net to verify all of the signatures.
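To make the width problem concrete, here is a sketch of a SetTimer declaration with pointer-sized parameters matching the native signature, UINT_PTR SetTimer(HWND, UINT_PTR, UINT, TIMERPROC), plus the delegate keep-alive that reverse callbacks generally need. The NativeTimer class, Start helper, and field names are mine, for illustration only:

using System;
using System.Runtime.InteropServices;

static class NativeTimer
{
    // nIDEvent and the return value are UINT_PTR: 4 bytes on x86, 8 on x64.
    // Declaring them as int happens to work in 32-bit processes only.
    [DllImport("user32.dll")]
    static extern UIntPtr SetTimer(IntPtr hWnd, UIntPtr nIDEvent, uint uElapse, TimerProc lpTimerFunc);

    // Matches TIMERPROC: void CALLBACK (HWND, UINT, UINT_PTR, DWORD)
    internal delegate void TimerProc(IntPtr hWnd, uint uMsg, UIntPtr nIDEvent, uint dwTime);

    // The native side stores only a raw function pointer and does not keep the
    // delegate alive; without a rooted reference, the GC may collect it while
    // the timer is still pending.
    static TimerProc _callback;

    internal static void Start(IntPtr hwnd, uint milliseconds)
    {
        _callback = (h, msg, id, time) => Console.WriteLine("timer fired");
        SetTimer(hwnd, UIntPtr.Zero, milliseconds, _callback);
    }
}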

Calling a C# .dll from native Visual C++ code

The system I'm working with consists of:
A front-end application, most likely written in VB or VC++ (I don't know; I don't have and can't get the sources for it)
An unmanaged VC++ .dll
A C# .dll
The application calls the first DLL, and the first DLL calls different methods from the second one.
In order to make the first DLL able to see and call the C# code, I followed this guide:
http://support.microsoft.com/kb/828736
The only difference is that I am not compiling with /clr:OldSyntax; if I do, changing the other dependent compiler options makes the first DLL load incorrectly from the application.
Everything compiles smoothly, and the whole setup even worked fine initially. However, after fully developing my code across the two DLLs, I now get an error in the application. The error is:
Run-time error '-2147417848 (80010108)':
Automation Error
The object invoked has disconnected from its clients.
It occurs when the following line is executed in the first DLL:
MyManagedInterfacePtr ptrName(__uuidof(MyManagedClass));
I tried reproducing a fully working setup, but without success.
Any ideas on how the heck I managed to do it in the first place?
Or, alternatively, on other approaches for making the two DLLs work together?
Thanks in advance!
It is a low-level COM error associated with RPC. That normally comes up with out-of-process servers, which doesn't sound like your setup, but it also occurs when you make calls on a COM interface from another thread. One possible cause is that the thread that created the COM object was allowed to exit, calling CoUninitialize and tearing down the COM object; a subsequent call made from another thread would then generate this error. Getting the reference counting wrong (calling Release too often) could cause this too.
Tackle this by carefully tracing which threads create a COM object and how long they survive.
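A hypothetical repro of the apartment-teardown cause described above; the ProgID is a placeholder, and any apartment-threaded COM server would behave the same way:

using System;
using System.Threading;

class ComLifetimeDemo
{
    static object comObject;

    static void Main()
    {
        var t = new Thread(() =>
        {
            // The object is created in this thread's STA apartment.
            Type comType = Type.GetTypeFromProgID("Some.ComServer"); // placeholder ProgID
            comObject = Activator.CreateInstance(comType);
        });
        t.SetApartmentState(ApartmentState.STA);
        t.Start();
        t.Join(); // thread exits -> CoUninitialize -> the object's apartment is torn down

        // Any later call through comObject can now fail with 0x80010108
        // (RPC_E_DISCONNECTED): "The object invoked has disconnected from its clients."
    }
}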
