How does Monitor.Enter work? [duplicate] - c#

I have a question about how Monitor.Enter works. I investigated the .NET Framework source code, and it shows only this:
[System.Security.SecurityCritical] // auto-generated
[ResourceExposure(ResourceScope.None)]
[MethodImplAttribute(MethodImplOptions.InternalCall)]
private static extern void ReliableEnter(Object obj, ref bool lockTaken);
I guess the Monitor.Enter implementation is platform dependent, so I browsed the Mono source code, and I gave up :(
Yes, a critical section assigned to each System.Object instance could work, but I don't think the actual Monitor.Enter is implemented like this, because creating a critical section for every System.Object would be unboundedly expensive. (Win32 does not allow billions of critical section objects in a process!)
Does anybody know how Monitor.Enter works? Thanks in advance.

Every object in .NET has two extra (hidden) overhead members:
A "type object pointer". This is just a reference to the Type instance of the object. In fact, you can "access" this by calling GetType().
A "sync block index". This is a native WORD size integral type which is an index into the CLR's internal array of "Sync Blocks".
This is how it keeps track of which objects are locked.
The Sync Block structure contains a field that can be marked for locking. Basically, when you lock an object, this field is switched on. When the lock is released, it is switched off (basically - I haven't looked at the SSCLI for long enough to delve deeper into how that sort of operation works - I believe it is based on EnterCriticalSection though..).
The MethodImplOptions.InternalCall argument passed to the attribute above means that the actual implementation of the method resides in the CLR itself, which is why you can't browse any further through the code.
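For context, the C# lock statement is compiler sugar over these Monitor calls. A minimal sketch of the expansion the C# 4+ compiler emits (the class and member names here are illustrative, not from the question):

```csharp
using System.Threading;

class Counter
{
    private readonly object _sync = new object();
    private int _count;

    public void Increment()
    {
        // Roughly what `lock (_sync) { _count++; }` compiles to:
        bool lockTaken = false;
        try
        {
            // This overload ultimately calls the ReliableEnter internal
            // method shown in the question.
            Monitor.Enter(_sync, ref lockTaken);
            _count++;
        }
        finally
        {
            if (lockTaken)
                Monitor.Exit(_sync);
        }
    }

    public int Count { get { lock (_sync) return _count; } }
}
```

The `ref bool lockTaken` pattern exists so the finally block releases the lock only if it was actually acquired, even if an asynchronous exception interrupts the acquire.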

Looking at the Mono source code, it seems that they create a Semaphore (using CreateSemaphore or a similar platform-specific function) when the object is first locked, and store it in the object. There also appears to be some object pooling going on with the semaphores and their associated MonoThreadsSync structures.
The relevant function is static inline gint32 mono_monitor_try_enter_internal (MonoObject *obj, guint32 ms, gboolean allow_interruption) in the file mono/metadata/monitor.c, in case you're interested.
I expect that Microsoft .Net does something similar.

Microsoft .NET - whenever possible - tries a spinlock on the thin lock structure in the object header. (Notice how one can "lock" on any object.)
Only if there is a need for it will an event handle be used from the pool or a new one allocated.

Related

Android - Bitmap.CreateBitmap - null pointer exception

Sometimes when I am trying to create blurred bitmap I am getting "Null Pointer Exception".
Happens in this block of code (I recently started catching the exception so at least it doesn't crash the app):
try
{
    using (Bitmap.Config config = Bitmap.Config.Rgb565)
    {
        return Bitmap.CreateBitmap (blurredBitmap, width, height, config);
    }
}
catch (Java.Lang.Exception exception)
{
    Util.Log(exception.ToString());
}
(Screenshots showing the parameters passed into the "CreateBitmap" method, with the parameter values expanded, were attached here.)
Full exception:
exception {Java.Lang.NullPointerException: Exception of type
'Java.Lang.NullPointerException' was thrown. at
System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw ()
[0x0000b] in
/Users/builder/data/lanes/2058/58099c53/source/mono/mcs/class/corlib/System.Runtime.ExceptionServices/ExceptionDispatchInfo.cs:61
at Android.Runtime.JNIEnv.CallStaticObjectMethod (IntPtr jclass,
IntPtr jmethod, Android.Runtime.JValue* parms) [0x00064] in
/Users/builder/data/lanes/2058/58099c53/source/monodroid/src/Mono.Android/src/Runtime/JNIEnv.g.cs:1301
at Android.Graphics.Bitmap.CreateBitmap (System.Int32[] colors, Int32
width, Int32 height, Android.Graphics.Config config) [0x00088] in
/Users/builder/data/lanes/2058/58099c53/source/monodroid/src/Mono.Android/platforms/android-22/src/generated/Android.Graphics.Bitmap.cs:735
at Psonar.Apps.Droid.PayPerPlay.StackBlur.GetBlurredBitmap
(Android.Graphics.Bitmap original, Int32 radius) [0x00375] in
d:\Dev\psonar\Source\Psonar.Apps\Psonar.Apps.Droid\Psonar.Apps.Droid.PayPerPlay\Utilities\StackBlur.cs:123
--- End of managed exception stack trace --- java.lang.NullPointerException at
android.graphics.Bitmap.createBitmap(Bitmap.java:687) at
android.graphics.Bitmap.createBitmap(Bitmap.java:707) at
dalvik.system.NativeStart.run(Native Method)
} Java.Lang.NullPointerException
Not sure if this could be a bug in Xamarin or the passed parameters are wrong.
I got a reply from one of the members of the Xamarin team - Jonathan Pryor:
The NullPointerException is coming from Java code:
at android.graphics.Bitmap.createBitmap(Bitmap.java:687)
at android.graphics.Bitmap.createBitmap(Bitmap.java:707)
at dalvik.system.NativeStart.run(Native Method)
Looking quickly at a variety of releases, Jelly Bean might fit:
https://github.com/android/platform_frameworks_base/blob/jb-release/graphics/java/android/graphics/Bitmap.java#L687
return nativeCreate(colors, offset, stride, width, height,
config.nativeInt, false);
A quick glance at the surrounding method body shows that config
isn't checked for null-ness, so if null is passed this would result
in a NullPointerException.
The problem, though, is that you're not passing null:
using (Bitmap.Config config = Bitmap.Config.Rgb565) {
return Bitmap.CreateBitmap (blurredBitmap, width, height, config);
}
...or are you?
I would suggest that you remove the using block:
return Bitmap.CreateBitmap (blurredBitmap, width, height,
Bitmap.Config.Rgb565);
Here's what I think may be happening, but first, a digression:
Deep in the core of Xamarin.Android there is a mapping between Java
objects and their corresponding C# wrapper objects. Constructor
invocation, Java.Lang.Object.GetObject(), etc. will create
mappings; Java.Lang.Object.Dispose() will remove mappings.
A core part of this is Object Identity: when a Java instance is
exposed to C# code and a C# wrapper is created, the same C# wrapper
instance should continue to be reused for that Java instance.
An implicit consequence of this is that any instance is effectively
global, because if multiple code paths/threads/etc. obtain a JNI handle to the same Java instance, they'll get the same C# wrapper.
Which brings us back to my hypothesis, and your code block:
Bitmap.Config is a Java enum, meaning each member is a Java object.
Furthermore, they're global values, so every thread has access to
those members, meaning the C# Bitmap.Config.Rgb565 instance is
effectively a global variable.
A global variable that you're Dispose()ing.
Which is "fine", in that the next time Bitmap.Config.Rgb565 is
accessed, a new wrapper will be created.
The problem, though, is if you have multiple threads accessing
Bitmap.Config.Rgb565 at the ~same time, each of which is trying to
Dispose() of an instance. At which point it's entirely plausible that
the two threads may reference the same wrapper instance, and the
Dispose() from one thread will thus INVALIDATE the instance used by
the other thread.
Which would result in null being passed to the Bitmap.createBitmap()
call, which is precisely what you're observing.
Please try removing the using block, and see if that helps matters.
The whole thread is accessible here.
Then I asked:
Jonathan Pryor - cheers for the suggestion. My question is that if I
remove the using statement, will it introduce a memory leak? Meaning
if I stop disposing the new config instance?
He replied:
That's what GC's are for!
(insert coughing and laughter here.)
We can quibble quite a bit. I'll argue that it is NOT a memory leak,
because the memory is well rooted and well known; constantly accessing
the Bitmap.Config.Rgb565 will return the previously created instance,
not constantly create new instances. There is no "leak," as such.
I'll instead argue that the instance, and the underlying GREF, is a
"tax"; it's "burnt", part of the cost of doing business. While it
would be "nice" to minimize these costs, it isn't practical to remove
all of them (e.g. we "lose" a GREF per class via .class_ref,
which is used to lookup method IDs...), at least not with the current
architecture.
(I also can't think of an alternate architecture that would result in
different costs/"taxes". While I do have some thoughts on allowing
things to be improved for some areas, they're not huge.)
I would suggest not worrying about Bitmap.Config.Rgb565 and similar
members too much, unless/until the profiler or GREF counts show
otherwise.

Why can I lock on any object type in C#?

Can someone explain, in detail, why it's possible to lock on objects of any type in C#?
I understand what lock is for and how to use it. I know how it expands to Monitor.Enter/Exit. What I'm looking for is an explanation of the implementation details and design considerations.
First of all: What is happening under the hood? For example: are there extra bits in an object instance (like there is for RTTI/vtable) that make it work? Or some kind of lookup table keyed on object references? (If so, how does this interact with the GC?) Or something else? Why don't I have to create an instance of a specific type to hold whatever the lock data is?
(And, by the way, what do Enter and Exit map to in native code?)
And, secondly, why is .NET designed to not have a specific type for taking out locks on? (Given that you generally just make a new object() for the purpose anyway - and most cases where you lock "any old object" are problematic.) Was this design choice forced by the implementation detail? Or was it deliberate? And, if deliberate, was it a good choice? (I realise this second part may require speculation.)
It is possible to lock on any reference (non-struct) type. In the layout of each reference type on the heap there is a special field (the sync block index) that is used to manage the lock. The layout is covered in detail in How CLR Creates Runtime Object. An excerpt from the article is below:
The OBJECTREF does not point to the beginning of the Object Instance but at a DWORD offset (4 bytes). The DWORD is called Object Header and holds an index (a 1-based syncblk number) into a SyncTableEntry table.
Object layout on the heap:
sync block index
pointer to type
fields...
Speculation part: I believe the original guidance was to lock on whatever was convenient, but it was relatively quickly changed to having a dedicated "private object for locking", because it is too easy for external code to deadlock your methods otherwise. I think there were even classes in the Framework that were implemented by locking on publicly visible objects...
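The "private object for locking" guidance can be sketched like this (the class and member names are illustrative, not from any framework):

```csharp
using System.Collections.Generic;

public class SafeCache
{
    // Private, readonly lock object: external code cannot take this lock,
    // so outside callers can't deadlock this class's methods.
    private readonly object _gate = new object();
    private readonly Dictionary<string, string> _items =
        new Dictionary<string, string>();

    public void Add(string key, string value)
    {
        lock (_gate) { _items[key] = value; }
    }

    public bool TryGet(string key, out string value)
    {
        lock (_gate) { return _items.TryGetValue(key, out value); }
    }

    // By contrast, `lock (this)` would let any caller holding a reference
    // take the same lock and block or deadlock the class from outside.
}
```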

How to make the JIT extend stack variables to the end of scope (GC is too quick)

We're dealing with the GC being too quick in a .Net program.
Because we use a class with native resources and do not call GC.KeepAlive(), the GC collects the object before the native access ends. As a result the program crashes.
We have exactly the problem as described here:
Does the .NET garbage collector perform predictive analysis of code?
Like so:
{
    var img = new ImageWithNativePtr();
    IntPtr p = img.GetData();
    // DANGER!
    ProcessData(p);
}
The point is: the JIT generates information that tells the GC that img is no longer used once GetData() has run. If a GC thread comes along at the right time, it collects img and the program crashes. One can fix this by appending GC.KeepAlive(img);
Unfortunately there is already too much code written (at too many places) to rectify the issue easily.
Therefore: is there, for example, an attribute (e.g. for ImageWithNativePtr) to make the JIT behave as in a Debug build? In a Debug build, the variable img will remain valid until the end of the scope ( } ), while in Release it loses validity at the comment DANGER.
To the best of my knowledge there is no way to control the jitter's behavior based on what types a method references. You might be able to attribute the method itself, but that won't fill your order. This being so, you should bite the bullet and rewrite the code. GC.KeepAlive is one option. Another is to make GetData return a safe handle that contains a reference to the object, and have ProcessData accept the handle instead of an IntPtr; that's good practice anyway. The GC will then keep the safe handle around until the method returns. If most of your code uses var instead of IntPtr as in your code fragment, you might even get away without modifying each method.
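A hedged sketch of the safe-handle option. ImageWithNativePtr comes from the question; ImageDataHandle is a hypothetical wrapper name, and the ownership assumptions are invented for illustration:

```csharp
using System;
using System.Runtime.InteropServices;

// Hypothetical wrapper: it holds a strong reference back to the owning
// image object, so the GC cannot collect the image while the handle is
// reachable (e.g. for the duration of a ProcessData(handle) call).
sealed class ImageDataHandle : SafeHandle
{
    private readonly object _owner; // keeps the image alive

    public ImageDataHandle(object owner, IntPtr data)
        : base(IntPtr.Zero, ownsHandle: false) // memory stays owned by the image
    {
        _owner = owner;
        SetHandle(data);
    }

    public override bool IsInvalid => handle == IntPtr.Zero;

    // Nothing to free here: the image object itself owns the native memory.
    protected override bool ReleaseHandle() => true;
}
```

GetData() would then return `new ImageDataHandle(this, rawPointer)`, and ProcessData would take an ImageDataHandle parameter instead of a raw IntPtr.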
You have a few options.
(Requires work, more correct) - Implement IDisposable on your ImageWithNativePtr class as it compiles down to try { ... } finally { object.Dispose() }, which will keep the object alive provided you update your code with usings. You can ease the pain of doing this by installing something like CodeRush (even the free Xpress supports this) - which supports creating using blocks.
(Easier, not correct, more complicated build) - Use Post Sharp or Mono.Cecil and create your own attribute. Typically this attribute would cause GC.KeepAlive() to be inserted into these methods.
The CLR has nothing built in for this functionality.
I believe you can emulate what you want with a container that implements IDisposable, and a using statement. The using statement lets you define the scope, and you can place anything in it that needs to stay alive over that scope. This can be a helpful mechanism if you have no control over the implementation of ImageWithNativePtr.
The container of things to dispose is a useful idiom. Particularly when you really should be disposing of something ... which is probably the case with an image.
using(var keepAliveContainer = new KeepAliveContainer())
{
var img = new ImageWithNativePtr();
keepAliveContainer.Add(img);
IntPtr p = img.GetData();
ProcessData(p);
// anything added to the container is still referenced here.
}
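KeepAliveContainer above is not a framework type; here is a minimal sketch of how it could be implemented, under the assumption that it simply holds strong references until the end of the using block and disposes anything disposable:

```csharp
using System;
using System.Collections.Generic;

// Hypothetical helper: everything added stays strongly referenced (and thus
// ineligible for collection) until Dispose runs at the end of the using block.
sealed class KeepAliveContainer : IDisposable
{
    private readonly List<object> _refs = new List<object>();

    public void Add(object o) => _refs.Add(o);

    public int Count => _refs.Count;

    public void Dispose()
    {
        // Dispose anything disposable, then drop the references.
        foreach (var o in _refs)
            (o as IDisposable)?.Dispose();
        _refs.Clear();
    }
}
```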

How to create a memory leak in C# / .NET [duplicate]

This question already has answers here:
Closed 11 years ago.
Possible Duplicates:
Is it possible to have a memory leak in managed code? (specifically C# 3.0)
Memory Leak in C#
There was a similar question on this yesterday, but for Java, so I'm interested: what does it take to create a memory leak in C# / .NET (without using unsafe)?
static events; DEADLY, since they never go out of scope.
static event EventHandler Evil;
for(int i = 0 ; i < 1000000 ; i++)
Evil += delegate {};
The anonymous method is simply a nice-to-have here, but anonymous methods are worth noting because they are also a pig to unsubscribe unless you take a copy into a variable/field and subscribe that.
Technically this isn't actually "leaked", as you can still access them via Evil.GetInvocationList() - however, when used with regular objects this can cause unexpected object lifetimes, i.e.
MyHeavyObject obj = ...
...
SomeType.SomeStaticEvent += obj.SomeMethod;
now the object at obj lives forever. This satisfies enough of a perceived leak IMO, and "my app died a horrible death" is good enough for me ;p
When an object subscribes to an event, the object exposing the event maintains a reference to the subscriber (strictly, the event's MulticastDelegate holds it, but the effect carries through). This reference will prevent the subscriber from being GC'd even after the last other reference to it (besides the one maintained by the event) goes out of scope.
The other two answers have this situation completely backward and are incorrect. This is a little tricky to show in a simple example and is typically seen in larger projects, but just remember that a reference to the subscriber is maintained by the MulticastDelegate (the event) and you should be able to think it through.
EDIT: As Marc mentions in his response, you could technically get a reference to the "leaked" object via the GetInvocationList() method, but your code is unlikely to be using that, and it won't matter when you crash with an OutOfMemoryException.
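To make the lifetime effect concrete, here is a small hedged sketch (all type and member names invented) of a static event rooting its subscriber, and the unsubscribe that restores collectability:

```csharp
using System;

static class Publisher
{
    // Static event: rooted for the lifetime of the AppDomain, so it keeps
    // every subscriber reachable until that subscriber unsubscribes.
    public static event EventHandler SomeStaticEvent;

    public static int SubscriberCount =>
        SomeStaticEvent?.GetInvocationList().Length ?? 0;
}

class HeavySubscriber
{
    public void OnEvent(object sender, EventArgs e) { /* ... */ }
}

static class Demo
{
    public static void Run()
    {
        var heavy = new HeavySubscriber();
        Publisher.SomeStaticEvent += heavy.OnEvent; // event now roots `heavy`
        // Even if no other reference to `heavy` exists, it cannot be
        // collected while the subscription is in place.

        Publisher.SomeStaticEvent -= heavy.OnEvent; // collectable again
    }
}
```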
Direct memory access is abstracted away from safe managed code. There has to be a call to unsafe code somewhere in order to induce leaking of memory, either in code you write or in a 3rd party resource (possibly within the FCL) with a memory leak bug.

Name for this pattern? (Answer: lazy initialization with double-checked locking)

Consider the following code:
public class Foo
{
    private static object _lock = new object();

    public void NameDoesNotMatter()
    {
        if (SomeDataDoesNotExist())
        {
            lock (_lock)
            {
                if (SomeDataDoesNotExist())
                {
                    CreateSomeData();
                }
                else
                {
                    // someone else also noticed the lack of data. We
                    // both contended for the lock. The other guy won
                    // and created the data, so we no longer need to.
                    // But once he got out of the lock, we got in.
                    // There's nothing left to do.
                }
            }
        }
    }

    private bool SomeDataDoesNotExist()
    {
        // Note - this method must be thread-safe.
        throw new NotImplementedException();
    }

    private bool CreateSomeData()
    {
        // Note - This shouldn't need to be thread-safe
        throw new NotImplementedException();
    }
}
First, there are some assumptions I need to state:
There is a good reason I couldn't just do this once an app startup. Maybe the data wasn't available yet, etc.
Foo may be instantiated and used concurrently from two or more threads. I want one of them to end up creating some data (but not both of them) then I'll allow both to access that same data (ignore thread safety of accessing the data)
The cost to SomeDataDoesNotExist() is not huge.
Now, this doesn't necessarily have to be confined to some data creation situation, but this was an example I could think of.
The part that I'm especially interested in identifying as a pattern is the check -> lock -> check. I've had to explain this pattern to developers on a few occasions who didn't get the algorithm at first glance but could then appreciate it.
Anyway, other people must do similarly. Is this a standardized pattern? What's it called?
Though I can see how you might think this looks like double-checked locking, what it actually looks like is dangerously broken and incorrect double-checked locking. Without an actual implementation of SomeDataDoesNotExist and CreateSomeData to critique we have no guarantee whatsoever that this thing is actually threadsafe on every processor.
For an example of an analysis of how double-checked locking can go wrong, check out this broken and incorrect version of double-checked locking:
C# manual lock/unlock
My advice: don't use any low-lock technique without a compelling reason and a code review from an expert on the memory model; you'll probably get it wrong. Most people do.
In particular, don't use double-checked locking unless you can describe exactly what memory access reorderings the processors can do on your behalf and provide a convincing argument that your solution is correct given any possible memory access reordering. The moment you step away even slightly from a known-to-be-correct implementation, you need to start the analysis over from scratch. You can't assume that just because one implementation of double-checked locking is correct, that they all are; almost none of them are correct.
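In line with this advice, the usual way to avoid hand-rolling the pattern in .NET 4+ is Lazy&lt;T&gt;, whose default thread-safety mode guarantees the factory runs at most once with correct memory ordering. A minimal sketch (ExpensiveData is an invented name):

```csharp
using System;

public class ExpensiveData
{
    public ExpensiveData()
    {
        // Runs at most once, even under concurrent first access.
    }
}

public class Foo
{
    // The default LazyThreadSafetyMode.ExecutionAndPublication mode gives
    // the same check -> lock -> check semantics, implemented correctly.
    private static readonly Lazy<ExpensiveData> _data =
        new Lazy<ExpensiveData>(() => new ExpensiveData());

    public ExpensiveData Data => _data.Value;
}
```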
Lazy initialization with double-checked locking?
The part that I'm especially interested in identifying as a pattern is the check -> lock -> check.
That is called double-checked locking.
Beware that in older Java versions (before Java 5) it is not safe because of how Java's memory model was defined. In Java 5 and newer changes were made to the specification of Java's memory model so that it is now safe.
The only name that comes to mind for this kind of thing is "faulting". The name is used in the iOS Core Data framework to similar effect.
Basically, your method NameDoesNotMatter is a fault, and whenever someone invokes it, it causes the object to get populated or initialized.
See http://developer.apple.com/library/mac/#documentation/Cocoa/Conceptual/CoreData/Articles/cdFaultingUniquing.html for more details on how this design pattern is used.
