Sometimes when I am trying to create a blurred bitmap I get a "Null Pointer Exception".
It happens in this block of code (I recently started catching the exception, so at least it doesn't crash the app):
try
{
    using (Bitmap.Config config = Bitmap.Config.Rgb565)
    {
        return Bitmap.CreateBitmap (blurredBitmap, width, height, config);
    }
}
catch (Java.Lang.Exception exception)
{
    Util.Log(exception.ToString());
}
(Screenshots showing the parameters passed into the "CreateBitmap" method, including their expanded values, were attached here.)
Full exception:
Java.Lang.NullPointerException: Exception of type 'Java.Lang.NullPointerException' was thrown.
  at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw () [0x0000b] in /Users/builder/data/lanes/2058/58099c53/source/mono/mcs/class/corlib/System.Runtime.ExceptionServices/ExceptionDispatchInfo.cs:61
  at Android.Runtime.JNIEnv.CallStaticObjectMethod (IntPtr jclass, IntPtr jmethod, Android.Runtime.JValue* parms) [0x00064] in /Users/builder/data/lanes/2058/58099c53/source/monodroid/src/Mono.Android/src/Runtime/JNIEnv.g.cs:1301
  at Android.Graphics.Bitmap.CreateBitmap (System.Int32[] colors, Int32 width, Int32 height, Android.Graphics.Config config) [0x00088] in /Users/builder/data/lanes/2058/58099c53/source/monodroid/src/Mono.Android/platforms/android-22/src/generated/Android.Graphics.Bitmap.cs:735
  at Psonar.Apps.Droid.PayPerPlay.StackBlur.GetBlurredBitmap (Android.Graphics.Bitmap original, Int32 radius) [0x00375] in d:\Dev\psonar\Source\Psonar.Apps\Psonar.Apps.Droid\Psonar.Apps.Droid.PayPerPlay\Utilities\StackBlur.cs:123
  --- End of managed exception stack trace ---
  java.lang.NullPointerException
    at android.graphics.Bitmap.createBitmap(Bitmap.java:687)
    at android.graphics.Bitmap.createBitmap(Bitmap.java:707)
    at dalvik.system.NativeStart.run(Native Method)
I am not sure whether this is a bug in Xamarin or whether the parameters I am passing are wrong.
I got a reply from one of the members of the Xamarin team - Jonathan Pryor:
The NullPointerException is coming from Java code:
at android.graphics.Bitmap.createBitmap(Bitmap.java:687)
at android.graphics.Bitmap.createBitmap(Bitmap.java:707)
at dalvik.system.NativeStart.run(Native Method)
Looking quickly at a variety of releases, Jelly Bean might fit:
https://github.com/android/platform_frameworks_base/blob/jb-release/graphics/java/android/graphics/Bitmap.java#L687
return nativeCreate(colors, offset, stride, width, height,
config.nativeInt, false);
A quick glance at the surrounding method body shows that config
isn't checked for null-ness, so if null is passed this would result
in a NullPointerException.
The problem, though, is that you're not passing null:
using (Bitmap.Config config = Bitmap.Config.Rgb565) {
    return Bitmap.CreateBitmap (blurredBitmap, width, height, config);
}
...or are you?
I would suggest that you remove the using block:
return Bitmap.CreateBitmap (blurredBitmap, width, height, Bitmap.Config.Rgb565);
Here's what I think may be happening, but first, a digression:
Deep in the core of Xamarin.Android there is a mapping between Java
objects and their corresponding C# wrapper objects. Constructor
invocation, Java.Lang.Object.GetObject(), etc. will create
mappings; Java.Lang.Object.Dispose() will remove mappings.
A core part of this is Object Identity: when a Java instance is
exposed to C# code and a C# wrapper is created, the same C# wrapper
instance should continue to be reused for that Java instance.
An implicit consequence of this is that any instance is effectively
global, because if multiple code paths/threads/etc. obtain a JNI handle to the same Java instance, they'll get the same C# wrapper.
Which brings us back to my hypothesis, and your code block:
Bitmap.Config is a Java enum, meaning each member is a Java object.
Furthermore, they're global values, so every thread has access to
those members, meaning the C# Bitmap.Config.Rgb565 instance is
effectively a global variable.
A global variable that you're Dispose()ing.
Which is "fine", in that the next time Bitmap.Config.Rgb565 is
access, a new wrapper will be created.
The problem, though, is if you have multiple threads accessing
Bitmap.Config.Rgb565 at the ~same time, each of which is trying to
Dispose() an instance; at that point it's entirely plausible that
the two threads may reference the same wrapper instance, and the
Dispose() from one thread will thus INVALIDATE the instance used by
the other thread.
Which would result in null being passed to the Bitmap.createBitmap()
call, which is precisely what you're observing.
Please try removing the using block, and see if that helps matters.
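To make the hypothesized race concrete, here is an illustrative sketch (not a guaranteed reproduction; blurredBitmap, width and height are taken from the question's code) of the pattern Jonathan is warning about:

// Illustrative only: Bitmap.Config.Rgb565 resolves to ONE shared C# wrapper
// (object identity), so Dispose() on one thread can invalidate the wrapper
// another thread is still using.
for (int i = 0; i < 2; i++)
{
    new System.Threading.Thread(() =>
    {
        using (Bitmap.Config config = Bitmap.Config.Rgb565)
        {
            // If the other thread disposes the shared wrapper first, "config"
            // may wrap an invalid handle, and null reaches the Java-side
            // Bitmap.createBitmap() -> NullPointerException.
            Bitmap.CreateBitmap (blurredBitmap, width, height, config);
        }
    }).Start();
}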
The whole thread is accessible here.
Then I asked:
Jonathan Pryor - cheers for the suggestion. My question is: if I
remove the using statement, will it introduce a memory leak? Meaning,
if I stop disposing the new config instance?
He replied:
That's what GC's are for!
(insert coughing and laughter here.)
We can quibble quite a bit. I'll argue that it is NOT a memory leak,
because the memory is well rooted and well known; constantly accessing
the Bitmap.Config.Rgb565 will return the previously created instance,
not constantly create new instances. There is no "leak," as such.
I'll instead argue that the instance, and the underlying GREF, is a
"tax"; it's "burnt", part of the cost of doing business. While it
would be "nice" to minimize these costs, it isn't practical to remove
all of them (e.g. we "lose" a GREF per class via .class_ref,
which is used to lookup method IDs...), at least not with the current
architecture.
(I also can't think of an alternate architecture that would result in
different costs/"taxes". While I do have some thoughts on allowing
things to be improved for some areas, they're not huge.)
I would suggest not worrying about Bitmap.Config.Rgb565 and similar
members too much, unless/until the profiler or GREF counts show
otherwise.
Related
I have a question about how Monitor.Enter works. I investigated the .NET Framework source code, and it shows only this:
[System.Security.SecurityCritical] // auto-generated
[ResourceExposure(ResourceScope.None)]
[MethodImplAttribute(MethodImplOptions.InternalCall)]
private static extern void ReliableEnter(Object obj, ref bool lockTaken);
I guess the Monitor.Enter implementation is platform dependent, so I browsed the Mono source code, and I gave up :(
Yes, assigning a critical section to each System.Object instance might solve it, but I don't think the actual Monitor.Enter is written like this, because creating a critical section for every System.Object would be unboundedly expensive. (Win32 does not allow billions of critical section objects in a process!)
Does anybody know how Monitor.Enter works? Please reply. Thanks in advance.
Every object in .NET has two extra (hidden—you can't see them) overhead members:
A "type object pointer". This is just a reference to the Type instance of the object. In fact, you can "access" this by calling GetType().
A "sync block index". This is a native WORD size integral type which is an index into the CLR's internal array of "Sync Blocks".
This is how it keeps track of which objects are locked.
The Sync Block structure contains a field that can be marked for locking. Basically, when you lock an object, this field is switched on. When the lock is released, it is switched off (basically - I haven't looked at the SSCLI for long enough to delve deeper into how that sort of operation works - I believe it is based on EnterCriticalSection though..).
The MethodImplOptions.InternalCall argument passed to the attribute above means that the actual implementation of that method resides in the CLR, which is why you can't browse any further through the code.
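For context, the C# lock statement is just compiler sugar over Monitor, and (since C# 4) it expands to roughly the following, which is where the ReliableEnter internal call shown above eventually gets invoked:

object gate = new object();

// lock (gate) { /* critical section */ } expands approximately to:
bool lockTaken = false;
try
{
    // Monitor.Enter(Object, ref bool) ends up in the InternalCall
    // ReliableEnter; lockTaken is set reliably even if an asynchronous
    // exception interrupts the acquisition.
    System.Threading.Monitor.Enter(gate, ref lockTaken);
    // ... critical section ...
}
finally
{
    if (lockTaken) System.Threading.Monitor.Exit(gate);
}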
Looking at the Mono source code, it seems that they create a Semaphore (using CreateSemaphore or a similar platform-specific function) when the object is first locked, and store it in the object. There also appears to be some object pooling going on with the semaphores and their associated MonoThreadsSync structures.
The relevant function is static inline gint32 mono_monitor_try_enter_internal (MonoObject *obj, guint32 ms, gboolean allow_interruption) in the file mono/metadata/monitor.c, in case you're interested.
I expect that Microsoft .Net does something similar.
Microsoft .NET - whenever possible - tries a spinlock on the thin lock structure in the object header. (Notice how one can "lock" on any object.)
Only if there is a need for it will an event handle be used from the pool or a new one allocated.
If class x calls y by going y.create(new z()), does the z object get created on x's stack as well as y's? This is assuming we are passing by value, not by reference or via pointers.
A couple of things:
The stack/heap is on the process (application) level, not at an object level. The entire application shares one stack (at least in the context of your question), no matter how many objects it is using.
Unless the "z" in your example is a value type (like a struct), it won't ever fully reside on the stack. If "z" is a class, then it "lives" on the heap, with only a reference to it on the stack.
You really should read this short explanation from Jon Skeet, especially "A worked example" towards the bottom.
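A short sketch of the difference (hypothetical types, purely for illustration):

class Z { }        // reference type: instances live on the heap
struct ZValue { }  // value type: a local of this type lives on the stack

class Y
{
    public void Create(Z z)
    {
        // "z" is a reference passed by value: the caller's frame and this
        // frame each hold their own copy of the REFERENCE, but both point
        // at the same single Z instance on the heap.
    }
}

class X
{
    public void Run(Y y)
    {
        y.Create(new Z()); // exactly one Z object is allocated, on the heap
    }
}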
The object z is created in the application's allocated memory (the heap). Each function does not get its own memory area for objects created with new.
I would suggest that you read up on the content on this page; I certainly find it useful from time to time (i.e. when I get things mixed up).
I'm working on software that communicates with an external device. The device requires a set of initialization values (calibrationData). Those calibration data differ from piece to piece of this equipment. In the first versions the calibrationData can be selected by the user, so the user may by accident load calibrationData obtained on a different piece. The device would still work, but would measure incorrectly.
I have
public Instrument(CalibrationData calibration)
{
    _camera = new Camera();
    _driver = new Driver();
    if (_camera.GetUniqueId() != calibration.GetCameraUniqueId())
        throw new WrongCalibrationException("Calibration file was obtained on different equipment.");
    // Don't write anything here. The exception has to be the last code in the constructor.
}
and then somewhere else
try
{
    instrument = new Instrument(calibration);
}
catch (WrongCalibrationException e)
{
    MessageBox.Show("You tried to load calibration obtained on different device.");
}
I'm not able to check the ID before I'm connected to the device.
This question is in fact two questions.
Is my solution correct? I want to test automatically that the proper calibration is used, and not rely on the programmer using my code to call another method (something like Instrument.AreYouProperlyCalibrated()).
Is the object constructed properly when the exception is thrown at the end of the constructor? I'm a bit afraid that C# does some mumbo jumbo after the constructor finishes, and that this might be different in case the ctor threw an exception.
Thanks
The instance already fully exists before the constructor begins (indeed, you can even completely bypass all constructors and still get a valid instance) - it just means that any initialization code that didn't execute won't have executed.
For example, while it isn't a good idea, you can pass the object instance out of the type during the constructor, i.e.
_camera.HereIsMe(this);
or
SomeExternalObject.Track(this);
so nothing too terrible will happen, since as far as the runtime is concerned this object exists like normal, and must be handled properly. However, in some cases it is cleaner to use a factory:
public static YourType Create(SomeArgs args) {
    // TODO: perform enough work to validate args
    return new YourType(args); // only construct once validated
}
But to reiterate; if there is a problem, then throwing from the constructor is not unexpected and is not harmful.
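Applied to the question's types, a hypothetical factory version might look like this (a sketch only; Camera, Driver, CalibrationData and WrongCalibrationException are taken from the question):

public class Instrument
{
    private readonly Camera _camera;
    private readonly Driver _driver;

    // The constructor only stores dependencies that were already validated.
    private Instrument(Camera camera, Driver driver)
    {
        _camera = camera;
        _driver = driver;
    }

    public static Instrument Create(CalibrationData calibration)
    {
        var camera = new Camera();
        if (camera.GetUniqueId() != calibration.GetCameraUniqueId())
            throw new WrongCalibrationException(
                "Calibration file was obtained on different equipment.");
        return new Instrument(camera, new Driver());
    }
}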
It's a matter of preference. For example, DateTime throws an exception in its constructor. If you'd rather not, you could use a static method like Build(Calibration calibration). A good practice is to use XML comments to let users of your type know that the constructor throws an exception in an <exception> tag.
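For example, a sketch of that documentation pattern on the question's constructor:

/// <summary>Connects to the device and verifies the calibration.</summary>
/// <exception cref="WrongCalibrationException">
/// Thrown when the calibration file was obtained on a different device.
/// </exception>
public Instrument(CalibrationData calibration)
{
    // ...
}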
You can throw exceptions from wherever you want in your code.
If your constructor has a throw somewhere, and that throw occurs, the object won't be created; or, to be more correct, it will be created, but your code's execution flow will follow the exception, so you never reach the code branch where the object was being created. For your purposes, it is as if it was never created at all.
So I'd say your approach, considering only the code you posted, is ok. Obviously there could be other problems related to things that could be in the Camera and Driver constructors (stuff not disposed, etc) but that's another matter.
It's a contentious subject, but at the end of the day it is perfectly valid to throw exceptions in constructors. Here are some links that discuss and validate the practice:
Throwing ArgumentNullException in constructor?
http://bytes.com/topic/c-sharp/answers/518251-throwing-exception-constructor
http://blog.aggregatedintelligence.com/2009/04/can-constructors-throw-exceptions.html
I would like to add to Marc's answer by pointing out that the Camera and Driver objects should be getting injected into the class. See this (one of MANY) article(s) on implementing Dependency Injection in C#
Hopefully I won't get flogged for this being an opinion. ;)
We're dealing with the GC being too quick in a .NET program.
Because we use a class with native resources and we do not call GC.KeepAlive(), the GC collects the object before the native access ends. As a result the program crashes.
We have exactly the problem as described here:
Does the .NET garbage collector perform predictive analysis of code?
Like so:
{
    var img = new ImageWithNativePtr();
    IntPtr p = img.GetData();
    // DANGER!
    ProcessData(p);
}
The point is: The JIT generates information that shows the GC that img is not used at the point when GetData() runs. If a GC-Thread comes at the right time, it collects img and the program crashes. One can solve this by appending GC.KeepAlive(img);
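That is, the fragment above becomes:

{
    var img = new ImageWithNativePtr();
    IntPtr p = img.GetData();
    ProcessData(p);
    GC.KeepAlive(img); // keeps img reachable until here, so it cannot be
                       // collected while ProcessData is still running
}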
Unfortunately there is already too much code written (at too many places) to rectify the issue easily.
Therefore: is there, for example, an attribute (e.g. on ImageWithNativePtr) to make the JIT behave as it does in a Debug build? In a Debug build, the variable img remains valid until the end of the scope ( } ), while in Release it loses validity at the comment DANGER.
To the best of my knowledge there is no way to control jitter's behavior based on what types a method references. You might be able to attribute the method itself, but that won't fill your order. This being so, you should bite the bullet and rewrite the code. GC.KeepAlive is one option. Another is to make GetData return a safe handle which will contain a reference to the object, and have ProcessData accept the handle instead of IntPtr — that's good practice anyway. GC will then keep the safe handle around until the method returns. If most of your code has var instead of IntPtr as in your code fragment, you might even get away without modifying each method.
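A minimal sketch of that safe-handle idea (ImageDataHandle is a hypothetical name; ImageWithNativePtr is from the question):

using System;
using Microsoft.Win32.SafeHandles;

// Hypothetical: wraps the raw data pointer and roots the owning image, so
// the GC cannot collect the image while the handle is still reachable.
sealed class ImageDataHandle : SafeHandleZeroOrMinusOneIsInvalid
{
    private readonly object _owner; // keeps the ImageWithNativePtr alive

    public ImageDataHandle(object owner, IntPtr data)
        : base(ownsHandle: false) // the image itself still owns the memory
    {
        _owner = owner;
        SetHandle(data);
    }

    protected override bool ReleaseHandle() => true; // nothing to free here
}

ProcessData would then accept an ImageDataHandle instead of a raw IntPtr.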
You have a few options.
(Requires work, more correct) - Implement IDisposable on your ImageWithNativePtr class; a using block compiles down to try { ... } finally { obj.Dispose(); }, which will keep the object alive, provided you update your code with usings. You can ease the pain of doing this by installing something like CodeRush (even the free Xpress edition supports this), which supports creating using blocks.
(Easier, not correct, more complicated build) - Use PostSharp or Mono.Cecil and create your own attribute. Typically this attribute would cause GC.KeepAlive() to be inserted into these methods.
The CLR has nothing built in for this functionality.
I believe you can emulate what you want with a container that implements IDisposable, and a using statement. The using statement allows for defining the scope, and you can place anything in it that needs to stay alive over that scope. This can be a helpful mechanism if you have no control over the implementation of ImageWithNativePtr.
The container of things to dispose is a useful idiom. Particularly when you really should be disposing of something ... which is probably the case with an image.
using (var keepAliveContainer = new KeepAliveContainer())
{
    var img = new ImageWithNativePtr();
    keepAliveContainer.Add(img);
    IntPtr p = img.GetData();
    ProcessData(p);
    // anything added to the container is still referenced here.
}
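KeepAliveContainer is not a framework type; a minimal sketch of what it might look like:

using System;
using System.Collections.Generic;

// Hypothetical helper: anything Add()ed stays strongly referenced until
// Dispose() runs at the end of the using block.
sealed class KeepAliveContainer : IDisposable
{
    private readonly List<object> _refs = new List<object>();

    public void Add(object o) => _refs.Add(o);

    public void Dispose()
    {
        // Referencing _refs here keeps the container, and everything it
        // holds, reachable up to this point.
        GC.KeepAlive(_refs);
        _refs.Clear();
    }
}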
Could anyone create a short sample that breaks, unless the [ReliabilityContract(Consistency.WillNotCorruptState, Cer.Success)] is applied?
I just ran through this sample on MSDN and am unable to get it to break, even if I comment out the ReliabilityContract attribute. The finally block always seems to be called.
using System;
using System.Runtime.CompilerServices;
using System.Runtime.ConstrainedExecution;

class Program
{
    static bool cerWorked;

    static void Main(string[] args)
    {
        try
        {
            cerWorked = true;
            MyFn();
        }
        catch (OutOfMemoryException)
        {
            Console.WriteLine(cerWorked);
        }
        Console.ReadLine();
    }

    unsafe struct Big
    {
        public fixed byte Bytes[int.MaxValue];
    }

    // results depend on the existence of this attribute
    [ReliabilityContract(Consistency.WillNotCorruptState, Cer.Success)]
    unsafe static void StackOverflow()
    {
        Big big;
        big.Bytes[int.MaxValue - 1] = 1;
    }

    static void MyFn()
    {
        RuntimeHelpers.PrepareConstrainedRegions();
        try
        {
            cerWorked = false;
        }
        finally
        {
            StackOverflow();
        }
    }
}
When MyFn is jitted, it tries to create a ConstrainedRegion from the finally block.
In the case without the ReliabilityContract, no proper ConstrainedRegion could be formed, so regular code is emitted. The stack overflow exception is thrown on the call to StackOverflow (after the try block is executed).
In the case with the ReliabilityContract, a ConstrainedRegion could be formed and the stack requirements of methods in the finally block could be lifted into MyFn. The stack overflow exception is now thrown on the call to MyFn (before the try block is ever executed).
The primary driver for this functionality was to support SQL Server's stringent requirements for integrating the CLR into SQL Server 2005. Probably so that others could use it (and likely for legal reasons), this deep integration was published as a hosting API, but the technical requirements were SQL Server's. Remember that in SQL Server, MTBF is measured in months, not hours, and a process restart because an unhandled exception happened is completely unacceptable.
This MSDN Magazine article is probably the best one that I've seen describing the technical requirements the constrained execution environment was built for.
The ReliabilityContract is used to decorate your methods to indicate how they operate in terms of potentially asynchronous exceptions (ThreadAbortException, OutOfMemoryException, StackOverflowException). A constrained execution region is defined as a catch or finally (or fault) section of a try block which is immediately preceded by a call to System.Runtime.CompilerServices.RuntimeHelpers.PrepareConstrainedRegions().
System.Runtime.CompilerServices.RuntimeHelpers.PrepareConstrainedRegions();
try
{
    // this is not constrained
}
catch (Exception e)
{
    // this IS a CER
}
finally
{
    // this IS ALSO a CER
}
When a ReliabilityContract method is used from within a CER, two things happen to it. The method will be pre-prepared by the JIT, so that executing it for the first time won't invoke the JIT compiler, which could itself try to use memory and cause its own exceptions. Also, while inside a CER, the runtime promises not to throw a ThreadAbortException, and will wait to throw the exception until after the CER has completed.
So back to your question; I'm still trying to come up with a simple code sample that will directly answer your question. As you may have already guessed though, the simplest sample is going to require quite a lot of code given the asynchronous nature of the problem and will likely be SQLCLR code because that is the environment which will use CERs for the most benefit.
Are you running the MSDN sample under the debugger? I do not think it is possible for CER to function when you are executing within the debugger, as the debugger itself changes the nature of execution anyway.
If you build and run the app in optimized release mode, you should be able to see it fail.
While I don't have a concrete example for you, I think you're missing the point of having a try..finally block inside of the methods that guarantee success. Saying that the method will always succeed means that regardless of what (exception) happens during execution, steps will be taken to ensure the data being accessed is in a valid state when the method returns. Without the try..finally, you wouldn't be ensuring anything, which could mean that only half of the operations you wanted to happen would happen. Thus, Cer.Success doesn't actually guarantee success; it only states that you, as the developer, are guaranteeing success.
Check out this page for an explanation of the differences between Success and MayFail states as it pertains to an Array.CopyTo method: http://weblogs.asp.net/justin_rogers/archive/2004/10/05/238275.aspx
CER attributes are a means of documentation. They do influence how the CLR executes code in some situations, but I believe they (or the lack of them) will never result in an error in current versions of .NET.
They are mostly 'reserved for future use'.