Memory Leak in C# WPF

I need to fix a memory leak in a C# WPF application. Even after disposing of all the used objects with the following code snippet, I couldn't get the memory consumption back down completely.
Here is my code:
string str;
Uri uri;

private void Button_Click(object sender, RoutedEventArgs e) // "Load" button
{
    if (img.Source != null)
        Unload();
    str = "F://Photos//Parthi//IMG_20141128_172826244.jpg"; // file size: 0.643 MB
    uri = new Uri(str);
    img.Source = new BitmapImage(uri);
}

private void Button_Click_1(object sender, RoutedEventArgs e) // "Unload" button
{
    Unload();
}

private void Unload()
{
    Bitmap bmp = GetBitmap(img.Source as BitmapSource);
    bmp.Dispose();
    bmp = null;
    img.Source = null;
    str = string.Empty;
    uri = null;
}

private Bitmap GetBitmap(BitmapSource source)
{
    Bitmap bmp = new Bitmap(source.PixelWidth, source.PixelHeight, System.Drawing.Imaging.PixelFormat.Format32bppPArgb);
    BitmapData data = bmp.LockBits(new System.Drawing.Rectangle(System.Drawing.Point.Empty, bmp.Size), ImageLockMode.WriteOnly, System.Drawing.Imaging.PixelFormat.Format32bppPArgb);
    source.CopyPixels(Int32Rect.Empty, data.Scan0, data.Height * data.Stride, data.Stride);
    bmp.UnlockBits(data);
    data = null;
    source = null;
    return bmp;
}
After running the sample and watching it in Task Manager, I see the following memory consumption:
Before the "Load" button is clicked: 10.0 MB
After the "Load" button is clicked: 47.8 MB
After the "Unload" button is clicked: 26.0 MB
After unloading, I need to get the memory back down to roughly 10.0 MB. Please help me with this.
Thanks in Advance.

On the .NET platform you do not have the same control over memory that you have in C/C++.
The garbage collector applies fairly complex policies, and the fact that you see memory drop to 26 MB rather than 10 MB is normal.
.NET reserves space so it can allocate new data faster and guarantee itself free room in main memory without having to keep asking the OS for more.
What you have noticed is perfectly normal.
It is a question I personally put to a Microsoft MVP during a course.
See this:
http://msdn.microsoft.com/en-us/library/ee787088%28v=vs.110%29.aspx
and this: Best Practice for Forcing Garbage Collection in C#

First of all, do not use Task Manager to view your memory usage, because it only shows the memory that the Windows process has allocated. There are plenty of much better tools out there, and even Performance Monitor, which ships with Windows, will give you a better idea of your application's performance and any memory leaks. You may start it by running perfmon.exe.
In this sample application, the GC does not become aggressive with collections until the heap reaches approximately 85 MB. And why would I want it to? It's not a lot of memory, and it works very well as a caching solution if I decide to use the same objects again.
So, I suggest taking a look at that tool. It gives a nice overview of what's going on and it's free.
Second of all, just because you called .Dispose() to release those resources does not mean that the memory is released right away. You're basically making the object eligible for garbage collection, and when the GC gets to it, it'll take care of it.
The default garbage collection behavior for a WPF application is Concurrent (before 4.0) / Background (4.0 and later) Workstation GC. It runs on a special thread and most of the time tries to run concurrently with the rest of the application (I say most of the time, because once in a while it will extremely briefly suspend other threads so that it can complete its cleanup). It doesn't wait until you're out of memory, but at the same time it does collections only when they don't greatly affect performance - a balanced trade-off.
Another factor to consider is that you have a 0.643 MB JPEG file, and once it's decoded into a BitmapImage it's quite a bit more... well, anything of 85,000 bytes or more is considered a large object and is therefore placed on the large object heap, which is collected together with generation 2. Generation 2 garbage collection is expensive and is done infrequently. However, if you're running .NET 4.5.1, and I'm not sure if you are, you may force a compaction:
GCSettings.LargeObjectHeapCompactionMode = GCLargeObjectHeapCompactionMode.CompactOnce;
GC.Collect();
As with .Dispose(), this is not going to happen right away, as it's an expensive process and it'll take a little longer than the generation 1 less-than-a-millisecond sweep. And honestly, you probably wouldn't even benefit from it, since I doubt that you would have high fragmentation in the LOH with that application. I just mentioned it so that you're aware of that option if you ever need it.
So, why am I giving you a brief lesson on the default GC behavior of a WPF application (and most of .NET applications for that matter)? Well, it's important to understand the behavior of GC and acknowledge its existence. Unlike a C++ application, where you're granted a lot of control and freedom over your memory allocation, a .NET application utilizes a GC. The trade-off is that the memory is freed when it's freed by the GC. There may even be times when it frees it prematurely, from your point of view, and that's when you would explicitly keep an object alive by calling GC.KeepAlive(..). A lot of embedded systems do not make use of a GC and if you want very precise control over your memory, I suggest that you do not either.
If you want to be aware of how memory is being handled in a .NET application, I strongly recommend educating yourself on the inner workings of the garbage collector. What I've told you about is an incredibly brief snapshot of the default behavior and there is a lot more to it. There are a few modes, which provide different behaviors.
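For instance, you can see which flavor and latency mode your application is actually running with; a minimal sketch using the GCSettings class:

using System;
using System.Runtime;

class ShowGCConfig
{
    static void Main()
    {
        // Read-only snapshots of the current GC configuration.
        Console.WriteLine("Server GC: " + GCSettings.IsServerGC);
        Console.WriteLine("Latency mode: " + GCSettings.LatencyMode);
    }
}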

Try to manage the scope and lifetime of your variables. You can do something like this:
private void Button_Click(object sender, RoutedEventArgs e)
{
    img.Source = new BitmapImage(new Uri(@"F://Photos//Parthi//IMG_20141128_172826244.jpg"));
}
If 'str' and 'uri' are not needed by the whole class, you can just create them inline like this. Or, if you really do need to store and reuse data that has a huge size, store it in a temp file on physical storage instead of keeping it in memory, to prevent the memory from being held.
Hope it helps.

Your Unload method calls GetBitmap, but GetBitmap returns a new object every time, so as far as I can tell you are never actually disposing of img.Source properly - you only dispose a fresh copy of its pixels.
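For what the question is actually trying to achieve, note that BitmapImage does not implement IDisposable, so there is nothing to dispose in the first place. A common approach (a sketch, not the poster's code) is to load the file eagerly and then simply drop the reference:

using System;
using System.Windows.Media.Imaging;

static BitmapImage LoadImage(string path)
{
    var bmp = new BitmapImage();
    bmp.BeginInit();
    bmp.CacheOption = BitmapCacheOption.OnLoad; // decode now; don't keep the file open
    bmp.UriSource = new Uri(path);
    bmp.EndInit();
    bmp.Freeze(); // read-only and shareable, cheaper for WPF to manage
    return bmp;
}

"Unloading" is then just img.Source = null; the GC reclaims the pixels whenever it next runs.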

Related

use of forcefully calling Garbage collection method [duplicate]

The general advice is that you should not call GC.Collect from your code, but what are the exceptions to this rule?
I can only think of a few very specific cases where it may make sense to force a garbage collection.
One example that springs to mind is a service, that wakes up at intervals, performs some task, and then sleeps for a long time. In this case, it may be a good idea to force a collect to prevent the soon-to-be-idle process from holding on to more memory than needed.
Are there any other cases where it is acceptable to call GC.Collect?
If you have good reason to believe that a significant set of objects - particularly those you suspect to be in generations 1 and 2 - are now eligible for garbage collection, and that now would be an appropriate time to collect in terms of the small performance hit.
A good example of this is if you've just closed a large form. You know that all the UI controls can now be garbage collected, and a very short pause as the form is closed probably won't be noticeable to the user.
UPDATE 2.7.2018
As of .NET 4.5 - there is GCLatencyMode.LowLatency and GCLatencyMode.SustainedLowLatency. When entering and leaving either of these modes, it is recommended that you force a full GC with GC.Collect(2, GCCollectionMode.Forced).
As of .NET 4.6 - there is the GC.TryStartNoGCRegion method (used to set the read-only value GCLatencyMode.NoGCRegion). This can itself, perform a full blocking garbage collection in an attempt to free enough memory, but given we are disallowing GC for a period, I would argue it is also a good idea to perform full GC before and after.
Source: Microsoft engineer Ben Watson's: Writing High-Performance .NET Code, 2nd Ed. 2018.
See:
https://msdn.microsoft.com/en-us/library/system.runtime.gclatencymode(v=vs.110).aspx
https://msdn.microsoft.com/en-us/library/dn906204(v=vs.110).aspx
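A sketch of how that recommendation fits together (the criticalWork delegate stands in for whatever latency-sensitive section you are protecting):

using System;
using System.Runtime;

static class LatencySensitive
{
    public static void Run(Action criticalWork)
    {
        GCLatencyMode oldMode = GCSettings.LatencyMode;
        GC.Collect(2, GCCollectionMode.Forced); // full GC before entering the mode
        try
        {
            GCSettings.LatencyMode = GCLatencyMode.SustainedLowLatency;
            criticalWork(); // the latency-sensitive work happens here
        }
        finally
        {
            GCSettings.LatencyMode = oldMode;
            GC.Collect(2, GCCollectionMode.Forced); // full GC after leaving the mode
        }
    }
}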
I use GC.Collect only when writing crude performance/profiler test rigs; i.e. I have two (or more) blocks of code to test - something like:
GC.Collect(GC.MaxGeneration, GCCollectionMode.Forced);
TestA(); // may allocate lots of transient objects
GC.Collect(GC.MaxGeneration, GCCollectionMode.Forced);
TestB(); // may allocate lots of transient objects
GC.Collect(GC.MaxGeneration, GCCollectionMode.Forced);
...
So that TestA() and TestB() run with as similar a state as possible - i.e. TestB() doesn't get hammered just because TestA left things very close to the tipping point.
A classic example would be a simple console exe (a Main method short enough to be posted here, for example) that shows the difference between looped string concatenation and StringBuilder.
If I need something precise, then this would be two completely independent tests - but often this is enough if we just want to minimize (or normalize) the GC during the tests to get a rough feel for the behaviour.
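The classic console rig mentioned above might look something like this (a sketch; the iteration count is arbitrary):

using System;
using System.Diagnostics;
using System.Text;

class ConcatVsStringBuilder
{
    static void Main()
    {
        Measure("string concat", delegate
        {
            string s = "";
            for (int i = 0; i < 20000; i++) s += i; // allocates a new string each pass
        });
        Measure("StringBuilder", delegate
        {
            var sb = new StringBuilder();
            for (int i = 0; i < 20000; i++) sb.Append(i); // amortized in-place growth
        });
    }

    static void Measure(string name, Action test)
    {
        // Full forced collection so each block starts from a similar heap state.
        GC.Collect(GC.MaxGeneration, GCCollectionMode.Forced);
        GC.WaitForPendingFinalizers();
        var sw = Stopwatch.StartNew();
        test();
        sw.Stop();
        Console.WriteLine("{0}: {1} ms", name, sw.ElapsedMilliseconds);
    }
}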
During production code? I have yet to use it ;-p
The best practice is not to force a garbage collection in most cases. (Every system I have worked on that had forced garbage collections had underlying problems that, if solved, would have removed the need to force the garbage collection and sped the system up greatly.)
There are a few cases when you know more about memory usage than the garbage collector does. This is unlikely to be true in a multi-user application, or a service that is responding to more than one request at a time.
However, in some batch-type processing you do know more than the GC. For example, consider an application that:
- is given a list of file names on the command line;
- processes a single file, then writes the result out to a results file;
- while processing the file, creates a lot of interlinked objects that cannot be collected until the processing of the file has completed (e.g. a parse tree);
- does not keep much state between the files it has processed.
You may be able to make a case (after careful testing) that you should force a full garbage collection after you have processed each file.
Another case is a service that wakes up every few minutes to process some items and does not keep any state while it's asleep. Then forcing a full collection just before going to sleep may be worthwhile.
The only time I would consider forcing a collection is when I know that a lot of objects have been created recently and very few objects are currently referenced.
I would rather have a garbage collection API where I could give it hints about this kind of thing without having to force a GC myself.
See also "Rico Mariani's Performance Tidbits".
These days I consider that some of the above cases would be better handled by a short-lived worker process for each batch of work, letting the OS do the resource recovery.
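For the in-process variant, a sketch of the "collect before a long sleep" pattern described above (processPendingItems stands in for the real batch work, and the interval is illustrative):

using System;
using System.Threading;

class BatchService
{
    static void RunLoop(Action processPendingItems, CancellationToken token)
    {
        while (!token.IsCancellationRequested)
        {
            processPendingItems(); // hypothetical batch work; builds large temporary graphs
            // Everything the batch allocated is now garbage and no state is kept,
            // so collect before the long idle period.
            GC.Collect();
            GC.WaitForPendingFinalizers();
            Thread.Sleep(TimeSpan.FromMinutes(5));
        }
    }
}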
One case is when you are trying to unit test code that uses WeakReference.
In large 24/7 or 24/6 systems -- systems that react to messages, RPC requests or that poll a database or process continuously -- it is useful to have a way to identify memory leaks. For this, I tend to add a mechanism to the application to temporarily suspend any processing and then perform full garbage collection. This puts the system into a quiescent state where the memory remaining is either legitimately long lived memory (caches, configuration, &c.) or else is 'leaked' (objects that are not expected or desired to be rooted but actually are).
Having this mechanism makes it a lot easier to profile memory usage as the reports will not be clouded with noise from active processing.
To be sure you get all of the garbage, you need to perform two collections:
GC.Collect();
GC.WaitForPendingFinalizers();
GC.Collect();
The first collection causes any objects with finalizers to be finalized (but does not actually collect those objects); the second collection then collects the now-finalized objects.
You can call GC.Collect() when you know something about the nature of the app the garbage collector doesn't.
As the author, it's often tempting to think this is likely or normal. However, the truth is the GC amounts to a pretty well-written and tested expert system, and it's rare you'll know something about the low level code paths it doesn't.
The best example I can think of where you might have some extra information is an app that cycles between idle periods and very busy periods. You want the best performance possible for the busy periods and therefore want to use the idle time to do some clean up.
However, most of the time the GC is smart enough to do this anyway.
One instance where it is almost necessary to call GC.Collect() is when automating Microsoft Office through Interop. COM objects for Office don't like to release automatically and can result in the instances of the Office product taking up very large amounts of memory. I'm not sure if this is an issue or by design. There are lots of posts about this topic around the internet, so I won't go into too much detail.
When programming using Interop, every single COM object should be manually released, usually through the use of Marshal.ReleaseComObject(). In addition, calling garbage collection manually can help "clean up" a bit. Calling the following code when you're done with Interop objects seems to help quite a bit:
GC.Collect()
GC.WaitForPendingFinalizers()
GC.Collect()
In my personal experience, using a combination of ReleaseComObject and manually calling garbage collection greatly reduces the memory usage of Office products, specifically Excel.
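Putting it together, a sketch of the full pattern (this assumes a reference to the Microsoft.Office.Interop.Excel assembly; the file path is a placeholder):

using System;
using System.Runtime.InteropServices;
using Excel = Microsoft.Office.Interop.Excel;

static void ReadWorkbook()
{
    var app = new Excel.Application();
    Excel.Workbook workbook = app.Workbooks.Open(@"C:\temp\report.xlsx");
    try
    {
        // ... read or write cells ...
    }
    finally
    {
        workbook.Close(false);              // don't save changes
        app.Quit();
        Marshal.ReleaseComObject(workbook); // release every COM object you touched
        Marshal.ReleaseComObject(app);
        GC.Collect();
        GC.WaitForPendingFinalizers();
        GC.Collect();
    }
}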
As a memory fragmentation solution.
I was getting out-of-memory exceptions while writing a lot of data into a memory stream (reading from a network stream). The data was written in 8K chunks. After reaching 128 MB there was an exception, even though there was a lot of memory available (but it was fragmented). Calling GC.Collect() solved the issue; I was able to handle over 1 GB after the fix.
Have a look at this article by Rico Mariani. He gives two rules when to call GC.Collect (rule 1 is: "Don't"):
When to call GC.Collect()
I was doing some performance testing on array and list:
private static int count = 100000000;

private static List<int> GetSomeNumbers_List_int()
{
    var lstNumbers = new List<int>();
    for (var i = 1; i <= count; i++)
    {
        lstNumbers.Add(i);
    }
    return lstNumbers;
}

private static int[] GetSomeNumbers_Array()
{
    var lstNumbers = new int[count];
    for (var i = 1; i <= count; i++)
    {
        lstNumbers[i - 1] = i;
    }
    return lstNumbers;
}

private static int[] GetSomeNumbers_Enumerable_Range()
{
    return Enumerable.Range(1, count).ToArray();
}
static void performance_100_Million()
{
    var sw = new Stopwatch();
    sw.Start();
    var numbers1 = GetSomeNumbers_List_int();
    sw.Stop();
    //numbers1 = null;
    //GC.Collect();
    Console.WriteLine(String.Format("\"List<int>\" took {0} milliseconds", sw.ElapsedMilliseconds));

    sw.Reset();
    sw.Start();
    var numbers2 = GetSomeNumbers_Array();
    sw.Stop();
    //numbers2 = null;
    //GC.Collect();
    Console.WriteLine(String.Format("\"int[]\" took {0} milliseconds", sw.ElapsedMilliseconds));

    sw.Reset();
    sw.Start();
    // getting System.OutOfMemoryException in GetSomeNumbers_Enumerable_Range method
    var numbers3 = GetSomeNumbers_Enumerable_Range();
    sw.Stop();
    //numbers3 = null;
    //GC.Collect();
    Console.WriteLine(String.Format("\"int[]\" Enumerable.Range took {0} milliseconds", sw.ElapsedMilliseconds));
}
and I got an OutOfMemoryException in the GetSomeNumbers_Enumerable_Range method; the only workaround was to deallocate the memory from the earlier tests first:
numbers = null;
GC.Collect();
You should try to avoid using GC.Collect(), since it's very expensive. Here is an example:
public void ClearFrame(ulong timeStamp)
{
    if (RecordSet.Count <= 0) return;
    if (Limit == false)
    {
        var seconds = (timeStamp - RecordSet[0].TimeStamp) / 1000;
        if (seconds <= _preFramesTime) return;
        Limit = true;
        do
        {
            RecordSet.Remove(RecordSet[0]);
        } while (((timeStamp - RecordSet[0].TimeStamp) / 1000) > _preFramesTime);
    }
    else
    {
        RecordSet.Remove(RecordSet[0]);
    }
    GC.Collect(); // AVOID
}
TEST RESULT: CPU USAGE 12%
When you change to this:
public void ClearFrame(ulong timeStamp)
{
    if (RecordSet.Count <= 0) return;
    if (Limit == false)
    {
        var seconds = (timeStamp - RecordSet[0].TimeStamp) / 1000;
        if (seconds <= _preFramesTime) return;
        Limit = true;
        do
        {
            RecordSet[0].Dispose(); // Bitmap destroyed!
            RecordSet.Remove(RecordSet[0]);
        } while (((timeStamp - RecordSet[0].TimeStamp) / 1000) > _preFramesTime);
    }
    else
    {
        RecordSet[0].Dispose(); // Bitmap destroyed!
        RecordSet.Remove(RecordSet[0]);
    }
    //GC.Collect();
}
TEST RESULT: CPU USAGE 2-3%
In your example, I think that calling GC.Collect isn't the issue; rather, there is a design issue.
If you are going to wake up at intervals (set times), then your program should be crafted for a single execution (perform the task once) and then terminate. You then set the program up as a scheduled task to run at the scheduled intervals.
This way, you don't have to concern yourself with calling GC.Collect (which you should rarely, if ever, have to do).
That being said, Rico Mariani has a great blog post on this subject, which can be found here:
http://blogs.msdn.com/ricom/archive/2004/11/29/271829.aspx
One useful place to call GC.Collect() is in a unit test when you want to verify that you are not creating a memory leak (e. g. if you are doing something with WeakReferences or ConditionalWeakTable, dynamically generated code, etc).
For example, I have a few tests like:
WeakReference w = CodeThatShouldNotMemoryLeak();
Assert.IsTrue(w.IsAlive);
GC.Collect();
GC.WaitForPendingFinalizers();
Assert.IsFalse(w.IsAlive);
It could be argued that using WeakReferences is a problem in and of itself, but it seems that if you are creating a system that relies on such behavior then calling GC.Collect() is a good way to verify such code.
There are some situations where it is better to be safe than sorry.
Here is one situation.
It is possible to author an unmanaged DLL in C# using IL rewrites (because there are situations where this is necessary).
Now suppose, for example, the DLL creates an array of bytes at the class level, because many of the exported functions need access to it. What happens when the DLL is unloaded? Is the garbage collector automatically called at that point? I don't know, but being an unmanaged DLL it is entirely possible the GC isn't called, and it would be a big problem if it wasn't. When the DLL is unloaded, so too would be the garbage collector - so who is going to be responsible for collecting any possible garbage, and how would they do it? Better to employ C#'s garbage collector: have a cleanup function (available to the DLL client) where the class-level variables are set to null and the garbage collector is called.
Better safe than sorry.
The short answer is: never!
using (var stream = new MemoryStream())
{
    bitmap.Save(stream, ImageFormat.Png);
    techObject.Last().Image = Image.FromStream(stream);
    bitmap.Dispose();
    // Without this code, I had an OutOfMemory exception.
    GC.Collect();
    GC.WaitForPendingFinalizers();
}
Another reason is when you have a SerialPort opened on a USB COM port, and then the USB device is unplugged. Because the SerialPort was opened, the resource holds a reference to the previously connected port in the system's registry. The system's registry will then contain stale data, so the list of available ports will be wrong. Therefore the port must be closed.
Calling SerialPort.Close() on the port calls Dispose() on the object, but it remains in memory until garbage collection actually runs, causing the registry to remain stale until the garbage collector decides to release the resource.
From https://stackoverflow.com/a/58810699/8685342:
try
{
    if (port != null)
        port.Close(); // this will throw an exception if the port was unplugged
}
catch (Exception ex) // of type 'System.IO.IOException'
{
    System.GC.Collect();
    System.GC.WaitForPendingFinalizers();
}
port = null;
If you are creating a lot of new System.Drawing.Bitmap objects, the Garbage Collector doesn't clear them. Eventually GDI+ will think you are running out of memory and will throw a "The parameter is not valid" exception. Calling GC.Collect() every so often (not too often!) seems to resolve this issue.
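That said, deterministic disposal usually removes most of the need for those collections; a minimal sketch:

using System.Drawing;

// Dispose each bitmap as soon as you are done with it, so the GDI+ handle
// is released immediately instead of whenever the finalizer happens to run.
using (var bmp = new Bitmap(4000, 4000))
{
    // ... draw into or read from bmp ...
} // handle released here; no GC.Collect() required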
I am still pretty unsure about this.
I have been working for 7 years on an application server. Our bigger installations use 24 GB of RAM. It is highly multithreaded, and ALL calls to GC.Collect() ran into really terrible performance issues.
Many third-party components used GC.Collect() when they thought it was clever to do so right then.
So a simple bunch of Excel reports blocked the app server for all threads several times a minute.
We had to refactor all the third-party components to remove the GC.Collect() calls, and all worked fine after doing that.
But I am running servers on Win32 as well, and there I started to make heavy use of GC.Collect() after getting an OutOfMemoryException.
But I am also pretty unsure about this, because I often noticed that when I get an OOM on 32-bit and retry the same operation without calling GC.Collect(), it just works fine.
One thing I wonder about is the OOM exception itself...
If I had written the .NET Framework and couldn't allocate a memory block, I would use GC.Collect(), defrag memory (??), try again, and only if I still couldn't find a free memory block would I throw the OOM exception.
Or at least make this behavior a configurable option, given the performance drawbacks of GC.Collect.
Now I have lots of code like this in my app to "solve" the problem:
public static TResult ExecuteOOMAware<T1, T2, TResult>(Func<T1, T2, TResult> func, T1 a1, T2 a2)
{
    int oomCounter = 0;
    int maxOOMRetries = 10;
    do
    {
        try
        {
            return func(a1, a2);
        }
        catch (OutOfMemoryException)
        {
            oomCounter++;
            if (oomCounter >= maxOOMRetries)
            {
                throw;
            }
            else
            {
                Log.Info("OutOfMemory-Exception caught, trying to fix. Counter: " + oomCounter.ToString());
                System.Threading.Thread.Sleep(TimeSpan.FromSeconds(oomCounter * 10));
                GC.Collect();
            }
        }
    } while (oomCounter < maxOOMRetries);
    // never gets hit
    return default(TResult);
}
(Note that the Thread.Sleep() behavior is really app-specific: we are running an ORM caching service, and the service takes some time to release all the cached objects if RAM exceeds certain predefined values. So it waits a few seconds the first time, and the waiting time increases with each occurrence of OOM.)
One good reason for calling GC is on small ARM computers with little memory, like the Raspberry Pi (running Mono).
If unallocated memory fragments use too much of the system RAM, the Linux OS can become unstable.
I have an application where I have to call GC every second (!) to get rid of memory overflow problems.
Another good solution is to dispose of objects when they are no longer needed. Unfortunately, this is not so easy in many cases.
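A sketch of that periodic collection (the one-second interval is this particular application's tuning, not a general recommendation):

using System;
using System.Threading;

class PeriodicCollector
{
    private readonly Timer _timer;

    public PeriodicCollector(TimeSpan interval)
    {
        // Runs GC.Collect on a thread-pool thread at each interval.
        _timer = new Timer(_ => GC.Collect(), null, interval, interval);
    }
}

// Usage: keep the reference alive for the process lifetime.
// var collector = new PeriodicCollector(TimeSpan.FromSeconds(1));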
This isn't that relevant to the question, but for XSLT transforms in .NET (XslCompiledTransform) you might have no choice. Another candidate is the MSHTML control.
If you are using a version of .NET before 4.5.1, manual collection may be inevitable (especially if you are dealing with many large objects).
This link describes why:
https://blogs.msdn.microsoft.com/dotnet/2011/10/03/large-object-heap-improvements-in-net-4-5/
There is a small object heap (SOH) and a large object heap (LOH).
We can call GC.Collect() to collect dereferenced objects on the SOH and move surviving objects to the next generation.
In .NET 4.5.1 and later, we can also compact the LOH by using GCSettings.LargeObjectHeapCompactionMode.
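For example:

using System.Runtime;

// Request a one-off LOH compaction; it happens during the next full collection.
GCSettings.LargeObjectHeapCompactionMode = GCLargeObjectHeapCompactionMode.CompactOnce;
GC.Collect();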

Is memory released on Form.Close()?

I am working on a feedback application which has lots of form open and close operations. I noticed some changes in memory usage: when I start the application it takes 25 MB, and with each feedback a user gives it increases by 3 MB. On every form I have used this.Close() when it jumps from one form to another or there is any close operation. What can be the possible reason for the memory increase?
Do I need to call the garbage collector manually? As everyone says, that is not good practice.
I am using a dual-monitor scenario in which the application takes a snapshot of the secondary screen every 500 ms and shows it on the primary screen. For this I am using the code shown below:
public EntryForm()
{
    sc = Screen.AllScreens;
    dbDms = new HondaDb(UtilityFunctions.getServerConnection());
    db = new HondaDb(UtilityFunctions.getClientConnection());
    bmpScreenshot = new Bitmap(sc[1].Bounds.Width,
                               sc[1].Bounds.Height,
                               PixelFormat.Format32bppArgb);
    // Create a graphics object from the bitmap.
    gfxScreenshot = Graphics.FromImage(bmpScreenshot);
    Timer timerClientScreen = new Timer();
    timerClientScreen.Interval = 500;
    timerClientScreen.Enabled = false;
    timerClientScreen.Start();
    timerClientScreen.Tick += new EventHandler(timer_TickClient);
}

void timer_TickClient(object sender, EventArgs e)
{
    // Take the screenshot from the upper left corner to the right bottom corner.
    gfxScreenshot.CopyFromScreen(sc[1].Bounds.X, sc[1].Bounds.Y,
        0, 0, sc[1].Bounds.Size, CopyPixelOperation.SourceCopy);
    // Show the screenshot in the picture box.
    pictureBoxClient.Image = bmpScreenshot;
}
For closing one form on opening another, I use the code below:
formOpen.Show();
formClose.Close();
Suggest me how can I save memory usage.
It does, but just your UI objects. It isn't automatic for the variables you use. In an app like this, using big objects that take very little GC heap space but lots of unmanaged resources, the garbage collector doesn't typically run often enough to keep you out of trouble. You have to help and explicitly dispose the objects so you don't leave it up to the GC to take care of the job.
It may take too long for the GC to start running; you can build up a lot of unmanaged memory usage before it gets around to running the finalizers. Potentially that crashes your program with OOM, although you are still very far removed from that problem. Right now you are just running "heavy".
Add an event handler for the FormClosed event. You need to call the Dispose() method on the gfxScreenshot and bmpScreenshot objects. And surely those HondaDb objects need some kind of cleanup as well.
Do not assume that this will instantly solve the memory usage increments; the GC is not eager to release address space back to the operatingating system, keeping it around instead on the assumption that you are likely going to need it again soon. The proper usage pattern is memory stabilizing after a while at a reasonable number, then suddenly dropping and building back up: a saw-tooth pattern. Write a little unit test that creates and destroys your form object repeatedly, ensuring that it does the non-trivial jobs of taking the screenshot and accessing the database. Then you know with confidence that you don't have a runaway leak problem.
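A sketch of that cleanup, using the field names from the question (whatever cleanup the HondaDb objects need would go in the same place):

// In the EntryForm constructor, after the fields are created:
this.FormClosed += delegate
{
    gfxScreenshot.Dispose(); // releases the GDI+ drawing surface
    bmpScreenshot.Dispose(); // releases the unmanaged bitmap memory
};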
No. When you call Form.Close() you are just telling the form to close. The object is still in memory, and if you have a reference to it, it will stay there for as long as you hold that reference.
.NET has an automatic garbage collection mechanism which collects objects that are garbage (you have no reference to them and they cannot be accessed). Objects are removed from memory when they become garbage and the .NET garbage collector does its work. You can force the garbage collector to run by calling GC.Collect().
More about GC
Have a look at this MSDN thread. It is about disposable windows; disposing should release all resources held by an instance of a class. Then the garbage collector can do its work.

Memory pressure informer/counter/statistics

I have an application that saves/caches many objects in a static field. When there are a lot of objects saved during the lifetime of the application, there is out of memory exception being thrown because the cache grows so large.
Is there any class or object that informs me that memory is running out and that an OutOfMemoryException will be thrown soon, so that I know I need to free up some memory by removing some of these cached objects? I'm looking for a sign of memory pressure in the application so that I can take precautionary action at runtime, before the exception is thrown.
I also have an application that uses as much ram as it possibly can. You have a couple of options here depending on your specific scenario (all have drawbacks).
One of the simplest is to preallocate a circular buffer of classes (or bytes) that you are caching to consume all ram up front. This usually is not preferred because consuming all the RAM on a box when you don't need to is just plain Rude.
Another way to handle large caches (or keep from throwing out-of-memory exceptions) is to first check that the memory I need exists before allocating. This has the drawback of needing to serialize memory allocations; otherwise, in the time between the available-RAM check and the allocation, you may run out of RAM.
Here is a sample of the best way I have found to do this in .NET:
// Wait for enough memory
var temp = new System.Diagnostics.PerformanceCounter("Memory", "Available MBytes");
long freeMemory = Convert.ToInt64(temp.NextValue()) * (long)1000000;
long neededMemory = (long)bytesToAllocate;
int attempts = 1200; // two minutes
while (freeMemory < neededMemory)
{
    // Signal that memory needs to be freed
    Console.WriteLine("Waiting for enough free memory. Free memory: " + freeMemory + " Needed memory: " + neededMemory);
    System.Threading.Thread.Sleep(100);
    freeMemory = Convert.ToInt64(temp.NextValue()) * (long)1000000;
    --attempts;
    if (0 == attempts)
        throw new OutOfMemoryException("Could not get enough free memory. File:" + Path.GetFileName(wavFileURL));
}
// Try and allocate the memory we need.
Furthermore, once I am inside the while loop, I signal that some memory needs to be freed (depending on your application). Note: I tried to simplify the code by using a Sleep statement, but ultimately you would want some type of non-polling operation if possible.
I am not sure about the specifics of your application, but if you are multithreaded or run many different executables, then it may be better to serialize this memory check when allocating memory for the cache. If that is the case, I use a Semaphore to make sure that only one thread and/or process can allocate RAM as needed. Similar to this:
Semaphore namedSemaphore = new Semaphore(1, 1, "MemoryAllocationSemaphore"); // named semaphores are cross-process
if (bytesToAllocate > 2000000) // if we need less than 2 MB then don't bother locking
{
    if (!namedSemaphore.WaitOne((int)TimeSpan.FromMinutes(15).TotalMilliseconds))
    {
        throw new TimeoutException("Waited over 15 minutes for acquiring memory allocation semaphore");
    }
}
Then a little later on call this:
namedSemaphore.Release();
in a finally block to release the semaphore.
Another option is to use a multiple-reader, single-writer lock (such as the ReaderWriterLock class) so that multiple threads can pull data from the cache. This has the advantage that while you are writing you can check memory pressure, clean up, and then add more to the cache, yet still have multiple threads processing data. The drawback is, of course, that the bottleneck of getting data into the cache is a single fixed point.
If you have so much data cached that you're running out of memory, that indicates something seriously wrong with your caching approach. Even as a 32-bit process, you have a 2 GB address space.
You should consider using a limited cache where the user can set a preferred cache size and you automatically remove objects when this size is reached. You can implement certain caching strategies to pick which objects to remove from the cache: the oldest objects, the least frequently used objects, or a combination of these.
You can also try mapping some of your cache to disk and only keep the most frequently used objects in memory. When an object is needed that's not in memory, you can de-serialize it from disk and put it back in memory. If an object is then unused for a period of time, you can swap it to disk.
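A minimal sketch of such a size-limited cache (the least-recently-used eviction policy here is an illustrative choice, not the only one):

using System.Collections.Generic;

// LRU cache: when the entry count exceeds the capacity, the entry that was
// touched longest ago is evicted.
class LruCache<TKey, TValue>
{
    private readonly int _capacity;
    private readonly Dictionary<TKey, LinkedListNode<KeyValuePair<TKey, TValue>>> _map
        = new Dictionary<TKey, LinkedListNode<KeyValuePair<TKey, TValue>>>();
    private readonly LinkedList<KeyValuePair<TKey, TValue>> _order
        = new LinkedList<KeyValuePair<TKey, TValue>>();

    public LruCache(int capacity) { _capacity = capacity; }

    public bool TryGet(TKey key, out TValue value)
    {
        LinkedListNode<KeyValuePair<TKey, TValue>> node;
        if (_map.TryGetValue(key, out node))
        {
            _order.Remove(node);  // move to the front: most recently used
            _order.AddFirst(node);
            value = node.Value.Value;
            return true;
        }
        value = default(TValue);
        return false;
    }

    public void Add(TKey key, TValue value)
    {
        LinkedListNode<KeyValuePair<TKey, TValue>> existing;
        if (_map.TryGetValue(key, out existing))
        {
            _order.Remove(existing); // replacing an existing entry
        }
        else if (_map.Count >= _capacity)
        {
            var oldest = _order.Last; // evict the least recently used entry
            _order.RemoveLast();
            _map.Remove(oldest.Value.Key);
        }
        var node = new LinkedListNode<KeyValuePair<TKey, TValue>>(
            new KeyValuePair<TKey, TValue>(key, value));
        _order.AddFirst(node);
        _map[key] = node;
    }
}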

Memory problems in .NET

I have a C# service which listens to a queue for XML messages, receives them, processes them using XSLTs, and writes the results to the database.
It processes about 60K messages a day of about 1 MB each. The memory, when idle, goes down to 100 MB, which is really good.
However, recently I started processing messages of 12 MB in size. This blows the memory up, and even when idle the process holds about 500 MB. Any suggestions as to why this might be a problem? I don't think there is a memory leak, as it would have surfaced after processing so many messages (60K messages of 1 MB).
That looks perfectly fine. Why do you think this is a problem?
The garbage collector will eventually release the unused memory, but this doesn't mean this is going to happen as soon as your service is idle.
Raymond Chen has written a good article explaining the basic idea of garbage collection:
Everybody thinks about garbage collection the wrong way
However - and this is pure speculation given the information in your question - there might be memory leaks related to extension methods in your XSLT. Extension methods may lead to problems if you recompile the stylesheet every time a new XML document is transformed. The fix is simple: once compiled, cache the stylesheet.
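A sketch of that caching (the stylesheet path is a placeholder):

using System.Xml.Xsl;

static class TransformCache
{
    // Compile once, reuse for every message. XslCompiledTransform's Transform
    // method is thread-safe once the stylesheet has been loaded.
    private static readonly XslCompiledTransform Cached = Load();

    private static XslCompiledTransform Load()
    {
        var xslt = new XslCompiledTransform();
        xslt.Load("transform.xslt"); // placeholder path
        return xslt;
    }

    public static void Run(string inputUri, string resultsFile)
    {
        Cached.Transform(inputUri, resultsFile);
    }
}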
SIR! PUT DOWN TASK MANAGER AND WALK AWAY. Seriously. Memory management in .NET is not tuned for smallest footprint. It is tuned for efficiency. It will hold onto memory rather than release it back to the system. Resist the temptation to babysit your memory unless there is an actual problem (OOM exceptions, system issues).
What measure are you using for memory? There are lots of possible measures, and none of them really mean "used memory". With virtual memory, sharing of images, etc. things are not simple.
even when is idle it has memory of about 500MB
Most processes (and I think this includes .NET) will not release memory back to the OS once it has been allocated to the process. But if it is not in use, it will be paged out of physical memory, allowing other processes to make use of it.
started processing messages of 12 MB in size
If an XML document is expanded into an object model (e.g. XmlDocument) it needs a lot more memory (lots of links to other nodes). Either look at a different model (both XDocument and XPathDocument are lighter weight) or, better, process the XML with XmlReader and thus never have a fully expanded object model.
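A sketch of the XmlReader approach (the file name and element name are placeholders):

using System.Xml;

// Stream through the document one node at a time; only the current node is
// materialized, never the whole tree.
using (XmlReader reader = XmlReader.Create("message.xml"))
{
    while (reader.Read())
    {
        if (reader.NodeType == XmlNodeType.Element && reader.Name == "record")
        {
            // handle one element, then let it go
        }
    }
}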
First of all, add something to your system to allow you to manually invoke garbage collection for the purpose of investigation. The CLR will not necessarily return memory to the O/S unless it is in short supply. You should use this mechanism to manually invoke garbage collection when you have the system in a quiescent state so that you can see if the memory drops back down. (You should invoke GC.Collect(2) twice to ensure objects with finalizers are actually collected and not just finalized.)
If the memory goes down to the quiescent level after a full GC then you do not have a problem: simply that .NET is not being proactive at deallocating memory.
If the memory does not go down, then you have some kind of a leak. As your messages are large, it means their text representations are likely ending up on the Large Object Heap (LOH). This particular heap is not compacted which means that it is easier to leak this memory than with the other heaps.
One thing to watch out for is string interning: if you are manually interning strings this can cause LOH fragmentation.
Very hard to say what the problem may be. I've had success using ANTS Memory Profiler when investigating memory issues.
As for the thought that there is a memory leak: I don't think there is one, but I would profile the memory to be sure.
I would check your event handlers. If you're not careful detaching those when you're done, it seems like it's easy to create object references that won't be collected by GC.
I don't know that it's the best practice (since the onus of beginning and cancelling event handling should fall to the subscriber), but I've gone the route of implementing IDisposable and creating my events with an explicit delegate field. On disposal, the field can be set to null, which effectively detaches all subscriptions.
something like this:
public class Publisher : IDisposable
{
    private EventHandler _somethingHappened;

    public event EventHandler SomethingHappened
    {
        add { _somethingHappened += value; }
        remove { _somethingHappened -= value; }
    }

    protected void OnSomethingHappened(object sender, EventArgs e)
    {
        if (_somethingHappened != null)
            _somethingHappened(sender, e);
    }

    public void Dispose()
    {
        _somethingHappened = null;
    }
}
Barring that (I don't know how much control you have over the publisher), you might need to be careful to detach unneeded handlers on your subscriber:
public class Subscriber : IDisposable
{
    private Publisher _publisher;

    public Publisher Publisher
    {
        get { return _publisher; }
        set
        {
            // Detach from the old reference
            DetachEvents(_publisher);
            _publisher = value;
            // Attach to the new
            AttachEvents(_publisher);
        }
    }

    private void DetachEvents(Publisher publisher)
    {
        if (publisher != null)
        {
            publisher.SomethingHappened -= new EventHandler(publisher_SomethingHappened);
        }
    }

    private void AttachEvents(Publisher publisher)
    {
        if (publisher != null)
        {
            publisher.SomethingHappened += new EventHandler(publisher_SomethingHappened);
        }
    }

    void publisher_SomethingHappened(object sender, EventArgs e)
    {
        // DO STUFF
    }

    public void Dispose()
    {
        // Detach from reference
        DetachEvents(Publisher);
    }
}

How do I get .NET to garbage collect aggressively?

I have an application that is used in image processing, and I find myself typically allocating arrays of 4000x4000 ushorts, as well as the occasional float and the like. Currently, the .NET framework tends to crash in this app apparently randomly, almost always with an out-of-memory error. 32 MB is not a huge allocation, but if .NET is fragmenting memory, then it's very possible that such large contiguous allocations aren't behaving as expected.
Is there a way to tell the garbage collector to be more aggressive, or to defrag memory (if that's the problem)? I realize that there are the GC.Collect and GC.WaitForPendingFinalizers calls, and I've sprinkled them pretty liberally through my code, but I'm still getting the errors. It may be because I'm calling DLL routines that use native code a lot, but I'm not sure. I've gone over that C++ code and made sure that any memory I declare I delete, but I still get these C# crashes, so I'm pretty sure it's not there. I wonder if the C++ calls could be interfering with the GC, making it leave behind memory because it once interacted with a native call - is that possible? If so, can I turn that functionality off?
EDIT: Here is some very specific code that will cause the crash. According to this SO question, I do not need to be disposing of the BitmapSource objects here. Here is the naive version, with no GC.Collect calls in it. It generally crashes on iterations 4 to 10 of the undo procedure. This code replaces the constructor in a blank WPF project, since I'm using WPF. I do the wackiness with the BitmapSource because of the limitations I explained in my answer to @dthorpe below, as well as the requirements listed in this SO question.
public partial class Window1 : Window {
    public Window1() {
        InitializeComponent();
        // Attempts to create an OOM crash.
        // To do so, mimic minute croppings of an 'image' (ushort array), then undo the crops.
        int theRows = 4000, currRows;
        int theColumns = 4000, currCols;
        int theMaxChange = 30;
        int i;
        List<ushort[]> theList = new List<ushort[]>(); // the list of images in the undo/redo stack
        byte[] displayBuffer = null; // the buffer used as a bitmap source
        BitmapSource theSource = null;
        for (i = 0; i < theMaxChange; i++) {
            currRows = theRows - i;
            currCols = theColumns - i;
            theList.Add(new ushort[(theRows - i) * (theColumns - i)]);
            displayBuffer = new byte[theList[i].Length];
            theSource = BitmapSource.Create(currCols, currRows,
                96, 96, PixelFormats.Gray8, null, displayBuffer,
                (currCols * PixelFormats.Gray8.BitsPerPixel + 7) / 8);
            System.Console.WriteLine("Got to change " + i.ToString());
            System.Threading.Thread.Sleep(100);
        }
        // Should get here. If not, then theMaxChange is too large.
        // Now, go back up the undo stack.
        for (i = theMaxChange - 1; i >= 0; i--) {
            displayBuffer = new byte[theList[i].Length];
            theSource = BitmapSource.Create((theColumns - i), (theRows - i),
                96, 96, PixelFormats.Gray8, null, displayBuffer,
                ((theColumns - i) * PixelFormats.Gray8.BitsPerPixel + 7) / 8);
            System.Console.WriteLine("Got to undo change " + i.ToString());
            System.Threading.Thread.Sleep(100);
        }
    }
}
Now, if I'm explicit in calling the garbage collector, I have to wrap the entire code in an outer loop to cause the OOM crash. For me, this tends to happen around x = 50 or so:
public partial class Window1 : Window {
    public Window1() {
        InitializeComponent();
        // Attempts to create an OOM crash.
        // To do so, mimic minute croppings of an 'image' (ushort array), then undo the crops.
        for (int x = 0; x < 1000; x++) {
            int theRows = 4000, currRows;
            int theColumns = 4000, currCols;
            int theMaxChange = 30;
            int i;
            List<ushort[]> theList = new List<ushort[]>(); // the list of images in the undo/redo stack
            byte[] displayBuffer = null; // the buffer used as a bitmap source
            BitmapSource theSource = null;
            for (i = 0; i < theMaxChange; i++) {
                currRows = theRows - i;
                currCols = theColumns - i;
                theList.Add(new ushort[(theRows - i) * (theColumns - i)]);
                displayBuffer = new byte[theList[i].Length];
                theSource = BitmapSource.Create(currCols, currRows,
                    96, 96, PixelFormats.Gray8, null, displayBuffer,
                    (currCols * PixelFormats.Gray8.BitsPerPixel + 7) / 8);
            }
            // Should get here. If not, then theMaxChange is too large.
            // Now, go back up the undo stack.
            for (i = theMaxChange - 1; i >= 0; i--) {
                displayBuffer = new byte[theList[i].Length];
                theSource = BitmapSource.Create((theColumns - i), (theRows - i),
                    96, 96, PixelFormats.Gray8, null, displayBuffer,
                    ((theColumns - i) * PixelFormats.Gray8.BitsPerPixel + 7) / 8);
                GC.WaitForPendingFinalizers(); // force GC to collect, because we're in scenario 2, lots of large random changes
                GC.Collect();
            }
            System.Console.WriteLine("Got to changelist " + x.ToString());
            System.Threading.Thread.Sleep(100);
        }
    }
}
If I'm mishandling memory in either scenario, or if there's something I should spot with a profiler, let me know. It's a pretty simple routine.
Unfortunately, it looks like @Kevin's answer is right - this is a bug in .NET and in how .NET handles objects larger than 85k. This situation strikes me as exceedingly strange; could PowerPoint be rewritten in .NET with this kind of limitation, or any of the other Office suite applications? 85k does not seem to me to be a whole lot of space, and I'd also think that any program that uses so-called 'large' allocations frequently would become unstable within a matter of days to weeks when using .NET.
EDIT: It looks like Kevin is right, this is a limitation of .NET's GC. For those who don't want to follow the entire thread, .NET has four GC heaps: gen0, gen1, gen2, and LOH (Large Object Heap). Everything that's 85k or smaller goes on one of the first three heaps, depending on creation time (moved from gen0 to gen1 to gen2, etc). Objects larger than 85k get placed on the LOH. The LOH is never compacted, so eventually, allocations of the type I'm doing will eventually cause an OOM error as objects get scattered about that memory space. We've found that moving to .NET 4.0 does help the problem somewhat, delaying the exception, but not preventing it. To be honest, this feels a bit like the 640k barrier-- 85k ought to be enough for any user application (to paraphrase this video of a discussion of the GC in .NET). For the record, Java does not exhibit this behavior with its GC.
Here are some articles detailing problems with the Large Object Heap. It sounds like what you might be running into.
http://connect.microsoft.com/VisualStudio/feedback/details/521147/large-object-heap-fragmentation-causes-outofmemoryexception
Dangers of the large object heap:
http://www.simple-talk.com/dotnet/.net-framework/the-dangers-of-the-large-object-heap/
Here is a link on how to collect data on the Large Object Heap (LOH):
http://msdn.microsoft.com/en-us/magazine/cc534993.aspx
According to this, it seems there is no way to compact the LOH. I can't find anything newer that explicitly says how to do it, and so it seems that it hasn't changed in the 2.0 runtime:
http://blogs.msdn.com/maoni/archive/2006/04/18/large-object-heap.aspx
The simple way of handling the issue is to make small objects if at all possible. Your other option is to create only a few large objects and reuse them over and over. Not an ideal situation, but it might be better than rewriting the object structure. Since you did say that the created objects (arrays) are of different sizes, it might be difficult, but it could keep the application from crashing.
Start by narrowing down where the problem lies. If you have a native memory leak, poking the GC is not going to do anything for you.
Run up perfmon and look at the .NET heap size and Private Bytes counters. If the heap size remains fairly constant but private bytes is growing then you've got a native code issue and you'll need to break out the C++ tools to debug it.
Assuming the problem is with the .NET heap, you should run a profiler against the code, like Red Gate's ANTS Profiler or JetBrains' dotTrace. This will tell you which objects are taking up the space and not being collected quickly. You can also use WinDbg with SOS for this, but it's a fiddly interface (powerful though).
Once you've found the offending items it should be more obvious how to deal with them. Some of the sort of things that cause problems are static fields referencing objects, event handlers not being unregistered, objects living long enough to get into Gen2 but then dying shortly after, etc etc. Without a profile of the memory heap you won't be able to pinpoint the answer.
Whatever you do though, "liberally sprinkling" GC.Collect calls is almost always the wrong way to try and solve the problem.
There is an outside chance that switching to the server version of the GC would improve things (just a property in the config file) - the default workstation version is geared towards keeping a UI responsive, so it will effectively give up with large, long-running collections.
Use Process Explorer (from Sysinternals) to see what the Large Object Heap for your application is. Your best bet is going to be making your arrays smaller but having more of them. If you can avoid allocating your objects on the LOH then you won't get the OutOfMemoryExceptions and you won't have to call GC.Collect manually either.
The LOH doesn't get compacted and only allocates new objects at the end of it, meaning that you can run out of space quite quickly.
If you're allocating a large amount of memory in an unmanaged library (i.e. memory that the GC isn't aware of), then you can make the GC aware of it with the GC.AddMemoryPressure method.
Of course this depends somewhat on what the unmanaged code is doing. You haven't specifically stated that it's allocating memory, but I get the impression that it is. If so, then this is exactly what that method was designed for. Then again, if the unmanaged library is allocating a lot of memory then it's also possible that it's fragmenting the memory, which is completely beyond the GC's control even with AddMemoryPressure. Hopefully that's not the case; if it is, you'll probably have to refactor the library or change the way in which it's used.
P.S. Don't forget to call GC.RemoveMemoryPressure when you finally free the unmanaged memory.
(P.P.S. Some of the other answers are probably right, this is a lot more likely to simply be a memory leak in your code; especially if it's image processing, I'd wager that you're not correctly disposing of your IDisposable instances. But just in case those answers don't lead you anywhere, this is another route you could take.)
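A sketch of that pattern; Marshal.AllocHGlobal stands in here for whatever the unmanaged library actually allocates:

using System;
using System.Runtime.InteropServices;

// Owns a large unmanaged buffer and tells the GC how heavy it really is.
class UnmanagedBuffer : IDisposable
{
    private IntPtr _ptr;
    private readonly long _size;

    public UnmanagedBuffer(long size)
    {
        _ptr = Marshal.AllocHGlobal((IntPtr)size);
        _size = size;
        GC.AddMemoryPressure(size); // the GC now weighs this wrapper accordingly
    }

    public void Dispose()
    {
        if (_ptr != IntPtr.Zero)
        {
            Marshal.FreeHGlobal(_ptr);
            _ptr = IntPtr.Zero;
            GC.RemoveMemoryPressure(_size); // always pair with AddMemoryPressure
        }
    }
}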
Just an aside: The .NET garbage collector performs a "quick" GC when a function returns to its caller. This will dispose the local vars declared in the function.
If you structure your code such that you have one large function that allocates large blocks over and over in a loop, assigning each new block to the same local var, the GC may not kick in to reclaim the unreferenced blocks for some time.
If on the other hand, you structure your code such that you have an outer function with a loop that calls an inner function, and the memory is allocated and assigned to a local var in that inner function, the GC should kick in immediately when the inner function returns to the caller and reclaim the large memory block that was just allocated, because it's a local var in a function that is returning.
Avoid the temptation to mess with GC.Collect explicitly.
Apart from handling the allocations in a more GC-friendly way (e.g. reusing arrays etc.), there's a new option now: you can manually cause compaction of the LOH.
GCSettings.LargeObjectHeapCompactionMode = GCLargeObjectHeapCompactionMode.CompactOnce;
This will cause a LOH compaction the next time a gen-2 collection happens (either on its own, or by your explicit call of GC.Collect).
Do note that not compacting the LOH is usually a good idea - it's just that your scenario is a decent enough case for allowing for manual compaction. The LOH is usually used for huge, long-living objects - like pre-allocated buffers that are reused over time etc.
If your .NET version doesn't support this yet, you can also try to allocate in sizes of powers of two, rather than allocating precisely the amount of memory you need. This is what a lot of native allocators do to ensure memory fragmentation doesn't get impossibly stupid (it basically puts an upper limit on the maximum heap fragmentation). It's annoying, but if you can limit the code that handles this to a small portion of your code, it's a decent workaround.
Do note that you still have to make sure it's actually possible to compact the heap - any pinned memory will prevent compaction in the heap it lives in.
Another useful option is to use paging - never allocating more than, say, 64 kiB of contiguous space on the heap; this means you'll avoid using the LOH entirely. It's not too hard to manage this in a simple "array-wrapper" in your case. The key is to maintain a good balance between performance requirements and reasonable abstraction.
And of course, as a last resort, you can always use unsafe code. This gives you a lot of flexibility in handling memory allocations (though it's a bit more painful than using e.g. C++) - including allowing you to explicitly allocate unmanaged memory, do your work with that and release the memory manually. Again, this only makes sense if you can isolate this code to a small portion of your total codebase - and make sure you've got a safe managed wrapper for the memory, including the appropriate finalizer (to maintain some decent level of memory safety). It's not too hard in C#, though if you find yourself doing this too often, it might be a good idea to use C++/CLI for those parts of the code, and call them from your C# code.
Have you tested for memory leaks? I've been using .NET Memory Profiler with quite a bit of success on a project that had a number of very subtle and annoyingly persistent (pun intended) memory leaks.
Just as a sanity check, ensure that you're calling Dispose on any objects that implement IDisposable.
You could implement your own array class which breaks the memory into non-contiguous blocks. Say, have a 64 by 64 array of [64,64] ushort arrays which are allocated and deallocated separately. Then just map to the right one: location (66,66) would be at location [2,2] in the array at [1,1].
Then you should be able to dodge the large object heap.
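A sketch of that idea (each tile is 64 * 64 * 2 = 8,192 bytes, well under the 85,000-byte LOH threshold; dimensions are assumed to be multiples of 64 to keep the sketch short):

// A 2D ushort "array" stored as independent 8 KB tiles instead of one
// contiguous block, so no single allocation ever lands on the LOH.
class TiledUShortArray
{
    private const int Tile = 64;
    private readonly ushort[][] _tiles;
    private readonly int _tilesPerRow;

    public TiledUShortArray(int width, int height)
    {
        _tilesPerRow = width / Tile;
        _tiles = new ushort[_tilesPerRow * (height / Tile)][];
        for (int i = 0; i < _tiles.Length; i++)
            _tiles[i] = new ushort[Tile * Tile]; // 8,192 bytes each
    }

    public ushort this[int x, int y]
    {
        get { return _tiles[(y / Tile) * _tilesPerRow + (x / Tile)][(y % Tile) * Tile + (x % Tile)]; }
        set { _tiles[(y / Tile) * _tilesPerRow + (x / Tile)][(y % Tile) * Tile + (x % Tile)] = value; }
    }
}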
The problem is most likely due to the number of these large objects you have in memory. Fragmentation would be a more likely issue if they are variable sizes (while it could still be an issue.) You stated in the comments that you are storing an undo stack in memory for the image files. If you move this to Disk you would save yourself tons of application memory space.
Also, moving the undo stack to disk should not cause too much of a negative impact on performance, because it's not something you will be using all of the time. (If it does become a bottleneck, you can always create a hybrid disk/memory cache system.)
Extended...
If you are truly concerned about the possible performance impact of storing undo data on the file system, you may consider that the virtual memory system has a good chance of paging this data to your virtual page file anyway. If you create your own page file/swap space for these undo files, you will have the advantage of being able to control when and where the disk I/O happens. Don't forget, even though we all wish our computers had infinite resources, they are very limited.
1.5 GB (usable application memory space) / 32 MB (large memory request size) ~= 48
You can use this method:

public static void FlushMemory()
{
    Process prs = Process.GetCurrentProcess();
    prs.MinWorkingSet = (IntPtr)(300000);
}

There are three ways to use this method:
1 - after disposing managed objects (such as a class instance);
2 - with a timer, e.g. a 2000 ms interval;
3 - from a thread created to call this method.
I suggest using this method from a thread or timer.
The best way to do it is as this article shows (it is in Spanish, but you can surely follow the code):
http://www.nerdcoder.com/c-net-forzar-liberacion-de-memoria-de-nuestras-aplicaciones/
Here is the code in case the link gets broken:

using System.Runtime.InteropServices;
....
public class anyname
{
    ....
    [DllImport("kernel32.dll", EntryPoint = "SetProcessWorkingSetSize", ExactSpelling = true, CharSet = CharSet.Ansi, SetLastError = true)]
    private static extern int SetProcessWorkingSetSize(IntPtr process, int minimumWorkingSetSize, int maximumWorkingSetSize);

    public static void alzheimer()
    {
        GC.Collect();
        GC.WaitForPendingFinalizers();
        SetProcessWorkingSetSize(System.Diagnostics.Process.GetCurrentProcess().Handle, -1, -1);
    }
    ....
}

You call alzheimer() to clean/release memory.
The GC doesn't take the unmanaged heap into account. If you are creating lots of objects that are merely C# wrappers around larger unmanaged memory, then your memory is being devoured, but the GC can't make rational decisions based on this, as it only sees the managed heap.
You end up in a situation where the GC doesn't think you are short of memory, because most of the things on your gen 1 heap are 8-byte references, when in actual fact they are like icebergs at sea. Most of the memory is below!
You can make use of these GC calls:
GC.AddMemoryPressure(sizeOfField);
GC.RemoveMemoryPressure(sizeOfField);
These methods allow the GC to account for the unmanaged memory (if you provide it the right figures).
