I am having fun creating my own wallpaper changer program. I know there are plenty on the internet, but I am simply trying to learn new things. Until now, every time I created a simple program, I didn't care about RAM/memory because I was mostly writing programs for school; they were one-time-use programs that I forgot about afterwards.
But now I am trying to create an application I would actually like to use, something of my own. I noticed my program takes around ~4,000 K in the Task Manager (Ctrl+Alt+Del), and it sometimes takes up to 200,000 K when it changes the wallpaper; sometimes that goes back down, and sometimes it stays that high until the next wallpaper change.
So here comes the question: what are some tips to make my app use the least possible RAM while running? (It sits in the tray, and the main window is hidden using if (FormWindowState.Minimized == WindowState) Hide();)
Does a variable inside a function take any memory? For example:
int function(int a)
{
    int b = 0;
    int c = a + b;
    return c;
}
Or are these variables released after the function returns a value?
I could use some tips, guides, and/or links to articles where I could get some info about this. Newbie friendly, though.
EDIT:
Okay, I have read a bit, started disposing my bitmaps, and got rid of one of the global variables I was using, and it's at a steady 4,000-7,000 K now, rising a little when changing the wallpaper, then dropping back. So I guess that's a kind of success for me. One thing remains: I downloaded a fairly big wallpaper-changing program with a lot more options than mine, and it still takes only around 1,000-2,000 K, so now I'm going to read up on what can take so "much" RAM in mine. Right when I start my program it's at about 4,100 K, so I guess I can still optimize something. Thanks everyone for the answers! :)
Memory, from your program's perspective, is divided into two blocks, if you will: the Stack and the Heap.
The Stack represents the current frame of execution (for instance, the currently executing function). It is used to pass function parameters and return values, and it is where local variables are generally stored. That memory is purged when the current frame of execution ends (for example, when your function exits).
The Heap represents a memory pool where objects can be created and stored for longer periods of time. Generally, anything created using the "new" operator goes on the Heap, with the references living on the Stack (for local context). If references to an allocated object stop being used, that memory remains taken until the Garbage Collector runs at some unspecified time in the future and frees it. When the GC runs cannot be guaranteed: it might be when your program is running out of memory, at scheduled intervals, etc.
I think that in the memory behaviour you are observing, the spikes are due to opening and loading resources, and the troughs come after the GC runs. Another way to observe this is to look at a program's memory footprint when its UI is showing on screen versus when the program is minimized. When minimized, the memory footprint shrinks, because all the graphical elements are no longer needed. When you restore the UI and it is redrawn, memory usage peaks again.
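To make the distinction concrete, here is a hedged sketch (the names are mine, not from your program; the wallpaper-setting code is elided): value-typed locals live on the stack and disappear when the method returns, while anything created with new lives on the heap until the GC reclaims it.

```csharp
using System.Drawing;

void ChangeWallpaper()
{
    int attempts = 0;                    // value type: stored on the stack,
                                         // gone as soon as the method returns

    Bitmap bmp = new Bitmap(100, 100);   // the Bitmap object itself lives on the
                                         // heap; only the reference is on the stack
    try
    {
        // ... set the wallpaper here ...
    }
    finally
    {
        bmp.Dispose();                   // free the unmanaged GDI+ resources now,
                                         // instead of waiting for the finalizer
    }
}   // 'attempts' and the reference 'bmp' are popped with the stack frame;
    // the Bitmap's managed memory is reclaimed at some later collection
```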
You can look at the following articles for a better understanding of Stack and Heap:
C# Stack and Heap
What are stack and heap?
You might also want to look into Garbage Collection:
Garbage collection article on MSDN
... and Value vs Reference types
Make sure you use using blocks around anything that implements the IDisposable interface, particularly if you are reading files, using streams, or making requests. You can read a bit more about it at http://msdn.microsoft.com/en-us/library/yh598w02(v=vs.80).aspx, which gives a few examples of how to use it.
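For instance (a minimal sketch; the file name is just a placeholder):

```csharp
using System.IO;

using (var reader = new StreamReader("wallpaper-list.txt"))
{
    string line = reader.ReadLine();
    // work with the file here
}   // reader.Dispose() runs here automatically, even if an exception was thrown
```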
Memory taken for locally declared variables will be automatically released.
Memory taken by variables that persist outside the function will also be released, when they are no longer used, by something called the Garbage Collector (GC for short).
So don't worry, you're not creating a memory leak with your example function.
It's difficult to tell you where those 200,000 K could be going. There are profilers that can help (I have none to recommend, but this one comes up first on Google: http://memprofiler.com/).
Related
I'm writing a game in C# with SharpDX...
So, for drawing a frame, I'm currently using a Timer with the interval set to 1.
But this is too slow for a game, so I want something faster.
First of all, if I use a separate thread for the game (and another for the window itself) with a while (InGame) loop, it is fast, but the GC never releases a single object, and memory usage just keeps going up and up.
If I use SharpDX's built-in RenderLoop function, it is also fast, but the GC still doesn't do anything.
I've tried overriding the form's WndProc function, but then the game only refreshes itself when I'm moving the mouse like a madman.
This is my first real game, with a lot of functions, so if any of you have any idea how I could fix this, or what I should use, I would appreciate it.
There must be something in the Windows Forms lifecycle like:
void FormLife()
{
    WndProc();
    HandleEvents();
    // etc.
}
(That was just a made-up example.)
Or, how can I force the GC to do its job at the end of the while (InGame) loop?
I've tried GC.Collect(), but it doesn't work.
Don't put your game logic in the UI classes.
Use a multimedia timer and a dispatcher to return to the main thread for rendering. From my experience, a cycle of 5 ms works well (lower will waste CPU time).
Use dispatcher priority of Input so that the timer won't choke the UI and prevent it handling user events.
On a timer event, if the previous update hasn't completed, return immediately without doing anything (you can use System.Threading.Interlocked.CompareExchange as a cheap sync method).
To make the game run faster, try to reuse objects and change their states, or pass short-lived immutables, so that the GC won't delay your program.
Use the IDisposable pattern where applicable.
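The re-entrancy guard described above might be sketched like this (using System.Threading.Timer for brevity; a multimedia timer callback would be guarded the same way, and UpdateAndRender stands in for your own logic):

```csharp
using System.Threading;

class GameLoop
{
    private int _updating;   // 0 = idle, 1 = update in progress

    public void OnTimerTick(object state)
    {
        // If the previous update hasn't finished yet, skip this tick entirely.
        if (Interlocked.CompareExchange(ref _updating, 1, 0) != 0)
            return;

        try
        {
            UpdateAndRender();   // your game logic + dispatch to the UI thread
        }
        finally
        {
            Interlocked.Exchange(ref _updating, 0);   // allow the next tick
        }
    }

    private void UpdateAndRender() { /* ... */ }
}
```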
If GC.Collect does not free memory, you have a memory leak. Use the memory profiling tool of your choice to spot the leak (PerfView, WinDbg, .NET Memory Profiler, YourKit, ...).
Besides this, the best allocation is no allocation at all. If you reuse objects and keep unused objects in a pool, you can get rid of GCs almost entirely. Even if you stay away from Gen 2 collections, which can block your game for hundreds of milliseconds, you still need to keep a sharp eye on Gen 0/1 collections, which can also cause noticeable delays.
See http://social.msdn.microsoft.com/Forums/vstudio/en-US/74631e98-fd8b-4098-8306-8d1031e912a4/gc-still-pauses-whole-app-under-sustainedlowlatency-latencymode-despite-it-consumes-025gb-and?forum=clr
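A minimal pooling sketch of the "reuse objects" idea (the ObjectPool class is illustrative, not from any particular library):

```csharp
using System.Collections.Generic;

public sealed class ObjectPool<T> where T : class, new()
{
    private readonly Stack<T> _items = new Stack<T>();

    // Hand out a recycled instance if one is available, otherwise allocate.
    public T Rent()
    {
        return _items.Count > 0 ? _items.Pop() : new T();
    }

    // The caller is responsible for resetting the object's state first.
    public void Return(T item)
    {
        _items.Push(item);
    }
}

// Usage, e.g. per frame:
//   Bullet b = bulletPool.Rent();
//   ... use b ...
//   bulletPool.Return(b);   // no garbage generated, so no GC pressure
```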
You don't typically have to worry about memory management. If you are making sure not to hold on to references then the garbage collector should do its thing automatically based on memory pressure.
You can force it yourself using GC.Collect, but you need to be absolutely certain you really need to; most of the time it's not required. Check this thread for more info and points to consider: Best Practice for Forcing Garbage Collection in C#
Since the objects are not being collected you must be holding a reference to them somewhere.
You didn't post much code so it is hard to say where you are holding a reference.
In the past I've used WinDbg to find the root cause of memory issues. VS2013 has a memory profiler built in (which might be easier to use).
A common cause of held references I've seen is event subscription. If you have subscribed to any events, you have to make sure you unsubscribe to release the object. See Why and How to avoid Event Handler memory leaks? for more details. That doesn't look like your problem here, though.
I've implemented Dispose everywhere that supports it. I am removing all event handlers. I'm not calling native code.
I even iterate through every dictionary and set the values to null and call .Clear() on all the items.
Question:
How can I figure out where my code is leaking?
I first discovered the leak by running it in test overnight. It uses a fixed amount of memory so it should grow and then become somewhat static. I then made the foreground thread display memory like this:
if (key.KeyChar == 'g')
{
    long pre = GC.GetTotalMemory(false);
    long post = GC.GetTotalMemory(true);
    Console.WriteLine("{2} Pre:{0} \tPost:{1}", pre, post, System.DateTime.Now);
    GC.Collect();
}
I ran this several times (over several hours, pressing "g" once in a while) and saw an ever-increasing number.
The best way to track this down is to use a memory profiler. There are lots to choose from.
.NET Memory Profiling Tools
Make sure you use

try
{
    // use yourDisposableObject here
}
finally
{
    yourDisposableObject.Dispose();
}

or

using (yourDisposableObject) { }

on every object whose class implements Dispose.
If you implemented finalizers on some of your objects without any need, remove them.
If you still can't fix it after that, you will have to use a memory profiler.
There's an article that describes how to do it using SOS.dll here and a more comprehensive one here.
Depending on the version of Visual Studio you're using (Premium or Ultimate), you can also use the normal Code Analysis tools to help find issues in your code that can lead to memory leaks. (Details here)
Of course, in managed code, memory leaks are a bit different than they are in unmanaged code. In unmanaged code, where you allocate and deallocate memory explicitly, memory leaks are caused by failing to deallocate the memory.
In .NET, a memory leak comes from hanging onto an object longer than intended. This can largely be avoided simply by following best practices of using the using statement where possible, and by carefully planning the scope of your variables.
I just had the same problem in my C# application and used (actually the trial version of) the dotTrace memory profiler, which was very helpful.
It still took some time to localize the actual lines of code where the leak occurred, so do not expect the tool to identify the culprit right away.
I am working on a relatively large solution in Visual Studio 2010. It has various projects, one of them being an XNA Game-project, and another one being an ASP.NET MVC 2-project.
With both projects I am facing the same issue: After starting them in debug mode, memory usage keeps rising. They start at 40 and 100MB memory usage respectively, but both climb to 1.5GB relatively quickly (10 and 30 minutes respectively). After that it would sometimes drop back down to close to the initial usage, and other times it would just throw OutOfMemoryExceptions.
Of course this would indicate severe memory leaks, so that is where I initially tried to spot the problem. After searching for leaks unsuccessfully, I tried calling GC.Collect() regularly (about once every 10 seconds). After introducing this "hack", memory usage stayed at 45 and 120MB respectively for 24 hours (until I stopped testing).
.NET's garbage collection is supposed to be "very good", but I can't help suspecting that it just doesn't do its job. I have used CLR Profiler in an attempt to troubleshoot the issue, and it showed that the XNA project was holding on to a lot of byte arrays I had indeed been using, but to which the references should already have been dropped, so the arrays should have been collected by the garbage collector.
Again, when I call GC.Collect() regularly, the memory usage issues seem to have gone. Does anyone know what could be causing this high memory usage? Is it possibly related to running in Debug mode?
After searching for leaks unsuccessfully
Try harder =)
Memory leaks in a managed language can be tricky to track down. I have had good experiences with the Redgate ANTS Memory Profiler. It's not free, but they give you a 14-day, full-featured trial. It has a nice UI and shows you where your memory is allocated and why these objects are being kept in memory.
As Alex says, event handlers are a very common source of memory leaks in a .NET app. Consider this:
public static class SomeStaticClass
{
    public static event EventHandler SomeEvent;
}

private class Foo
{
    public Foo()
    {
        SomeStaticClass.SomeEvent += MyHandler;
    }

    private void MyHandler(object sender, EventArgs e) { /* whatever */ }
}
I used a static class to make the problem as obvious as possible here. Let's say that, during the life of your application, many Foo objects are created. Each Foo subscribes to the SomeEvent event of the static class.
The Foo objects may fall out of scope at one time or another, but the static class maintains a reference to each one via the event handler delegate. Thus, they are kept alive indefinitely. In this case, the event handler simply needs to be "unhooked".
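The unhooking can be done, for example, via IDisposable (a sketch based on the snippet above):

```csharp
private class Foo : IDisposable
{
    public Foo()
    {
        SomeStaticClass.SomeEvent += MyHandler;
    }

    public void Dispose()
    {
        // Removing the handler drops the static class's reference to this
        // instance, so the GC can collect it again.
        SomeStaticClass.SomeEvent -= MyHandler;
    }

    private void MyHandler(object sender, EventArgs e) { /* whatever */ }
}
```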
...the XNA project seemed to have saved a lot of byte arrays I was indeed using...
You may be running into fragmentation in the LOH. If you are allocating large objects very frequently they may be causing the problem. The total size of these objects may be much smaller than the total memory allocated to the runtime, but due to fragmentation there is a lot of unused memory allocated to your application.
The profiler I linked to above will tell you if this is a problem. If it is, you will likely be able to track it down to an object leak somewhere. I just fixed a problem in my app showing the same behavior and it was due to a MemoryStream not releasing its internal byte[] even after calling Dispose() on it. Wrapping the stream in a dummy stream and nulling it out fixed the problem.
Also, stating the obvious, make sure to Dispose() of your objects that implement IDisposable. There may be native resources lying around. Again, a good profiler will catch this.
My suggestion; it's not the GC, the problem is in your app. Use a profiler, get your app in a high memory consumption state, take a memory snapshot and start analyzing.
First and foremost, the GC works, and works well. There's no bug in it that you have just discovered.
Now that we've gotten that out of the way, some thoughts:
Are you using too many threads?
Remember that GC is non-deterministic; it'll run whenever it thinks it needs to run (even if you call GC.Collect()).
Are you sure all your references are going out of scope?
What are you loading into memory in the first place? Large images? Large text files?
Your profiler should tell you what's using so much memory. Start cutting at the biggest culprits as much as you can.
Also, calling GC.Collect() every X seconds is a bad idea and is unlikely to solve your real problem.
Analyzing memory issues in .NET is not a trivial task, and you should definitely read several good articles and try different tools to get a result. I ended up at the following article after my investigations: http://www.alexatnet.com/content/net-memory-management-and-garbage-collector You can also try reading some of Jeffrey Richter's articles, like this one: http://msdn.microsoft.com/en-us/magazine/bb985010.aspx
From my experience, there are two most common reasons for Out-Of-Memory issue:
Event handlers: they may hold an object alive even when no other object references it. So ideally you need to unsubscribe event handlers so the object can be destroyed automatically.
The finalizer thread is blocked by some other thread in STA mode. For example, when an STA thread does a lot of work, the other threads are stopped, and the objects in the finalization queue cannot be destroyed.
Edit: Added link to Large Object Heap fragmentation.
Edit: Since it looks like it is a problem with allocating and throwing away the Textures, can you use Texture2D.SetData to reuse the large byte[]s?
First, you need to figure out whether it is managed or unmanaged memory that is leaking.
Use perfmon to watch your process's ".NET CLR Memory\# Bytes in all Heaps" and "Process\Private Bytes" counters. Compare the numbers as memory rises. If the rise in private bytes outpaces the rise in heap memory, it's unmanaged memory growth.
Unmanaged memory growth would point to objects that are not being disposed (but are eventually collected when their finalizers execute).
If it's managed memory growth, then we'll need to see which generation/LOH (there are also performance counters for each generation of heap bytes).
If it's Large Object Heap bytes, you'll want to reconsider the use and discarding of large byte arrays. Perhaps the byte arrays can be re-used instead of thrown away. Also, consider allocating large byte arrays in sizes that are powers of 2. This way, when one is freed, it leaves a "hole" in the large object heap that can be filled exactly by another object of the same size.
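The power-of-two suggestion can be sketched as a small helper (the requested size is illustrative):

```csharp
// Round a requested size up to the next power of two, so that when this
// buffer is freed, the "hole" it leaves in the LOH has a size that later
// buffers are likely to fill exactly.
static int RoundUpToPowerOfTwo(int size)
{
    int result = 1;
    while (result < size)
        result <<= 1;
    return result;
}

// e.g. a request for 3,000,000 bytes becomes a 4,194,304-byte (2^22) buffer
byte[] buffer = new byte[RoundUpToPowerOfTwo(3000000)];
```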
A final concern is pinned memory, but I don't have any advice for you on this because I haven't ever messed with it.
I would also add that if you are doing any file access, make sure that you are closing and/or disposing of any readers or writers. You should have a one-to-one match between opening any file and closing it.
Also, I usually use a using statement for resources, like a SqlConnection:
using (var connection = new SqlConnection())
{
// Do sql connection work in here.
}
Are you implementing IDisposable on any objects and possibly doing something custom that is causing any issues? I would double check all of your IDisposable code.
The GC doesn't take the unmanaged heap into account. If you are creating lots of objects that are merely thin C# wrappers around larger blocks of unmanaged memory, your memory is being devoured, but the GC can't make rational decisions based on this, as it only sees the managed heap.
You end up in a situation where the GC doesn't think you are short of memory, because most of the things on your Gen 1 heap are 8-byte references, when in actual fact they are like icebergs at sea: most of the memory is below the surface!
You can make use of these GC calls:
GC.AddMemoryPressure(sizeOfField);
GC.RemoveMemoryPressure(sizeOfField);
These methods allow the garbage collector to see the unmanaged memory (if you provide it the right figures)
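A typical pattern is to pair the two calls in a disposable wrapper (a hedged sketch; the NativeBuffer class is illustrative):

```csharp
using System;
using System.Runtime.InteropServices;

public sealed class NativeBuffer : IDisposable
{
    private readonly long _size;
    private IntPtr _ptr;

    public NativeBuffer(long size)
    {
        _size = size;
        _ptr = Marshal.AllocHGlobal((IntPtr)size);
        GC.AddMemoryPressure(size);   // tell the GC this wrapper really costs
                                      // 'size' bytes, not just the reference
    }

    public void Dispose()
    {
        if (_ptr != IntPtr.Zero)
        {
            Marshal.FreeHGlobal(_ptr);
            GC.RemoveMemoryPressure(_size);
            _ptr = IntPtr.Zero;
        }
    }
}
```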
I am writing a C# app for a device with limited ram. (Mono on iPhone/iPad)
When I assign a large string:
string xml = "10 meg xml string from REST service";
then clear it
xml = null;
Will the GC free up that memory ASAP? Is there a way to make sure it's cleaned up? GC has a Collect method, but is it executed right away?
The problem is that I am downloading many large XML files in a loop, and even though I am setting the string to null, memory use keeps growing.
In general, GC does not happen immediately, because garbage collection is relatively expensive. The runtime generally tries to do it while it's not busy with other things, but this clearly isn't always possible. I once ran into a non-deterministic out-of-memory error because the runtime (sometimes) put off GC too long while I was running a tight, memory-intensive loop. Lesson: generally the collector knows what it is doing and you don't need to tweak it. But not always.
To force a collection to happen, you need two lines:
GC.Collect();
GC.WaitForPendingFinalizers();
EDIT: I wrote this before I saw your note on what you were running on. It is written with the desktop .NET runtime in mind; it may (or may not) be different on Mono/iPad.
First, keeping large strings or large arrays in memory all at once should be avoided, especially on memory-constrained devices like phones. Use XmlTextReader, for example, to parse XML files; if you get them from the network, save them to disk, and so on.
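For example, a streaming parse never holds the whole document as one string (the file and element names are just placeholders):

```csharp
using System.Xml;

using (XmlReader reader = XmlReader.Create("big-download.xml"))
{
    while (reader.Read())
    {
        if (reader.NodeType == XmlNodeType.Element && reader.Name == "item")
        {
            string value = reader.ReadElementContentAsString();
            // process 'value', then let it go out of scope immediately
        }
    }
}
```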
Next, the issue of garbage collection: the current Mono GC does a conservative scan of the thread stacks, which means that some pointers to objects may still be visible to the GC even after the programmer has cleared them (as when setting the reference to null in your example).
To limit the consequences of this behavior, you should try to allocate or otherwise manipulate big arrays and strings in a separate stack frame. For example, instead of coding it this way:
while (true) {
    string s = get_big_string_from_network ();
    do_something_with_string (s);
    handle_ui ();
    s = null;
}
do the following:
void manipulate_big_string () {
    string s = get_big_string_from_network ();
    do_something_with_string (s);
}

...

while (true) {
    manipulate_big_string ();
    handle_ui ();
}
Normally, setting a reference to null has the intended effect only when applied to a static or instance field; using it on a method-local variable may not be enough to hide the reference from the GC.
I think if you are developing for iPhone you don't have the garbage collector from the .NET Framework. In that case memory management is done by the operating system, here iOS.
I think you should check the Mono documentation to find out how to manage memory in this case. Xcode implements automatic object management called Automatic Reference Counting, which is not garbage collection like that in the .NET Framework; it is just an automatic tool to release unused objects.
Thinking just about .NET: when working with big strings, you should generally use StringBuilder instead of plain string concatenation.
Thinking about iOS: you should not compare an app written for the iOS environment with a desktop application for Windows (on a PC you have far more resources). iOS will not allow a big hit in memory consumption; if an app does that, the operating system will close it automatically to keep the system running.
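The StringBuilder advice above can be sketched like this (a hedged example; 'chunks' stands in for whatever source produces your string pieces):

```csharp
using System.Text;

var sb = new StringBuilder();
foreach (string chunk in chunks)
    sb.Append(chunk);              // no intermediate strings are created
string result = sb.ToString();     // one final allocation
```

Concatenating with + in a loop would instead allocate a brand-new string on every iteration, which is what drives memory use up.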
While I'm not a Mono expert, here are some simple things to check. You noted that you are setting your variable to null, but are you actually calling a .Close() or .Dispose() method as appropriate, or enclosing the variable in question in a using block?
It could be a case of waiting on a finalizer (i.e. if you have a handle to an unmanaged resource like a file handle), or the variable may still be in scope for some reason. Either would result in increased memory pressure.
Ideally, open your files one at a time, in a method with an explicitly clear variable scope, in combination with a using block, thus ensuring that the appropriate finalizers etc. are called even if an exception is thrown.
Hope that helps!
My program, alas, has a memory leak somewhere, but I'll be damned if I know what it is.
Its job is to read in a bunch of ~2MB files, do some parsing and string replacement, then output them in various formats. Naturally, this means a lot of strings, and so doing memory tracing shows that I have a lot of strings, which is exactly what I'd expect. The structure of the program is a series of classes (each in their own thread, because I'm an idiot) that acts on an object that represents each file in memory. (Each object has an input queue that uses a lock on both ends. While this means I get to run this simple processing in parallel, it also means I have multiple 2MB objects sitting in memory.) Each object's structure is defined by a schema object.
My processing classes raise events when they've done their processing and pass a reference to the large object that holds all my strings, to add it to the next processing object's queue. Replacing the event with a function call to add to the queue does not stop the leak. One of the output formats requires me to use an unmanaged object; implementing Dispose() on the class does not stop the leak. I've replaced all the references to the schema object with an index name. No dice. I have no idea what's causing it, and no idea where to look. The memory trace doesn't help, because all I see is a bunch of strings being created, and I don't see where the references are sticking in memory.
We're pretty much going to give up and roll back at this point, but I have a pathological need to know exactly how I messed this up. I know Stack Overflow can't exactly comb my code, but what strategies can you suggest for tracking this leak down? I'm probably going to do this in my own time, so any approach is viable.
One technique I would try is to systematically reduce the amount of code you need to demonstrate the problem without making the problem go away. This is informally known as "divide and conquer" and is a powerful debugging technique. Once you have a small example that demonstrates the same problem, it will be much easier for you to understand. Perhaps the memory problem will become clearer at that point.
There is only one person who can help you. That person's name is Tess Ferrandez. (hushed silence)
But seriously. read her blog (the first article is pretty pertinent). Seeing how she debugs this stuff will give you a lot of deep insight into knowing what's going on with your problem.
I like the CLR Profiler from Microsoft. It provides some great tools for visualizing the managed heap and tracking down leaks.
I use the dotTrace profiler for tracking down memory leaks. It's a lot more deterministic than methodical trial and error, and it turns up results a lot faster.
For any action the system performs, I take a snapshot, then run a few iterations of the function, then take another snapshot. Comparing the two shows you all the objects that were created in between but were not freed. You can then see the stack frame at the point of their creation, and therefore work out which instances are not being freed.
Get this: http://www.red-gate.com/Products/ants_profiler/index.htm
The memory and performance profiling are awesome. Being able to actually see proper numbers instead of guessing makes optimisation pretty fast. I've used it quite a bit at work for reducing the memory footprint of our main app.
- Add code to the constructor of the unmanaged object to log when it's constructed, and store a unique ID. Use that unique ID when the object is destroyed again, and you can at least tell which ones are going astray.
- Grep the code for every place you construct a new object; follow that code path to see if you have a matching destroy.
- Add chaining pointers to the constructed objects, so you have a link to the objects constructed before and after the current one. Then you can sweep through them later.
- Add reference counters.
- Is there a "debug malloc" available?
The managed debugging add-in SOS (Son of Strike) is immensely powerful for tracking down managed memory "leaks", since they are, by definition, discoverable from the GC roots.
It will work in WinDbg or Visual Studio (though it is in many respects easier to use in WinDbg).
It is not at all easy to get to grips with. Here is a tutorial
I would second the recommendation to check out Tess Ferrandez's blog, too.
How do you know for a fact that you actually have a memory leak?
One other thing: You write that your processing classes are using events. If you have registered an event handler it will keep the object that owns the event alive - i.e. the GC cannot collect it. Make sure you de-register all event handlers if you want your objects to be garbage collected.
Be careful how you define "leak". "Uses more memory" or even "uses too much memory" is not the same as "memory leak". This is especially true in a garbage-collected environment. It may simply be that GC hasn't needed to collect the extra memory you're seeing used. Also be careful about the difference between virtual memory use and physical memory use.
Finally not all "memory leaks" are caused by "memory" sorts of issues. I was once told (not asked) to fix an urgent memory leak that was causing IIS to restart frequently. In fact, I did profiling and found I was using a lot of strings through the StringBuilder class. I implemented an object pool (from an MSDN article) for the StringBuilders, and memory usage went down substantially.
IIS still restarted just as frequently. This was because there was no memory leak. Instead, there was unmanaged code that claimed to be thread-safe but was not. Using it in a web service (multiple threads) caused it to write all over the C runtime library heap. Since nobody was looking for unmanaged exceptions, nobody saw this until I happened to do some profiling with AQtime from AutomatedQA, which happens to have an events window that displayed the cries of pain from the C runtime library.
Placed locks around the calls to the unmanaged code, and the "memory leak" went away.
If your unmanaged object really is the cause of the leak, you may want to have it call AddMemoryPressure when it allocates unmanaged memory and RemoveMemoryPressure in Finalize/Dispose/wherever it deallocates the unmanaged memory. This gives the GC a better handle on the situation, because it may not otherwise realize there's a need to schedule a collection.
You mentioned that you're using events. Are you removing the handlers from those events when you're done with your objects? I've found that "loose" event handlers will cause a lot of memory-leak problems if you add a bunch of handlers without removing them when you're done.
The best memory profiling tool for .Net is this:
http://memprofiler.com
Also, while I'm here, the best performance profiler for .Net is this:
http://www.yourkit.com/dotnet/download/index.jsp
They are also great value for money, have low overhead and are easy to use. Anyone serious about .Net development should consider both of these a personal investment and purchase immediately. Both of them have a free trial.
I work on a real-time game engine with over 700k lines of code written in C# and have spent hundreds of hours using both these tools. I have used the SciTech product since 2002 and YourKit for the last three years. Although I've tried quite a few of the others, I have always returned to these.
IMHO, they are both absolutely brilliant.
Similar to Charlie Martin, you can do something like this:
static unsigned __int64 _foo_id = 0;

foo::foo()
{
    ++_foo_id;
    if (_foo_id == MAGIC_BAD_ALLOC_ID)
        DebugBreak();
    std::wcerr << L"foo::foo # " << _foo_id << std::endl;
}

foo::~foo()
{
    --_foo_id;
    std::wcerr << L"foo::~foo # " << _foo_id << std::endl;
}
If you can recreate it, even once or twice with the same allocation id, this will let you look at what is happening right then and there (obviously TLS/threading has to be handled as well, if needed, but I left it out for clarity).