Limiting the allowed RAM for a service, possibly using MaxWorkingSet - c#

I have a service that runs on a domain controller and is randomly accessed by other computers on the network. I can't shut down the service and run it only when needed (this would defeat the purpose of running it as a service anyway).
The problem is that the memory used by the service doesn't seem to ever get cleared, and increases every time the service is queried by a remote computer.
Is there a way to set a limit on the RAM used by the application?
I've found a few references to using MaxWorkingSet, but none of them actually tell me how to use it. Can I use MaxWorkingSet to limit the RAM used to, for example, 35MB? And if so, how? (What is the syntax, etc.?)
Otherwise, is there a function like "clearall()" that I could use to reset the variables and memory at the end of each run-through? I've tried using GC.Collect(), but it didn't work.

Strictly speaking, MaxWorkingSet only affects the working set, which is the amount of physical memory the process uses. To restrict overall memory usage, you need the Job Object API. But it is dangerous if your program really needs that much memory (a lot of code doesn't handle OutOfMemoryException, and the .NET runtime can sometimes behave strangely when memory runs short).
You need to:
Create a Win32 Job object
Set the maximum memory to the job
Assign your process to the job
Here is a wrapper for .NET.
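A minimal sketch of those three steps via P/Invoke, without any error handling; the struct layouts follow the Win32 headers, but verify sizes and bitness on your target before relying on this:

using System;
using System.Diagnostics;
using System.Runtime.InteropServices;

static class JobMemoryLimit
{
    const int JobObjectExtendedLimitInformation = 9;
    const uint JOB_OBJECT_LIMIT_PROCESS_MEMORY = 0x00000100;

    [StructLayout(LayoutKind.Sequential)]
    struct IO_COUNTERS
    {
        public ulong ReadOperationCount, WriteOperationCount, OtherOperationCount;
        public ulong ReadTransferCount, WriteTransferCount, OtherTransferCount;
    }

    [StructLayout(LayoutKind.Sequential)]
    struct JOBOBJECT_BASIC_LIMIT_INFORMATION
    {
        public long PerProcessUserTimeLimit;
        public long PerJobUserTimeLimit;
        public uint LimitFlags;
        public UIntPtr MinimumWorkingSetSize;
        public UIntPtr MaximumWorkingSetSize;
        public uint ActiveProcessLimit;
        public UIntPtr Affinity;
        public uint PriorityClass;
        public uint SchedulingClass;
    }

    [StructLayout(LayoutKind.Sequential)]
    struct JOBOBJECT_EXTENDED_LIMIT_INFORMATION
    {
        public JOBOBJECT_BASIC_LIMIT_INFORMATION BasicLimitInformation;
        public IO_COUNTERS IoInfo;
        public UIntPtr ProcessMemoryLimit;
        public UIntPtr JobMemoryLimit;
        public UIntPtr PeakProcessMemoryUsed;
        public UIntPtr PeakJobMemoryUsed;
    }

    [DllImport("kernel32.dll", CharSet = CharSet.Unicode, SetLastError = true)]
    static extern IntPtr CreateJobObject(IntPtr lpJobAttributes, string lpName);

    [DllImport("kernel32.dll", SetLastError = true)]
    static extern bool SetInformationJobObject(IntPtr hJob, int infoClass,
        ref JOBOBJECT_EXTENDED_LIMIT_INFORMATION info, int cbInfo);

    [DllImport("kernel32.dll", SetLastError = true)]
    static extern bool AssignProcessToJobObject(IntPtr hJob, IntPtr hProcess);

    // 1. Create the job, 2. set the per-process memory limit, 3. assign this process to it.
    public static void LimitCurrentProcess(ulong maxBytes)
    {
        IntPtr job = CreateJobObject(IntPtr.Zero, null);

        var info = new JOBOBJECT_EXTENDED_LIMIT_INFORMATION();
        info.BasicLimitInformation.LimitFlags = JOB_OBJECT_LIMIT_PROCESS_MEMORY;
        info.ProcessMemoryLimit = (UIntPtr)maxBytes;

        SetInformationJobObject(job, JobObjectExtendedLimitInformation,
            ref info, Marshal.SizeOf(typeof(JOBOBJECT_EXTENDED_LIMIT_INFORMATION)));
        AssignProcessToJobObject(job, Process.GetCurrentProcess().Handle);
    }
}

// e.g. JobMemoryLimit.LimitCurrentProcess(35 * 1024 * 1024); // ~35MB, as in the question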
Besides, you could try forcing a GC like this (for .NET 4.6 or newer):
GCSettings.LargeObjectHeapCompactionMode = GCLargeObjectHeapCompactionMode.CompactOnce;
GC.Collect(2, GCCollectionMode.Forced, true, true);
For older versions (though it sometimes doesn't work):
GC.Collect(2, GCCollectionMode.Forced);
The third parameter in the .NET 4.6 overload of GC.Collect() tells the runtime whether to perform the collection immediately (a blocking collection). In older versions, GC.Collect() may only register the request and leave the decision to the runtime.
As for programming advice, I suggest wrapping each query in a class of its own that is explicitly disposed once the query is done. It may help the GC be smarter.
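A minimal sketch of that idea; QueryContext and its members are illustrative names, not part of your service:

using System;

sealed class QueryContext : IDisposable
{
    // Large per-query state lives here instead of in long-lived service fields.
    byte[] _buffer = new byte[1024 * 1024];

    public string Execute(string request)
    {
        // ... handle the request using _buffer ...
        return "result";
    }

    public void Dispose()
    {
        // Drop the references so the objects become collectible as soon as the query ends.
        _buffer = null;
    }
}

// Per incoming query:
// using (var ctx = new QueryContext())
// {
//     var result = ctx.Execute(request);
// }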
Finally, there are indeed some things in the .NET Framework that you need to manage yourself. Objects returned by APIs like Bitmap.GetHBitmap, for example, need to be released manually.

Related

Memory leak because of pinned GC handles / no gc root visible

What is the reason for pinned GC handles when working with unmanaged .NET components? This happens from time to time without any code changes or anything else obvious. When investigating the issue, I see a lot of pinned GC handles.
These handles seem to stay in memory for the entire application lifetime. In this case, the library is GdPicture (14). Is there any way to investigate why those instances are not cleaned up? I'm using Dispose()/using everywhere and can't find any GC roots in the managed code.
Thanks a lot!
EDIT
Another strange behaviour is that Task Manager shows the application using about 6GB of RAM, while the memory profiler shows a usage of 400MB (the red line is live bytes)
What is the reason for pinned GC handles when working with unmanaged .net components?
Pinning is needed when working with unmanaged code. It prevents objects from being moved during garbage collection so that the unmanaged code can have a pointer to it. The garbage collector will update all .NET references, but it will not update unmanaged pointer values.
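As an illustration (not GdPicture's actual code), this is the pattern behind a pinned handle; if Free() is never reached, the handle and the pinned buffer stay rooted for the life of the process:

using System;
using System.Runtime.InteropServices;

class PinnedBufferExample
{
    static void PassToUnmanagedCode(byte[] buffer)
    {
        // Pin the array so unmanaged code can hold a stable pointer to it.
        GCHandle handle = GCHandle.Alloc(buffer, GCHandleType.Pinned);
        try
        {
            IntPtr ptr = handle.AddrOfPinnedObject();
            // ... hand ptr to the unmanaged library ...
        }
        finally
        {
            // Without this call the pinned handle (and the buffer) leaks.
            handle.Free();
        }
    }
}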
Is there any way to investigate why those instances are not cleaned up?
No. The reason is always the same: there's a bug in the code, either in your code (assume that first) or in a third-party library (libraries are used by many people, so chances are that leaks in a library have already been found by someone else).
I'm using Dispose()/using everywhere
Seems like you missed one or it's not using the disposable pattern.
Another strange behaviour is that Task Manager shows the application using about 6GB of RAM, while the memory profiler shows a usage of 400MB (the red line is live bytes)
A .NET memory profiler may only show the .NET part of memory (400 MB) and omit the rest (5600 MB).
Task Manager is not interested in .NET. It mostly cares about physical RAM, which is why Task Manager is not a good analysis tool in general. You don't want to analyze physical RAM; you want to analyze virtual memory.
To look for memory leaks, use Process Explorer and show the "Private Bytes" and "Virtual Size" columns. Process Explorer can also show you a graph over time per process.
How to proceed?
Forget about the unmanaged leak for a moment. Use a .NET profiler that can take memory snapshots and lets you see each individual object inside, as well as statistics.
Try to figure out the steps it takes to reproduce the leak consistently. Then:
Take a snapshot
Repeat the leak procedure 10 times
Take a snapshot
Repeat the leak procedure another 10 times
Take a snapshot
Compare the snapshots from steps 1 and 3. Check for managed types whose counts differ by a multiple of 10. Compare the snapshots from steps 3 and 5 and check the same types again; the difference must again be a multiple of 10. You can't leak 7 objects when you run a method 10 times.
Do a code review of the places where the affected types are used, based on your knowledge of the leak procedure (which methods are called) and of the managed type. Make sure each instance is disposed or released properly.

Chronic inappropriate System.OutOfMemoryException occurrences

Here's a problem with what should be a continuously-running, unattended console app: I'm seeing too-frequent app exits from System.OutOfMemoryException being thrown from a wide variety of methods deep in the call stack -- often System.String.ToCharArray(), or System.String.CtorCharArrayStartLength(), or System.Xml.XmlTextReaderImpl.InitTextReaderInput(), but sometimes down in a System.String.Concat() call in a MongoCollection.Save() call's stack, and other unlikely places.
For what it's worth, we're using parallel tasks, but this is essentially the only app running on the server, and the app's total thread count never gets over 60. In some of these cases I know of a reason for some other exception to be thrown, but OutOfMemoryException makes no sense in these contexts, and it creates problems:
According to Task Manager and Perfmon logs, the system has had at least 65% of its 8GB of memory free when this has happened, and
While exception handlers sometimes fire & log the exception, they do not prevent an app crash, and
There's no continuing from this exception without user interaction (unless you suppress windows error reporting, which isn't what we want system-wide, or run the app as a service, which is possible but sub-optimal for our use-case)
So I'm aware of the workarounds mentioned above, but what I'd really like is some explanation -- and ideally a code-based handler -- for the unexpected OOM exceptions, so that we can engage appropriate continuation logic. Any ideas?
Getting that exception when using under 3GB of memory suggests that you are running a 32-bit app. Build it as a 64-bit app and it will be able to use as much memory as is available (close to 8GB).
As to why it's failing in the first place: how large is the data you are working with? If it's not very large, have you looked for references to data being kept around much longer than necessary (i.e. a memory leak), thus preventing proper GC?
You need to profile the application, but the most common reason for these exceptions is excessive string creation. Also, excessive serialization can cause this and excessive Xslt transformations.
Do you have many objects of 85,000 bytes or larger? Every such object goes to the Large Object Heap, which is not compacted. That is, unlike the Small Object Heap, the GC will not move objects around to fill memory holes, which can lead to fragmentation, a potential problem for long-lived applications.
As of .NET 4, this is still the case, but it seems they made some improvements in .NET 4.5.
A quick-and-dirty workaround is to make sure the application can use all the available memory by building it as "x64" or "Any CPU", but the real solution is to minimize repeated allocation/deallocation cycles of large objects (i.e. use object pooling, or avoid large objects altogether if possible).
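One way to apply the pooling idea, sketched here with ArrayPool<T> from the System.Buffers package (built into .NET Core and later; a NuGet package on .NET Framework):

using System.Buffers;
using System.IO;

class PooledBufferExample
{
    static void ProcessChunk(Stream input)
    {
        // A fresh 100,000-byte array would land on the Large Object Heap on every call;
        // renting from the pool reuses the same buffers instead.
        byte[] buffer = ArrayPool<byte>.Shared.Rent(100000);
        try
        {
            int read = input.Read(buffer, 0, buffer.Length);
            // ... work with buffer[0..read) ...
        }
        finally
        {
            ArrayPool<byte>.Shared.Return(buffer);
        }
    }
}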
You might also want to look at this.

Reclaim string memory quickly (device with limited RAM)

I am writing a C# app for a device with limited ram. (Mono on iPhone/iPad)
When I assign a large string:
string xml = "10 meg xml string from REST service";
then clear it
xml = null;
Will the GC free up that memory ASAP? Is there a way to make sure it's cleaned up? GC has a Collect method, but is it executed right away?
The problem is that I am downloading many large xml files in a loop and even though I am setting the string to null, memory use is growing.
In general GC does not happen immediately, because garbage collection is relatively expensive. The runtime generally tries to do it while it's not busy doing other things, but this clearly isn't always possible. I ran into a non-deterministic out-of-memory error at one point because it was putting off GC too long (sometimes) while I was running a tight, memory-intensive loop. Lesson: generally the collector knows what it is doing and you don't need to tweak it. But not always.
To force a collection to happen, you need two lines:
GC.Collect();
GC.WaitForPendingFinalizers();
EDIT: I wrote this before I saw your note about what you are running on. It is based on desktop .NET; it may (or may not) be different on Mono / iPad.
First, avoid keeping large strings or large arrays in memory all at once, especially on memory-constrained devices like phones. Use XmlTextReader, for example, to parse xml files. If you get them from the network, save them to disk, etc.
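A sketch of that streaming approach, using XmlTextReader directly over the response stream instead of materializing a 10 MB string (the URL and element name are placeholders):

using System.Net;
using System.Xml;

class StreamingXmlExample
{
    static void ReadItems(string url)
    {
        var request = WebRequest.Create(url);
        using (var response = request.GetResponse())
        using (var stream = response.GetResponseStream())
        using (var reader = new XmlTextReader(stream))
        {
            while (reader.Read())
            {
                if (reader.NodeType == XmlNodeType.Element && reader.Name == "item")
                {
                    // Process each element as it streams in; the full document is never held in memory.
                }
            }
        }
    }
}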
Next, the issue of garbage collection: the current Mono GC does a conservative scan of the thread stacks, which means that some pointers to objects may still be visible to the GC even after the programmer has cleared them (like setting the variable to null in your example).
To limit the consequences of this behavior, you should try to allocate or otherwise manipulate big arrays and strings in a separate stack frame. For example, instead of coding it this way:
while (true) {
    string s = get_big_string_from_network ();
    do_something_with_string(s);
    handle_ui ();
    s = null;
}
do the following:
void manipulate_big_string() {
    string s = get_big_string_from_network ();
    do_something_with_string(s);
}

...

while (true) {
    manipulate_big_string ();
    handle_ui ();
}
Normally, setting a reference to null has the intended effect only when applied to a static or instance field; using it on a method-local variable may not be enough to hide the reference from the GC.
I think if you are developing for the iPhone you don't have the garbage collector from the .NET Framework; memory management is handled by the operating system, in this case iOS.
I think you should check the Mono documentation to find out how to manage memory in this case. Xcode implements automatic object management called Automatic Reference Counting (ARC), which is not a garbage collector like the one in the .NET Framework, just an automatic tool for releasing unused objects.
Thinking just about .NET: when working with big strings, you should always use StringBuilder instead of plain string concatenation.
Thinking about iOS: you should not compare an app written for the iOS environment with a desktop application for Windows (on a PC you have far more resources). iOS will not allow a big hit in memory consumption; if an app does that, the operating system will close it automatically to keep the system running.
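A tiny illustration of the StringBuilder point above (the method and its inputs are made up for the example):

using System.Text;

class ConcatExample
{
    static string BuildCsv(string[] values)
    {
        // Concatenating with += would allocate a brand-new string on every iteration;
        // StringBuilder appends into a single growing buffer instead.
        var sb = new StringBuilder();
        foreach (var v in values)
        {
            sb.Append(v).Append(',');
        }
        return sb.ToString(); // one final allocation
    }
}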
I'm not a Mono expert, but here are some simple things to check. You noted that you are setting your variable to null, but are you actually calling a .Close() or .Dispose() method as appropriate, or wrapping the variable in question in a using block?
It could be a case where you're stuck waiting for a finalizer (i.e. if you have a handle on an unmanaged resource like a file handle), or the variable is still in scope for some reason. This would result in increased memory pressure.
Ideally, open your files one at a time, in a method with a clearly limited variable scope, combined with a using block, thus ensuring that the appropriate Dispose/finalizer calls happen even if an exception is thrown.
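A minimal illustration of that pattern, assuming the payload is read from a file on disk:

using System.IO;

class FileExample
{
    static string ReadOne(string path)
    {
        // The file handle is released deterministically when the using block exits,
        // even if ReadToEnd throws.
        using (var reader = new StreamReader(path))
        {
            return reader.ReadToEnd();
        }
    }
}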
Hope that helps!

Limiting the size of the managed heap in a C# application

Can I configure my C# application to limit its memory consumption to, say, 200MB?
IOW, I don't want to wait for the automatic GC (which seems to allow the heap to grow much more than actually needed by this application).
I know that in Java there's a command line switch you can pass to the JVM that achieves this.. is there an equivalent in C#?
p.s.
I know that I can invoke the GC from code, but that's something I would rather not have to do periodically. I'd rather set it once upon startup somehow and forget it.
I am not aware of any such option for the regular CLR. I would imagine you could control this if you implemented your own CLR host.
However, I disagree with your assessment of how the heap grows. The heap grows because your application is allocating and holding on to objects.
There are a couple of things you can do to limit the memory usage. Please see this question for some input: Reducing memory usage of .NET applications?
The memory manager lets the host provide an interface through which the CLR will request all memory allocations. It replaces both the Windows® memory APIs and the standard C CLR allocation routines. Moreover, the interface allows the CLR to inform the host of the consequences of failing a particular allocation (for example, failing a memory allocation from a thread holding a lock may have certain reliability consequences). It also permits the host to customize the CLR's response to a failed allocation, ranging from an OutOfMemoryException being thrown all the way up through the process being torn down. The host can also use this manager to recapture memory from the CLR by unloading unused app domains and forcing garbage collection. The memory manager interfaces are listed in
Source and more: http://msdn.microsoft.com/en-us/magazine/cc163567.aspx#S2
Edit:
I actually wouldn't recommend managing memory yourself: the deeper you get into it, the more problems you will probably run into, while the CLR already does that job well. But if you say it's very important for you to handle things on your own, then I can't say anything against that.
I haven't tried this out, but you could attempt to call SetProcessWorkingSetSizeEx, passing in the right flags to enforce that your process never gets more than a certain amount of memory. I don't know whether the GC will take this into account and clean up more often, or whether you'll just get OutOfMemoryExceptions.
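An untested sketch of that idea (Windows only; note this caps the physical working set, not the managed heap itself, and the flag values are the documented QUOTA_LIMITS_* constants):

using System;
using System.Diagnostics;
using System.Runtime.InteropServices;

static class WorkingSetLimit
{
    const uint QUOTA_LIMITS_HARDWS_MIN_DISABLE = 0x00000002;
    const uint QUOTA_LIMITS_HARDWS_MAX_ENABLE  = 0x00000004; // enforce the maximum as a hard limit

    [DllImport("kernel32.dll", SetLastError = true)]
    static extern bool SetProcessWorkingSetSizeEx(IntPtr hProcess,
        UIntPtr dwMinimumWorkingSetSize, UIntPtr dwMaximumWorkingSetSize, uint flags);

    public static void Apply(ulong minBytes, ulong maxBytes)
    {
        // e.g. Apply(20 * 1024 * 1024, 200 * 1024 * 1024) for the 200MB target in the question.
        SetProcessWorkingSetSizeEx(Process.GetCurrentProcess().Handle,
            (UIntPtr)minBytes, (UIntPtr)maxBytes,
            QUOTA_LIMITS_HARDWS_MIN_DISABLE | QUOTA_LIMITS_HARDWS_MAX_ENABLE);
    }
}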

Causes for web service memory leak

We have a web service that uses up more and more private bytes until that application stops responding. The managed heap (mostly Gen2) will show some 200-250 MB, while private bytes shows over 1GB. What are possible causes of a memory leak outside of the managed heap?
I've already checked for the following:
Prolific dynamic assemblies (Xml serialization, regex, etc.)
Session state (turned off)
System.Policy.Evidence memory leak (SP1 installed)
Threading deadlock (no use of Join, only lock)
Use of SQLOLEDB (using SqlClient)
What other sources can I check for?
Make sure your app is compiled in release mode. If you compile in debug mode and deploy that, simply instantiating a class that has an event defined (the event doesn't even need to be raised) will cause a small piece of memory to leak. Instantiating enough of these objects over a long enough period will use up all the memory. I've seen web apps that would use up all the memory within a matter of hours simply because a debug build was used; compiling as a release build immediately and permanently fixed the problem.
I would recommend viewing snapshots of the heap at various times and seeing what's using up the memory. If your application is using Java, then jmap works extremely well - you just give it the PID of the Java process.
If using something else, try Lambda Probe (http://www.lambdaprobe.org/d/index.htm). It doesn't show as much detail, but will at least show you memory use.
I had a bad memory leak in my JDBC code that ended up being traced to a change in the JDBC specification a few years ago that I missed (with respect to closing statements and such). It took a combination of Lamdba Probe and then jmap to localize the problem enough to fix it.
Cheers,
-R
Also look for:
COM Assemblies being loaded
DB Connections not being closed
Cache & State (Session, Application)
Try forcing the garbage collector (GC) to run (write a page that does it when it loads), or try instrumentation, but that's a bit hit-and-miss in my experience. Another option would be to keep it running and see if it actually runs out of memory.
What could be happening is that there is plenty of memory and Windows does not signal your app to clean up. This makes the app look like it's using more and more memory because it can, when in fact the system can reclaim the memory when it needs to. SQL Server and Exchange do this a lot. The idea is: why cause an unnecessary cleanup when there are plenty of resources?
Rob
Garbage collection does not run until a request for memory is denied due to lack of available memory. This can often make things look like a memory leak when there isn't one.
Do you have any events and event handlers within the service? Services often have static variables, and if a non-static instance object subscribes one of its methods to an event on such a static instance, the static will hold a reference to that instance forever, preventing it from being released.
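A sketch of that pattern (the names are illustrative): the static event keeps every subscribed Worker alive until it unsubscribes.

using System;

static class ServiceEvents
{
    public static event EventHandler Updated;

    public static void RaiseUpdated()
    {
        var handler = Updated;
        if (handler != null) handler(null, EventArgs.Empty);
    }
}

class Worker
{
    public Worker()
    {
        // The static event's invocation list now roots this instance for the process lifetime.
        ServiceEvents.Updated += OnUpdated;
    }

    void OnUpdated(object sender, EventArgs e) { /* ... */ }

    // The fix: unsubscribe when the worker is no longer needed.
    public void Shutdown()
    {
        ServiceEvents.Updated -= OnUpdated;
    }
}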
Double-check that trace is not enabled. I've seen instances of trace slowly consuming memory until the app reaches its app pool limit.
For what it is worth, my issue was not with the service, but with the HttpClient that was calling it.
The client was not properly disposed, so it kept the connection open and the memory locked.
After disposing the client the service released the memory as expected.
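A sketch of that fix (the method is illustrative; alternatively, reuse a single long-lived HttpClient for the application so connections are pooled rather than leaked):

using System.Net.Http;
using System.Threading.Tasks;

class ServiceCaller
{
    static async Task<string> CallServiceAsync(string url)
    {
        // Disposing the client releases its connections when the call completes.
        using (var client = new HttpClient())
        {
            return await client.GetStringAsync(url);
        }
    }
}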
