Is GCLatencyMode.LowLatency a good choice? - c#

I have a C# Windows service acting as a server. The service holds some large (>8 GB) data structures in memory and exposes search methods to clients via remoting.
The average search operation executes in under 200 ms, and the service handles up to 20 requests/sec.
I'm noticing serious performance degradation (>6000 ms) on a regular basis, lasting a few seconds each time.
My best guess is that the server threads are being stopped by a Gen2 garbage collection from time to time.
I'm considering switching from server GC to workstation GC and wrapping my search method in this to prevent GC during requests:
protected static void DoLowLatencyAction(Action action)
{
    // Remember the current latency mode so it can be restored afterwards
    GCLatencyMode oldMode = GCSettings.LatencyMode;
    try
    {
        GCSettings.LatencyMode = GCLatencyMode.LowLatency;
        // perform time-sensitive actions here
        action();
    }
    finally
    {
        // Always restore the previous latency mode
        GCSettings.LatencyMode = oldMode;
    }
}
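For example, the remoted search entry point would call it roughly like this (SearchResult and PerformSearch are just placeholders for the real search implementation):
// Placeholder call site: wrap each search request in the low-latency region.
public SearchResult Search(string query)
{
    SearchResult result = null;
    DoLowLatencyAction(() => { result = PerformSearch(query); });
    return result;
}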
Is this a good idea?
Under what conditions will the GC run anyway inside the low-latency block?
Note: I'm running on an x64 server with 8 cores.
Thanks

I've not used GCLatencyMode before so cannot comment on whether using it is a good idea or not.
However, are you sure you are using server GC? By default, Windows services use workstation GC.
I've had a similar problem before in a Windows service, and setting server GC mode using:
<configuration>
  <runtime>
    <gcServer enabled="true" />
  </runtime>
</configuration>
in the service's app.config file solved it.
Have a read of this post from Tess Ferrandez's blog for more details.
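If you want to double-check which mode the service actually picked up at runtime, you can log GCSettings.IsServerGC at startup, something along these lines:
// Diagnostic sketch: report the effective GC configuration at service start-up.
// GCSettings lives in the System.Runtime namespace.
static void LogGcConfiguration()
{
    Console.WriteLine("Server GC enabled: {0}", GCSettings.IsServerGC);
    Console.WriteLine("Latency mode:      {0}", GCSettings.LatencyMode);
}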

I am surprised that your search method even triggers a GC. If the 8 GB data structure is static (not modified much by adding or removing items) and all you do is search it, then you should just try to avoid allocating temporary objects within your search method (if you can). Creating objects is what triggers a GC, and if you have a data structure that is rarely modified, it makes sense to avoid the GC altogether (or delay it as much as possible).
What GCLatencyMode.LowLatency does is give the GC a hint to collect eagerly so that Gen2 collections can be avoided while the LowLatency mode is set. As a side effect, the GC becomes more eager to collect outside the region where LowLatency is set, which could result in lower overall performance (in that case you can try server GC mode).
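To make the "avoid temporary objects" point concrete, here is a hedged sketch: instead of building a new result list (or a LINQ chain) per request, reuse a per-thread buffer. Entry, entries and the matching logic are made-up names for illustration only.
// Illustration only: reuse a per-thread buffer rather than allocating a new
// List<T> on every search; 'Entry' and 'entries' are hypothetical.
[ThreadStatic]
private static List<Entry> matchBuffer;

public List<Entry> Search(string term)
{
    if (matchBuffer == null)
        matchBuffer = new List<Entry>(256);   // allocated once per thread
    matchBuffer.Clear();                      // reuse the existing backing array

    foreach (Entry entry in entries)          // 'entries' = the large in-memory structure
    {
        if (entry.Key.Contains(term))
            matchBuffer.Add(entry);
    }
    return matchBuffer;   // caller must consume this before the next call on this thread
}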

This doesn't sound like a great idea. At some point the GC will ignore your hint and do the collection anyway.
I believe you will have much better success by actually profiling and optimizing your service. Run PerfView (free, awesome, Microsoft tool) on your server while it is running. See what owns the troublesome objects and how long specific long-running GC events are taking.
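Before (or alongside) a full PerfView session, a cheap first check is to correlate slow requests with Gen2 collection counts; HandleRequest below is just a placeholder for the actual search call:
// Rough correlation check: did a Gen2 collection run during a slow request?
var sw = System.Diagnostics.Stopwatch.StartNew();
int gen2Before = GC.CollectionCount(2);

HandleRequest();   // placeholder for the real work

sw.Stop();
int gen2During = GC.CollectionCount(2) - gen2Before;
if (sw.ElapsedMilliseconds > 1000)
    Console.WriteLine("Slow request: {0} ms, Gen2 collections during it: {1}",
                      sw.ElapsedMilliseconds, gen2During);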

Related

Suppressing GC in .NET for Unit Test performance [duplicate]

I have a high performance application that is handling a very large amount of data. It is receiving, analysing and discarding enormous amounts of information over very short periods of time. This causes a fair amount of object churn that I am currently trying to optimize, but it also causes a secondary problem. When Garbage Collection kicks in it can cause some long delays as it cleans things up (by long I mean 10s to 100s of milliseconds). 99% of the time this is acceptable, but for brief windows of time about 1-2 minutes long I need to be absolutely sure that Garbage Collection does not cause a delay. I know when these periods of time will occur beforehand and I just need a way to make sure that Garbage collection doesn't happen during this period. The application is written in C# using .NET 4.0 Framework and uses both managed and unmanaged code if that matters.
My questions are;
Is it possible to briefly pause Garbage Collection for the entire program?
Is it possible to use System.GC.Collect() to force garbage collection before the window I need free of Garbage Collection and if I do how long will I be Garbage Collection free?
What advice do people have on minimizing the need for Garbage Collection overall?
Note - this system is fairly complex with lots of different components. I am hoping to avoid going to an approach where I have to implement a custom IDisposable interface on every class of the program.
.NET 4.6 added two new methods: GC.TryStartNoGCRegion and GC.EndNoGCRegion just for this.
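A minimal sketch of that API, assuming the critical window's allocations fit in a known budget (the 100 MB figure and RunCriticalWindow are illustrative):
// .NET 4.6+: ask the GC not to collect at all while the region is active.
// The budget passed to TryStartNoGCRegion must cover everything allocated inside it.
if (GC.TryStartNoGCRegion(100 * 1024 * 1024))
{
    try
    {
        RunCriticalWindow();   // placeholder for the 1-2 minute critical section
    }
    finally
    {
        // Only end the region if it is still active (exceeding the budget ends it early).
        if (GCSettings.LatencyMode == GCLatencyMode.NoGCRegion)
            GC.EndNoGCRegion();
    }
}
On framework versions before 4.6, the closest alternative is the LowLatency pattern: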
GCLatencyMode oldMode = GCSettings.LatencyMode;
// RuntimeHelpers lives in System.Runtime.CompilerServices.
// Make sure we can always reach the finally block,
// so we can set the latency mode back to `oldMode`.
RuntimeHelpers.PrepareConstrainedRegions();
try
{
    GCSettings.LatencyMode = GCLatencyMode.LowLatency;
    // Generation 2 garbage collection is now
    // deferred, except in extremely low-memory situations
}
finally
{
    // ALWAYS set the latency mode back
    GCSettings.LatencyMode = oldMode;
}
That will allow you to defer the GC as much as you can. It won't do any full (Gen2) collections until:
You call GC.Collect()
You set GCSettings.LatencyMode to something other than LowLatency
The OS sends a low-memory signal to the CLR
Please be careful when doing this, because memory usage can climb extremely fast while you're in that try block. If the GC is collecting, it's doing it for a reason, and you should only seriously consider this if you have a large amount of memory on your system.
In reference to question three, perhaps you can try reusing objects like byte arrays if you're receiving information through filesystem I/O or a network? If you're parsing that information into custom classes, try reusing those too, but I can't give too much good advice without knowing more about what exactly you're doing.
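As a rough sketch of that buffer-reuse idea for network reads (the 64 KB size, Parse and the stream handling are placeholders):
// Reuse one receive buffer per connection instead of allocating a new byte[]
// for every read; Parse and the buffer size are illustrative.
private readonly byte[] receiveBuffer = new byte[64 * 1024];

private void ReadLoop(System.Net.Sockets.NetworkStream stream)
{
    int bytesRead;
    while ((bytesRead = stream.Read(receiveBuffer, 0, receiveBuffer.Length)) > 0)
    {
        Parse(receiveBuffer, bytesRead);   // parse in place; avoid copying into new arrays
    }
}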
Here are some MSDN articles that can help too:
Latency Modes
Constrained Execution Regions (this is why we call PrepareConstrainedRegions())
Note: GCSettings.LatencyMode = GCLatencyMode.LowLatency can only be set if GCSettings.IsServerGC == false. IsServerGC can be changed in App.config:
<runtime>
<gcServer enabled="false" />
</runtime>

Limiting the allowed RAM for a service, possibly using MaxWorkingSet

I have a service that runs on a domain controller that is randomly accessed by other computers on the network. I can't shutdown the service and run it only when needed (this would defeat the purpose of running it as a service anyway).
The problem is that the memory used by the service doesn't seem to ever get cleared, and increases every time the service is queried by a remote computer.
Is there a way to set a limit on the RAM used by the application?
I've found a few references to using MaxWorkingSet, but none of the references actually tell me how to use it. Can I use MaxWorkingSet to limit the RAM used to, for example, 35MB? and if so, how? (what is the syntax etc?)
Otherwise, is there a function like "clearall()" that I could use to reset the variables and memory at the end of each run through? I've tried using GC.Collect(), but it didn't work.
Strictly speaking, MaxWorkingSet only affects the working set, which is the amount of physical memory the process uses. To restrict overall memory usage, you need the Win32 Job Object API. But it is dangerous if your program really needs that memory (much code doesn't handle OutOfMemoryException, and the .NET runtime can sometimes behave strangely when memory is short).
You need to:
Create a Win32 Job object
Set the maximum memory to the job
Assign your process to the job
Here is a wrapper for .NET.
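If you want to roll it yourself, a rough P/Invoke sketch looks something like this (error handling and handle cleanup omitted; the 200 MB limit in the usage comment is illustrative):
using System;
using System.Diagnostics;
using System.Runtime.InteropServices;

// Rough sketch: cap the current process's committed memory via a Win32 Job Object.
static class JobMemoryLimit
{
    const int JobObjectExtendedLimitInformation = 9;
    const uint JOB_OBJECT_LIMIT_PROCESS_MEMORY = 0x00000100;

    [StructLayout(LayoutKind.Sequential)]
    struct JOBOBJECT_BASIC_LIMIT_INFORMATION
    {
        public long PerProcessUserTimeLimit;
        public long PerJobUserTimeLimit;
        public uint LimitFlags;
        public UIntPtr MinimumWorkingSetSize;
        public UIntPtr MaximumWorkingSetSize;
        public uint ActiveProcessLimit;
        public UIntPtr Affinity;
        public uint PriorityClass;
        public uint SchedulingClass;
    }

    [StructLayout(LayoutKind.Sequential)]
    struct IO_COUNTERS
    {
        public ulong ReadOperationCount, WriteOperationCount, OtherOperationCount;
        public ulong ReadTransferCount, WriteTransferCount, OtherTransferCount;
    }

    [StructLayout(LayoutKind.Sequential)]
    struct JOBOBJECT_EXTENDED_LIMIT_INFORMATION
    {
        public JOBOBJECT_BASIC_LIMIT_INFORMATION BasicLimitInformation;
        public IO_COUNTERS IoInfo;
        public UIntPtr ProcessMemoryLimit;
        public UIntPtr JobMemoryLimit;
        public UIntPtr PeakProcessMemoryUsed;
        public UIntPtr PeakJobMemoryUsed;
    }

    [DllImport("kernel32.dll", CharSet = CharSet.Unicode)]
    static extern IntPtr CreateJobObject(IntPtr lpJobAttributes, string lpName);

    [DllImport("kernel32.dll")]
    static extern bool SetInformationJobObject(IntPtr hJob, int infoClass,
        ref JOBOBJECT_EXTENDED_LIMIT_INFORMATION info, uint cbLength);

    [DllImport("kernel32.dll")]
    static extern bool AssignProcessToJobObject(IntPtr hJob, IntPtr hProcess);

    public static void Apply(ulong maxBytes)
    {
        IntPtr job = CreateJobObject(IntPtr.Zero, null);                    // 1. create the job
        var info = new JOBOBJECT_EXTENDED_LIMIT_INFORMATION();
        info.BasicLimitInformation.LimitFlags = JOB_OBJECT_LIMIT_PROCESS_MEMORY;
        info.ProcessMemoryLimit = new UIntPtr(maxBytes);                    // 2. set the memory cap
        SetInformationJobObject(job, JobObjectExtendedLimitInformation,
                                ref info, (uint)Marshal.SizeOf(info));
        AssignProcessToJobObject(job, Process.GetCurrentProcess().Handle);  // 3. attach this process
    }
}

// Usage sketch (e.g. at service start): JobMemoryLimit.Apply(200UL * 1024 * 1024);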
Besides, you could try forcing a full, compacting GC like this (for .NET 4.6 or newer):
GCSettings.LargeObjectHeapCompactionMode = GCLargeObjectHeapCompactionMode.CompactOnce;
GC.Collect(2, GCCollectionMode.Forced, true, true);
(for older versions, but it sometimes doesn't work)
GC.Collect(2, GCCollectionMode.Forced);
The third parameter in the .NET 4.6 overload of GC.Collect() tells the runtime whether to perform the collection as a blocking operation right away; the older overloads mostly leave the timing decision to the runtime.
As for programming advice, I suggest wrapping each query in its own class that is explicitly disposed once the query is done. That may help the GC reclaim memory sooner.
Finally, there are some things in the .NET Framework that you do need to manage yourself. For example, the GDI handle returned by Bitmap.GetHbitmap must be released manually.
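As a small illustration of that last point (DeleteObject is the standard Win32 call for releasing such a handle):
// Native GDI handles returned by Bitmap.GetHbitmap() are invisible to the GC
// and must be released explicitly with DeleteObject.
[System.Runtime.InteropServices.DllImport("gdi32.dll")]
static extern bool DeleteObject(IntPtr hObject);

static void UseHBitmap(System.Drawing.Bitmap bmp)
{
    IntPtr hBitmap = bmp.GetHbitmap();
    try
    {
        // ... hand hBitmap to native/interop code here ...
    }
    finally
    {
        DeleteObject(hBitmap);   // otherwise the GDI object leaks
    }
}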

Why is server garbage collection faster if I'm not collecting garbage?

I'm writing a data structure and if I set <gcServer enabled="true" /> in my app.config file, the program adds 500,000 items in 200 milliseconds. If I set <gcServer enabled="false" /> it takes 300 milliseconds. That is, setting this flag to false makes it take 50% longer consistently, as measured by a Stopwatch.
I'm wondering why this is, because I am not doing any garbage collecting. I know it is done automatically sometimes, but after profiling with CLRProfiler I can confirm 0 collections are occurring.
Does anyone know why this is happening? If the garbage collector isn't even running, then why is a server garbage collector so much faster? Here is the code where I am checking the speed differences:
Stopwatch sw = Stopwatch.StartNew();
foreach (string s in items)
{
    dataStructure.Add(s, s + "a");
}
sw.Stop();
There are five things here I need to cover.
First is understanding that this flag does not turn the garbage collector on or off, but merely determines which mode it uses. Both examples have garbage collection enabled; only the first (faster) example used server garbage collection.
Second is understanding that just because no collections were performed, it does not mean the GC never stopped to check whether it needed to run a collection.
Third is this excerpt from the gcServer docs:
For single-processor computers, the default workstation garbage collection should be the fastest option.
Fourth is understanding "single processor" in the above context really does refer to processors, and not to cores.
Put all this together, and the behavior from the question exactly matches the documentation. Move along; nothing to see here.
Fifth, and finally, is understanding the values at play and how server loads differ from desktop loads. The primary importance in a server system is stability. This is true even over performance; performance doesn't matter if the system crashes all the time. Additionally, servers are more likely to run workloads where a process has a long life-time... even months or years at a time without restarting. So a server admin may sacrifice some performance to get a GC that runs a little more often and runs a more complete check, which may include memory compaction to reclaim address space, and thus avoid issues with OutOfMemoryExceptions that can otherwise cause problems in long-lived .Net applications.

Causes for web service memory leak

We have a web service that uses up more and more private bytes until that application stops responding. The managed heap (mostly Gen2) will show some 200-250 MB, while private bytes shows over 1GB. What are possible causes of a memory leak outside of the managed heap?
I've already checked for the following:
Prolific dynamic assemblies (Xml serialization, regex, etc.)
Session state (turned off)
System.Policy.Evidence memory leak (SP1 installed)
Threading deadlock (no use of Join, only lock)
Use of SQLOLEDB (using SqlClient)
What other sources can I check for?
Make sure your app is compiled in release mode. If you compile under debug mode, and deploy that, simply instantiating a class that has an event defined (the event doesn't even need to be raised) will cause a small piece of memory to leak. Instantiating enough of these objects over a long enough period of time will cause all the memory to be used. I've seen web apps that would use up all the memory within a matter of hours, simply because a debug build was used. Compiling as a release build immediately and permanently fixed the problem.
I would recommend you view snapshots of the heap at various times, and see what's using up the memory. If your application is using Java, then jmap works extremely well - you just give it the PID of the java process.
If using something else, try Lambda Probe (http://www.lambdaprobe.org/d/index.htm). It doesn't show as much detail, but will at least show you memory use.
I had a bad memory leak in my JDBC code that ended up being traced to a change in the JDBC specification a few years ago that I missed (with respect to closing statements and such). It took a combination of Lambda Probe and then jmap to localize the problem enough to fix it.
Cheers,
-R
Also look for:
COM Assemblies being loaded
DB Connections not being closed
Cache & State (Session, Application)
Try forcing the Garbage Collector (GC) to run (write a page that does it when it loads) or try the instrumentation, but that's a bit hit and miss in my experience. Another thing would be to keep it running and see if it runs out of memory.
What could be happening is that there is plenty of memory and Windows does not signal your app to clean up. This causes the app to look like it's using more and more memory because it can, when in fact the system can reclaim the memory when it needs to. SQL Server and Exchange do this a lot. The idea is: why cause an unnecessary cleanup when there are plenty of resources?
Rob
Garbage collection often does not kick in aggressively until there is real memory pressure, which can make things look like a memory leak when one is not around.
Do you have any events and event handlers within the service? Services often have static variables, and if you create event handlers that hook a non-static instance object up to one of those static instances, the static will hold a reference to the instance forever, which will stop it from being released.
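A hedged sketch of that leak pattern (the class names are made up): the static event keeps every subscriber alive until it unsubscribes.
// Hypothetical example of the static-event leak described above.
static class RequestNotifier
{
    // The invocation list of a static event is reachable from a GC root.
    public static event EventHandler RequestProcessed;

    public static void Raise()
    {
        EventHandler handler = RequestProcessed;
        if (handler != null)
            handler(null, EventArgs.Empty);
    }
}

class RequestHandler
{
    public RequestHandler()
    {
        // Subscribing ties this instance's lifetime to the static event...
        RequestNotifier.RequestProcessed += OnRequestProcessed;
    }

    void OnRequestProcessed(object sender, EventArgs e) { /* handle it */ }

    // ...so it must unsubscribe when it is done, or the instance can never be collected.
    public void Close()
    {
        RequestNotifier.RequestProcessed -= OnRequestProcessed;
    }
}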
Double check that trace is not enabled. I've seen instances of trace slowly consuming memory until the app reaches its app pool limit.
For what it is worth, my issue was not with the service, but with the HttpClient that was calling it.
The client was not properly disposed, so it kept the connection open and the memory locked.
After disposing the client the service released the memory as expected.
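In code, the fix described here is roughly this (the URL is a placeholder; whether to dispose per call or reuse one long-lived HttpClient is a separate design decision):
// Dispose the client so its connection and buffers are released promptly.
using (var client = new System.Net.Http.HttpClient())
{
    var response = client.GetAsync("http://example.com/api/values").Result;
    // ... consume the response ...
}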

How to avoid garbage collection in real time .NET application?

I'm writing a financial C# application which receives messages from the network, translates them into different objects according to the message type, and finally applies the application's business logic to them.
The point is that after the business logic is applied, I'm very sure I will never need this instance again. Rather than wait for the garbage collector to free them, I'd like to explicitly "delete" them.
Is there a better way to do this in C#? Should I use a pool of objects to always reuse the same set of instances, or is there a better strategy?
The goal is to prevent garbage collection from using any CPU during a time-critical process.
Don't delete them right away. Calling the garbage collector for each object is a bad idea. Normally you really don't want to mess with the garbage collector at all, and even time critical processes are just race conditions waiting to happen if they're that sensitive.
But if you know you'll have busy vs light load periods for your app, you might try a more general GC.Collect() when you reach a light period to encourage cleanup before the next busy period.
Look here: http://msdn.microsoft.com/en-us/library/bb384202.aspx
You can tell the garbage collector that you're doing something critical at the moment, and it will try to be nice to you.
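A small sketch of that "collect during the quiet period" idea, assuming you can detect the light-load window yourself (IsLightLoadPeriod is a placeholder):
// During a known quiet period, run a full, blocking collection so the next
// busy period starts with a clean heap. IsLightLoadPeriod is hypothetical.
if (IsLightLoadPeriod())
{
    GC.Collect();                    // collect all generations now
    GC.WaitForPendingFinalizers();   // let finalizers run during the quiet period
    GC.Collect();                    // reclaim objects freed by those finalizers
}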
You hit on it yourself -- use a pool of objects and reuse those objects. The semantics of the calls to those objects would need to be hidden behind a factory facade. You'll need to grow the pool in some pre-defined way -- perhaps double the size every time it hits the limit (a high-water-mark algorithm), or grow by a fixed percentage. I'd really strongly advise you not to call GC.Collect().
When the load on your pool gets low enough you could shrink the pool, and that will eventually trigger a garbage collection -- let the CLR worry about it.
Attempting to second-guess the garbage collector is generally a very bad idea. On Windows, the garbage collector is a generational one and can be relied upon to be pretty efficient. There are some noted exceptions to this general rule - the most common being the occurrence of a one-time event that you know for a fact will have caused a lot of old objects to die - once objects are promoted to Gen2 (the longest lived) they tend to hang around.
In the case you mention, it sounds as though you are generating a number of short-lived objects - these will result in Gen0 collections. These happen relatively often anyway, and are the most efficient. You could avoid them by having a reusable pool of objects, if you prefer, but it is best to establish for certain that GC is a performance problem before taking such action - the CLR profiler is the tool for doing this.
It should be noted that the garbage collector is different on different .NET frameworks - on the compact framework (which runs on the Xbox 360 and on mobile platforms) it is a non-generational GC and as such you must be much more careful about what garbage your program generates.
Forcing a GC.Collect() is generally a bad idea, leave the GC to do what it does best. It sounds like the best solution would be to use a pool of objects that you can grow if necessary - I've used this pattern successfully.
This way you avoid not only the garbage collection but the regular allocation cost as well.
Finally, are you sure that the GC is causing you a problem? You should probably measure and prove this before implementing any perf-saving solutions - you may be causing yourself unnecessary work!
"The goal being to avoid the garbage collection to use any CPU during
a time critical process"
Q: If by time critical, you mean you're listening to some esoteric piece of hardware, and you can't afford to miss the interrupt?
A: If so then C# isn't the language to use, you want Assembler, C or C++ for that.
Q: If by time critical, you mean while there are lots of messages in the pipe, and you don't want to let the garbage collector slow things down?
A: If so, you are worrying needlessly. By the sound of things your objects are very short-lived, which means the garbage collector will recycle them very efficiently, without any apparent lag in performance.
However, the only way to know for sure is to test it: set it up to run overnight processing a constant stream of test messages. I'll be stunned if your performance stats can spot when the GC kicks in (and even if you can spot it, I'll be even more surprised if it actually matters).
Get a good understanding and feel for how the garbage collector behaves, and you will understand why what you are thinking of here is not recommended - unless you really want the CLR to spend a lot of time rearranging objects in memory.
http://msdn.microsoft.com/en-us/magazine/bb985010.aspx
http://msdn.microsoft.com/en-us/magazine/bb985011.aspx
How intensive is the app? I wrote an app that captures 3 sound cards (Managed DirectX, 44.1KHz, Stereo, 16-bit), in 8KB blocks, and sends 2 of the 3 streams to another computer via TCP/IP. The UI renders an audio level meter and (smooth) scrolling title/artist for each of the 3 channels. This runs on PCs with XP, 1.8GHz, 512MB, etc. The App uses about 5% of the CPU.
I stayed clear of manually calling GC methods. But I did have to tune a few things that were wasteful. I used RedGate's Ant profiler to hone in on the wasteful portions. An awesome tool!
I wanted to use a pool of pre-allocated byte arrays, but the managed DX Assembly allocates byte buffers internally, then returns that to the App. It turned out that I didn't have to.
If it is absolutely time critical then you should use a deterministic platform like C/C++. Even calling GC.Collect() will generate CPU cycles.
Your question starts off with the suggestion that you want to save memory by getting rid of objects. This is a space-critical optimization. You need to decide what you really want, because the GC is better at optimizing this situation than a human.
From the sound of it, it seems like you're talking about deterministic finalization (destructors in C++), which doesn't exist in C#. The closest thing that you will find in C# is the Disposable pattern. Basically you implement the IDisposable interface.
The basic pattern is this:
public class MyClass : IDisposable
{
    private bool _disposed;

    public void Dispose()
    {
        Dispose(true);
        GC.SuppressFinalize(this);
    }

    protected virtual void Dispose(bool disposing)
    {
        if (_disposed)
            return;

        if (disposing)
        {
            // Dispose managed resources here
        }

        _disposed = true;
    }
}
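Typical usage is then a using block, which guarantees Dispose runs as soon as the scope ends, even if the business logic throws:
// Deterministic cleanup at the end of the scope, without waiting for the GC.
using (var message = new MyClass())
{
    // apply the business logic to the message here
}   // Dispose() is called here, even if an exception was thrown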
You could have a limited number of instances of each type in a pool, and reuse the instances you are already done with. The size of the pool would depend on the amount of messages you'll be processing.
Instead of creating a new instance of an object every time you get a message, why don't you reuse objects that have already been used? This way you won't be fighting against the garbage collector and your heap memory won't be getting fragmented.**
For each message type, you can create a pool to hold the instances that are not in use. Whenever you receive a network message, you look at the message type, pull a waiting instance out of the appropriate pool, and apply your business logic. After that, you put that instance of the message object back into its pool.
You will most likely want to "lazy load" your pool with instances so your code scales easily. Therefore, your pool class will need to detect when a null instance has been pulled and fill it up before handing it out. Then when the calling code puts it back in the pool it's a real instance.
** "Object pooling is a pattern to use that allows objects to be reused rather than allocated and deallocated, which helps to prevent heap fragmentation as well as costly GC compactions."
http://geekswithblogs.net/robp/archive/2008/08/07/speedy-c-part-2-optimizing-memory-allocations---pooling-and.aspx
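A minimal sketch of such a pool, one per message type (the TradeMessage type in the usage comment is made up, and resetting instance state is left to the caller):
// Simple lazy-loading pool: hand out a recycled instance when one is available,
// otherwise create a new one; callers return instances when they are done.
public class MessagePool<T> where T : class, new()
{
    private readonly System.Collections.Concurrent.ConcurrentBag<T> items =
        new System.Collections.Concurrent.ConcurrentBag<T>();

    public T Rent()
    {
        T item;
        return items.TryTake(out item) ? item : new T();   // lazy-load on demand
    }

    public void Return(T item)
    {
        items.Add(item);   // caller should clear the instance's state first
    }
}

// Usage sketch:
// var pool = new MessagePool<TradeMessage>();
// TradeMessage msg = pool.Rent();
// ... populate from the network, apply business logic ...
// pool.Return(msg);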
In theory the GC shouldn't run if your CPU is under heavy load or unless it really needs to. But if you have to, you may want to just keep all of your objects in memory, perhaps a singleton instance, and never clean them up unless you're ready. That's probably the only way to guarantee when the GC runs.
