Requesting memory for your application - C#

I am having a similar issue to this person. The primary difference being the application is NOT meant for a developer environment, and therefore I need to know how to optimize the space used by Sql Server (possibly per machine based on specs).
I was intrigued by Ricardo C's answer, particularly the following:
Extracted from the SQL Server documentation:
Maximum server memory (in MB)
Specifies the maximum amount of memory SQL Server can allocate when it starts and while it runs. This configuration option can be set to a specific value if you know there are multiple applications running at the same time as SQL Server and you want to guarantee that these applications have sufficient memory to run. If these other applications, such as Web or e-mail servers, request memory only as needed, then do not set the option, because SQL Server will release memory to them as needed. However, applications often use whatever memory is available when they start and do not request more if needed. If an application that behaves in this manner runs on the same computer at the same time as SQL Server, set the option to a value that guarantees that the memory required by the application is not allocated by SQL Server.
My question is: how does an application request memory from the OS when it needs it? Is this something built into compilation or something managed by the developer? The two primary apps running on this machine are Sql Server and the (fairly heavyweight) C# application I'm developing, and I'm almost certain we didn't specifically do anything in the realm of asking the OS for memory. Is there a correct/necessary way to do this?

Some applications allocate a lot of memory at startup, and then run their own memory management system on it.
This can be good for applications that have particular allocation patterns, and that feel they can do a better job than the more generic memory manager provided by the runtime system.
Many games do this, since they often have a very good idea of what their memory usage pattern is going to look like, and are often heavily optimized. The default/system allocator is general-purpose and not always fast enough. Doom did this and is fairly well known for it; of course its code is available and widely discussed.
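For a rough illustration of the pattern (a minimal sketch only; real game allocators are far more elaborate, and the class and method names here are made up), an application can grab one big block up front and hand out slices of it itself:

using System;

// Minimal sketch of an up-front allocation plus a trivial "bump" allocator.
class BumpArena
{
    private readonly byte[] _buffer;   // one big block requested at startup
    private int _offset;               // next free position inside the block

    public BumpArena(int sizeInBytes)
    {
        _buffer = new byte[sizeInBytes];   // single large request to the runtime/OS
    }

    // Hand out a slice of the pre-allocated block; no further OS requests are made.
    public ArraySegment<byte> Allocate(int count)
    {
        if (_offset + count > _buffer.Length)
            throw new OutOfMemoryException("Arena exhausted");
        var segment = new ArraySegment<byte>(_buffer, _offset, count);
        _offset += count;
        return segment;
    }

    public void Reset() => _offset = 0;   // "free" everything at once
}

class Demo
{
    static void Main()
    {
        var arena = new BumpArena(64 * 1024 * 1024);  // 64 MB up front
        var chunk = arena.Allocate(1024);             // served from the arena, not the OS
        Console.WriteLine($"Got {chunk.Count} bytes at offset {chunk.Offset}");
    }
}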
In "managed" languages like C# I think this is very rare, and nothing you need to worry about.

Each time you create a new object, you are asking the .NET garbage collector to give you memory. If the GC has insufficient memory on the managed heap then it will ask the OS for more.
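For illustration, a minimal sketch that makes this visible using the standard GC class (the exact numbers will vary by machine and runtime):

using System;
using System.Collections.Generic;

class HeapGrowthDemo
{
    static void Main()
    {
        Console.WriteLine($"Managed heap before: {GC.GetTotalMemory(false):N0} bytes");

        // Plain 'new' is all it takes: the GC serves these from the managed heap,
        // and asks the OS for more when its current segments are full.
        var blocks = new List<byte[]>();
        for (int i = 0; i < 100; i++)
            blocks.Add(new byte[1024 * 1024]);   // 1 MB each

        Console.WriteLine($"Managed heap after:  {GC.GetTotalMemory(false):N0} bytes");
        GC.KeepAlive(blocks);
    }
}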
As the other question says, although SQL Server is meant to give the memory back, it doesn't seem to do it very well. There are no hard and fast rules here; you will have to guess at a setting for SQL Server and then test the performance. If you post some information about the server, the database size, and how much memory your application seems to require, then I am sure people will be happy to give you suggestions for a starting configuration.
One warning though, I think changing its memory limits requires a service re-start.

It will depend on a few things - in particular the Operating System, and the language used.
For instance, under MacOS Classic, it was impossible to have more memory allocated after startup - we used to have to go and modify how much memory was allocated using the Finder, and then restart the application. Those were the bad old days.
Modern operating systems allow running processes to request more memory - for instance in C, you can use malloc(), calloc(), realloc() or similar to request chunks of memory. In dynamic languages, you just create objects or variables, and more memory is allocated.
In Java, there is a limit on how much memory the JVM has access to - and this can only be changed by restarting the JVM and passing some arguments to it (sounds like the bad old days, doesn't it?).
In Objective-C, in addition to the malloc() family of functions, you can also create objects on the heap by sending alloc to a class:
[SomeClass alloc];
which is more often seen as
[[SomeClass alloc] init];
Note that this is slightly different to creating objects on the stack - if you are serious about programming learning the difference between these two might be useful, too :)
In summary - the programmer needs to ask the OS for more memory. This can be implicit (in dynamic languages, by creating objects, or by creating objects on the heap) or explicit, such as in C by using malloc()/calloc()/etc.
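To put that summary in C# terms (a hedged sketch; Marshal.AllocHGlobal is roughly the managed world's window onto malloc()-style explicit allocation):

using System;
using System.Runtime.InteropServices;

class AllocationStyles
{
    static void Main()
    {
        // Implicit: 'new' asks the managed heap, which asks the OS when it needs more.
        var managed = new byte[4096];

        // Explicit: unmanaged allocation, roughly analogous to malloc()/free() in C.
        IntPtr unmanaged = Marshal.AllocHGlobal(4096);
        try
        {
            Marshal.WriteByte(unmanaged, 0, 42);   // use the raw memory
        }
        finally
        {
            Marshal.FreeHGlobal(unmanaged);        // unlike the GC, you must free it yourself
        }

        GC.KeepAlive(managed);
    }
}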

Related

How can I obtain a memory address for another C# app to read?

I want two C# apps to communicate with each other through memory. On the host side I used:
int* ptr = &level;
And for the client side I want to use:
ReadProcessMemory((int)_handle, <returned result of int* ptr>, _buffer, 2, ref bytesRead);
But ReadProcessMemory doesn't work: for example, level is set to 3, yet ReadProcessMemory returns 0. What is going on here? (NOTE: the "level" field is not cleared from memory.)
I have tried the int* ptr approach many times, because lots of websites tell me to do that, but it doesn't work well with ReadProcessMemory: I set level = 3, but ReadProcessMemory reads level back as 0.
What you ask, in the way you ask it, is pretty much dangerous, as the process is entirely managed by the CLR. Depending on what you want to share and how, you could consider sockets or pipes.
Alternatively, you could use interop, but it requires a certain expertise and tinkering in my opinion.
The cleanest way for two C# applications to communicate via memory is to use memory-mapped files. Messing with the memory of a managed process can lead to subtle issues; memory-mapped files are a supported way to share information.
Keep in mind that each memory-mapped file may be mapped at a different address in each process, therefore you need to structure it without the use of absolute pointers.
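A minimal sketch of that idea using System.IO.MemoryMappedFiles (the map name "SharedLevel" and the single-int layout are just examples):

using System;
using System.IO.MemoryMappedFiles;

class SharedLevelDemo
{
    // Writer ("host") side: create a named mapping and store the value.
    static void RunHost()
    {
        using (var mmf = MemoryMappedFile.CreateNew("SharedLevel", 4))
        using (var accessor = mmf.CreateViewAccessor())
        {
            accessor.Write(0, 3);       // e.g. level = 3
            Console.ReadLine();         // keep the mapping alive while the reader runs
        }
    }

    // Reader ("client") side: open the same named mapping and read the value back.
    static void RunClient()
    {
        using (var mmf = MemoryMappedFile.OpenExisting("SharedLevel"))
        using (var accessor = mmf.CreateViewAccessor())
        {
            Console.WriteLine("level = " + accessor.ReadInt32(0));
        }
    }

    static void Main(string[] args)
    {
        if (args.Length > 0 && args[0] == "host") RunHost(); else RunClient();
    }
}

Because both sides agree on an offset (0) inside the mapping rather than an absolute pointer, the relocation issue mentioned above does not matter.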
Edit:
Direct raw memory access requires knowing the exact address to read in the target process, as the virtual addresses allocated there are different from (and possibly overlap with) those of the source process. C# applications are hosted by the Common Language Runtime, which is in control of everything, including memory allocation. In particular, a standard C# application does not manage its own memory at all: as the runtime moves objects around as part of the normal application lifetime, their addresses change over time.
If you are in control of the target application, you can pin the object (for example via a pinned GCHandle) to forbid movement, take the address of the object, and pass it to the other process. The other process must then open the target process for reading, map the memory segments, and calculate the location of the memory to read by translating the virtual address.
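A hedged sketch of the pinning half of that, on the side that owns the data (publishing the address to the other process and the ReadProcessMemory call itself are left out):

using System;
using System.Runtime.InteropServices;

class PinDemo
{
    static int[] level = { 3 };   // data to expose; an array is used so it lives on the managed heap

    static void Main()
    {
        // Pin the object so the GC cannot move it, then publish its address
        // (e.g. write it to a file or pipe that the other process reads).
        GCHandle handle = GCHandle.Alloc(level, GCHandleType.Pinned);
        try
        {
            IntPtr address = handle.AddrOfPinnedObject();
            Console.WriteLine($"Read 4 bytes at 0x{address.ToInt64():X} in this process");
            Console.ReadLine();   // keep the pin (and thus the address) valid while the reader works
        }
        finally
        {
            handle.Free();        // always release the pin
        }
    }
}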
What you ask requires cooperating processes and a lot of low-level knowledge, and in the end you might never be able to read updated memory changes, as the CLR might not write the values back to memory (have a look at volatile for this).
It is certainly exciting to write such software, but when you are in control of both applications, there are cleaner and much more reliable ways to achieve your goal.
As a side note, this technique is used by trainers, hacker tools and viruses, therefore antivirus software will raise red flags when it sees such behavior.

does mono/.Net GC release free allocated memory back to OS after collection? if not, why?

I have heard many times that once a C# managed program requests more memory from the OS, it doesn't free it back unless the system is out of memory. E.g. when an object is collected, it gets deleted, and the memory that was occupied by the object is free to be reused by another managed object, but the memory itself is not returned to the operating system (for example, mono on Unix wouldn't call brk/sbrk to decrease the amount of virtual memory available to the process back to what it was before the allocation).
I don't know if this really happens or not, but I can see that my C# applications, running on Linux, use a small amount of memory at the beginning; then, when I do something memory-expensive, more is allocated, but later on, when all the objects have been collected (I can verify that by putting debug messages in the destructors), the memory is not freed. On the other hand, no more memory is allocated when I run that memory-expensive operation again. The program just keeps using the same amount of memory until it is terminated.
Maybe it is just my misunderstanding of how the GC in .NET works, but if it really does work like this, why? What is the benefit of keeping the allocated memory for later instead of returning it to the system? How can it even know whether the system needs it back or not? What about other applications that crash or can't start because of the OOM caused by this effect?
I know that people will probably answer something like "the GC manages memory better than you ever could, just don't care about it" or "the GC knows what it does best" or "it doesn't matter at all, it's just virtual memory", but it does matter: on my 2 GB laptop I hit OOM (and the kernel OOM killer gets started because of that) very often after running C# applications for some time, precisely because of this irresponsible memory management.
Note: I was testing this all with mono on Linux because I really have a hard time understanding how Windows manages memory, so debugging on Linux is much easier for me; also, Linux memory management is open source code, while the memory management of the Windows kernel / .NET is rather a mystery to me.
The memory manager works this way because there is no benefit of having a lot of unused system memory when you don't need it.
If the memory manager always tried to have as little memory allocated as possible, it would do a lot of work for no reason. It would only slow the application down, and the only benefit would be more free memory that no application is using.
Whenever the system needs more memory, it will tell the running applications to return as much as possible. The same signal is also sent to an application when you minimise it.
If this doesn't work the same with Mono in Linux, then that is a problem with that specific implementation.
Generally, if an app needs memory once, it will need it again. Releasing memory back to the OS only to request it again is overhead, and if nothing else wants the memory, why bother? The GC is optimizing for the very likely scenario of wanting it again. Additionally, releasing memory back requires entire, contiguous blocks that can be handed back, which has a very specific impact on things like compaction: it isn't quite as simple as "hey, I'm not using most of this: have it back" - the GC needs to figure out which blocks can be released, presumably after a full collect-and-compact (relocate objects, etc.) cycle.
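On newer runtimes (roughly .NET Framework 4.5.1 and later) you can trigger the "full collect and compact" part explicitly; a minimal sketch, with no guarantee that freed segments actually go back to the OS:

using System;
using System.Runtime;

class CompactDemo
{
    static void Main()
    {
        // Ask for the large object heap to be compacted during the next blocking collection,
        // then force that collection. Whether freed segments are actually returned to the OS
        // is still entirely up to the GC/runtime.
        GCSettings.LargeObjectHeapCompactionMode = GCLargeObjectHeapCompactionMode.CompactOnce;
        GC.Collect();
        Console.WriteLine($"Managed heap now: {GC.GetTotalMemory(false):N0} bytes");
    }
}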

How to prevent or minimize the negative effects of .NET GC in a real time app?

Are there any tips, tricks and techniques to prevent or minimize slowdowns or temporary freeze of an app because of the .NET GC?
Maybe something along the lines of:
Try to use structs if you can, unless the data is too large or will be mostly used inside other classes, etc.
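As a rough sketch of that struct idea (illustrative only; the type here is made up):

using System;

// A small value type: stored inline (on the stack or inside its containing array/object),
// so creating many of them does not add individual objects for the GC to track.
struct Point3
{
    public float X, Y, Z;
    public Point3(float x, float y, float z) { X = x; Y = y; Z = z; }
}

class StructDemo
{
    static void Main()
    {
        long before = GC.GetTotalMemory(true);

        var points = new Point3[100000];           // one allocation for the whole array
        for (int i = 0; i < points.Length; i++)
            points[i] = new Point3(i, i, i);       // no per-element heap allocations

        long after = GC.GetTotalMemory(true);
        Console.WriteLine($"Heap growth: {after - before:N0} bytes for {points.Length:N0} points");
        GC.KeepAlive(points);
    }
}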
The description of your App does not fit the usual meaning of "realtime". Realtime is commonly used for software that has a max latency in milliseconds or less.
You have a requirement of responsiveness to the user, meaning you could probably tolerate an incidental delay of 500 ms or more. 100 ms won't be noticed.
Luckily for you, the GC won't cause delays that long. And if it did, you could try the server GC or the background GC mode, but I know little about the details.
But if your "user experience" does suffer, the GC probably won't be the cause.
IMHO, if the performance of your application is being affected noticeably by the GC, something is wrong. The GC is designed to work without intervention and without significantly affecting your application. In other words, you shouldn't have to code with the details of the GC in mind.
I would examine the structure of your application and see where the bottlenecks are, maybe using a profiler. Maybe there are places where you could reduce the number of objects that are being created and destroyed.
If parts of your application really need to be real-time, perhaps they should be written in another language that is designed for that sort of thing.
Another trick is to use GC.RegisterForFullGCNotification on the back end.
Let's say you have a load-balancing server and N app servers. When the load balancer receives notification of a possible full GC on one of the servers, it forwards requests to the other servers for some time, so the SLA will not be affected by the GC (which is especially useful for x64 boxes where more than 4 GB can be addressed).
Updated
No, unfortunately I don't have code, but there is a very simple example at MSDN.com with dummy methods like RedirectRequests and AcceptRequests, which can be found here: Garbage Collection Notifications
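The actual API is GC.RegisterForFullGCNotification together with GC.WaitForFullGCApproach / GC.WaitForFullGCComplete; a hedged sketch of the shape of that MSDN example, where RedirectRequestsToOtherServers and AcceptRequestsAgain stand in for your own load-balancer hooks:

using System;
using System.Threading;

class GcNotificationLoop
{
    // Hypothetical hooks into your own load balancer.
    static void RedirectRequestsToOtherServers() { /* ... */ }
    static void AcceptRequestsAgain() { /* ... */ }

    static void Main()
    {
        // Ask the runtime to raise notifications when a full GC is getting close
        // (thresholds are 1-99; 10 is just an example value).
        GC.RegisterForFullGCNotification(10, 10);

        var poller = new Thread(() =>
        {
            while (true)
            {
                if (GC.WaitForFullGCApproach() == GCNotificationStatus.Succeeded)
                    RedirectRequestsToOtherServers();   // drain this box before the pause

                if (GC.WaitForFullGCComplete() == GCNotificationStatus.Succeeded)
                    AcceptRequestsAgain();              // full GC is over, take traffic again
            }
        });
        poller.IsBackground = true;
        poller.Start();

        // ... normal request processing continues on other threads ...
        Thread.Sleep(Timeout.Infinite);
    }
}

Note that on the .NET Framework these notifications are documented to work only when concurrent GC is disabled.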

Hitting a memory limit slows down the .Net application

We have a 64-bit C#/.NET 3.0 application that runs on a 64-bit Windows server. From time to time the app uses a large amount of memory, which is available. In some instances the application stops allocating additional memory and slows down significantly (500+ times slower). When I check the memory in Task Manager, the amount of memory used barely changes. The application keeps running very slowly and never throws an out-of-memory exception.
Any ideas? Let me know if more data is needed.
You might try enabling server mode for the Garbage Collector. By default, all .NET apps run in Workstation Mode, where the GC tries to do its sweeps while keeping the application running. If you turn on server mode, it temporarily stops the application so that it can free up memory (much) faster, and it also uses different heaps for each processor/core.
Most server apps will see a performance improvement using the GC server mode, especially if they allocate a lot of memory. The downside is that your app will basically stall when it starts to run out of memory (until the GC is finished).
To enable this mode, insert the following into your app.config or web.config:
<configuration>
<runtime>
<gcServer enabled="true"/>
</runtime>
</configuration>
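If you want to confirm at runtime which mode actually took effect (GCSettings is available from .NET 4.5 onwards), a quick check:

using System;
using System.Runtime;

class GcModeCheck
{
    static void Main()
    {
        // Prints True once <gcServer enabled="true"/> (or the equivalent setting)
        // has taken effect for this process.
        Console.WriteLine("Server GC: " + GCSettings.IsServerGC);
        Console.WriteLine("Latency mode: " + GCSettings.LatencyMode);
    }
}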
The moment you are hitting the physical memory limit, the OS will start paging (that is, write memory to disk). This will indeed cause the kind of slowdown you are seeing.
Solutions?
Add more memory - this will only help until you hit the new memory limit
Rewrite your app to use less memory
Figure out if you have a memory leak and fix it
If memory is not the issue, perhaps your application is hitting CPU very hard? Do you see the CPU hitting close to 100%? If so, check for large collections that are being iterated over and over.
As with 32-bit Windows operating systems, there is a 2GB limit on the size of an object you can create while running a 64-bit managed application on a 64-bit Windows operating system.
Investigating Memory Issues (MSDN article)
There is an awful lot of good stuff mentioned in the other answers. However, I'm going to chip in my two pence (or cents - depending on where you're from!) anyway.
Assuming that this is indeed a 64-bit process as you have stated, here's a few avenues of investigation...
Which memory usage are you checking? Mem Usage or VMem Size? VMem size is the one that actually matters, since that applies to both paged and non-paged memory. If the two numbers are far out of whack, then the memory usage is indeed the cause of the slow-down.
What's the actual memory usage across the whole server when things start to slow down? Does the slow down also apply to other apps? If so, then you may have a kernel memory issue - which can be due to huge amounts of disk accessing and low-level resource usage (for example, create 20000 mutexes, or load a few thousand bitmaps via code that uses Win32 HBitmaps). You can get some indication of this on the Task Manager (although Windows 2003's version is more informative directly on this than 2008's).
When you say that the app gets significantly slower, how do you know? Are you using vast dictionaries or lists? Could it not just be that the internal data structures are getting so big so as to complicate the work any internal algorithms are performing? When you get to huge numbers some algorithms can start to become slower by orders of magnitude.
What's the CPU load of the application when it's running at full pelt? Is it actually the same as when the slow-down occurs? If the CPU usage decreases as the memory usage goes up, then whatever it's doing is taking the OS longer to fulfil, meaning that it's probably putting too much load on the OS. If there's no difference in CPU load, then my guess is it's internal data structures getting so big as to slow down your algos.
I would certainly be looking at running a Perfmon on the application - starting off with some .Net and native memory counters, Cache hits and misses, and Disk Queue length. Run it over the course of the application from startup to when it starts to run like an asthmatic tortoise, and you might just get a clue from that as well.
Having skimmed through the other answers, I'd say there's a lot of good ideas. Here's one I didn't see:
Get a memory profiler, such as SciTech's MemProfiler. It will tell you what's being allocated and by what, and it lets you slice and dice the results.
It also has video tutorials in case you don't know how to use it. In my case, I discovered I had IDisposable instances that I wasn't wrapping in using(...).

Limiting the size of the managed heap in a C# application

Can I configure my C# application to limit its memory consumption to, say, 200MB?
IOW, I don't want to wait for the automatic GC (which seems to allow the heap to grow much more than actually needed by this application).
I know that in Java there's a command line switch you can pass to the JVM that achieves this.. is there an equivalent in C#?
p.s.
I know that I can invoke the GC from code, but that's something I would rather not have to do periodically. I'd rather set it once upon startup somehow and forget it.
I am not aware of any such options for the regular CLR. I would imagine that you can control this if you implement your own CLR host.
However, I disagree with your assessment of how the heap grows. The heap grows because your application is allocating and holding on to objects.
There are a couple of things you can do to limit the memory usage. Please see this question for some input: Reducing memory usage of .NET applications?
The memory manager lets the host provide an interface through which the CLR will request all memory allocations. It replaces both the Windows® memory APIs and the standard C CLR allocation routines. Moreover, the interface allows the CLR to inform the host of the consequences of failing a particular allocation (for example, failing a memory allocation from a thread holding a lock may have certain reliability consequences). It also permits the host to customize the CLR's response to a failed allocation, ranging from an OutOfMemoryException being thrown all the way up through the process being torn down. The host can also use this manager to recapture memory from the CLR by unloading unused app domains and forcing garbage collection. The memory manager interfaces are listed in
Source and more: http://msdn.microsoft.com/en-us/magazine/cc163567.aspx#S2
Edit :
I actually wouldn't recommend managing memory yourself, because the deeper you get into it, the more problems you will probably run into; the CLR already handles that job very well.
But if you say it's very important for you to handle things on your own, then there is nothing more I can say.
I haven't tried this out, but you could attempt to call SetProcessWorkingSetSizeEx passing in the right flags to enforce that your process never gets more than so much memory. I don't know if the GC will take this into account and clean up more often, or if you'll just get OutOfMemoryExceptions.
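A rough sketch of what that call could look like from C# (the P/Invoke signature follows the Win32 declaration; the QUOTA_LIMITS_HARDWS_* values are taken from the Windows headers, and the 20 MB / 200 MB limits are placeholders):

using System;
using System.Diagnostics;
using System.Runtime.InteropServices;

class WorkingSetLimit
{
    // Win32: BOOL SetProcessWorkingSetSizeEx(HANDLE, SIZE_T, SIZE_T, DWORD)
    [DllImport("kernel32.dll", SetLastError = true)]
    static extern bool SetProcessWorkingSetSizeEx(
        IntPtr hProcess, IntPtr minWorkingSetSize, IntPtr maxWorkingSetSize, uint flags);

    // QUOTA_LIMITS_HARDWS_* flags from the Windows headers.
    const uint QUOTA_LIMITS_HARDWS_MIN_ENABLE = 0x00000001;
    const uint QUOTA_LIMITS_HARDWS_MAX_ENABLE = 0x00000004;

    static void Main()
    {
        IntPtr process = Process.GetCurrentProcess().Handle;

        // Example limits only: hard minimum 20 MB, hard maximum 200 MB of physical memory.
        bool ok = SetProcessWorkingSetSizeEx(
            process,
            new IntPtr(20L * 1024 * 1024),
            new IntPtr(200L * 1024 * 1024),
            QUOTA_LIMITS_HARDWS_MIN_ENABLE | QUOTA_LIMITS_HARDWS_MAX_ENABLE);

        Console.WriteLine(ok
            ? "Hard working-set limit applied"
            : "Failed, error " + Marshal.GetLastWin32Error());
    }
}

Keep in mind this caps the working set (physical memory), not the size of the managed heap, so the likely effect is more paging rather than earlier collections.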
