I developed a simple UDP message server and client application on Windows. The server can send a message to the client, but the client can't send anything; it only listens. The problem is that the client application uses quite a lot of memory: about 7 MB while listening and 9 MB once it has received a packet. Could I reduce the memory usage to less than 1 MB?
How are you measuring your memory footprint? Any managed .NET application, even the smallest, typically has a shared working set of around 50 MB; the actual memory footprint of your app is much smaller than that.
Have you tried calling GC.GetTotalMemory to look at the actual managed memory usage?
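For instance, a minimal sketch comparing the managed heap against the whole-process working set (the class name here is illustrative):

using System;

class MemoryCheck
{
    static void Main()
    {
        // Managed heap only -- what the GC is actually tracking.
        long managed = GC.GetTotalMemory(forceFullCollection: false);

        // Whole-process working set, which includes the CLR itself,
        // JIT-compiled code, loaded assemblies, and so on.
        long workingSet = Environment.WorkingSet;

        Console.WriteLine("Managed heap: {0:N0} bytes", managed);
        Console.WriteLine("Working set:  {0:N0} bytes", workingSet);
    }
}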
Much of this burden is the overhead of running the whole CLR system, garbage collection, etc. If you're super-sensitive to memory footprint (under 10-20 MB), then the CLR may not be for you. Even a basic HelloWorld has a private working set of over 4 MB of RAM.
If you are sensitive to footprint, you might be best served by a true ahead-of-time compiled language like C or C++.
Related
I'm trying to write a client and server program in C#. The client sends requests to the server; the server handles each request in a thread and sends a response back to the client.
I wrote the client and server, but the problem is that some threads use too much memory and block the other requests.
Is there any way to limit the memory usage of a thread, or of the application?
Thanks
There is no mechanism to restrict memory usage on dedicated threads. Most likely there are architectural and/or coding bugs in your program.
You cannot define memory limits per thread; the memory is allocated from a shared pool. Instead, one option would be to put incoming requests in a queue and have a fixed number of worker threads (1, 2, 3, 4, etc.).
This way requests are still handled, but only 4 at a time (or however many workers you choose), which bounds the memory they can consume; see the sketch below.
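As a rough illustration, here is a minimal sketch of that pattern using BlockingCollection from System.Collections.Concurrent (available since .NET 4; the class and names are illustrative, not from the original post):

using System;
using System.Collections.Concurrent;
using System.Threading;

// Bounded queue + fixed worker pool: at most workerCount requests run
// concurrently and at most 100 wait in the buffer, so memory use stays
// bounded no matter how fast requests arrive.
class RequestProcessor
{
    private readonly BlockingCollection<string> _queue =
        new BlockingCollection<string>(boundedCapacity: 100);

    public RequestProcessor(int workerCount)
    {
        for (int i = 0; i < workerCount; i++)
        {
            new Thread(Consume) { IsBackground = true }.Start();
        }
    }

    // Blocks the caller when the buffer is full, applying back-pressure.
    public void Enqueue(string request) => _queue.Add(request);

    private void Consume()
    {
        foreach (var request in _queue.GetConsumingEnumerable())
        {
            // Handle the request and send the response here.
            Console.WriteLine("Handled: " + request);
        }
    }
}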
I have heard many times that once a C# managed program requests more memory from the OS, it doesn't free it back unless the system is out of memory. E.g., when an object is collected, it gets deleted, and the memory that the object occupied is free to be reused by another managed object, but the memory itself is not returned to the operating system (for example, Mono on Unix wouldn't call brk/sbrk to decrease the amount of virtual memory available to the process back to what it was before the allocation).
I don't know if this really happens or not, but I can see that my C# applications, running on Linux, use a small amount of memory at the beginning; then, when I do something memory-expensive, more is allocated, but later, when all the objects are collected (I can verify that by putting debug messages in the destructors), the memory is not freed. On the other hand, no more memory is allocated when I run that memory-expensive operation again. The program just keeps the same amount of memory until it is terminated.
Maybe it is just my misunderstanding of how the GC in .NET works, but if it really does work like this, why is that? What is the benefit of keeping the allocated memory for later instead of returning it to the system? How can it even know whether the system needs it back or not? What about other applications that crash or can't start because of an OOM caused by this effect?
I know that people will probably answer something like "the GC manages memory better than you ever could, just don't worry about it" or "the GC knows what it does best" or "it doesn't matter at all, it's just virtual memory", but it does matter: on my 2 GB laptop I hit OOM (and the kernel OOM killer gets triggered because of it) very often when I have been running C# applications for some time, precisely because of this irresponsible memory management.
Note: I was testing all of this with Mono on Linux, because I have a hard time understanding how Windows manages memory, so debugging on Linux is much easier for me; also, Linux memory management is open source code, while the memory management of the Windows kernel/.NET is rather a mystery to me.
The memory manager works this way because there is no benefit to having a lot of unused system memory when you don't need it.
If the memory manager would always try to have as little memory allocated as possible, that would mean that it would do a lot of work for no reason. It would only slow the application down, and the only benefit would be more free memory that no application is using.
Whenever the system needs more memory, it will tell the running applications to return as much as possible. The same signal is also sent to an application when you minimise it.
If this doesn't work the same with Mono in Linux, then that is a problem with that specific implementation.
Generally, if an app needs memory once, it will need it again. Releasing memory back to the OS only to request it again is overhead, and if nothing else wants the memory: why bother? It is trying to optimize for the very likely scenario of wanting it again. Additionally, releasing memory back requires entire/contiguous blocks that can be handed back, which has a very specific impact on things like compaction: it isn't quite as simple as "hey, I'm not using most of this: have it back" - it needs to figure out which blocks can be released, presumably after a full collect-and-compact (relocating objects, etc.) cycle.
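For what it's worth, about the closest a program can get to encouraging this is forcing a full, compacting collection; whether segments actually go back to the OS is still entirely up to the runtime. A minimal sketch:

using System;

class ForceCollect
{
    static void Main()
    {
        // A full collect-and-compact cycle is a precondition for the CLR
        // to even consider handing whole segments back to the OS.
        GC.Collect();
        GC.WaitForPendingFinalizers();
        GC.Collect(); // collect anything resurrected by finalizers

        Console.WriteLine("Managed heap after full collect: {0:N0} bytes",
            GC.GetTotalMemory(false));
    }
}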
We have a 64-bit C#/.NET 3.0 application that runs on a 64-bit Windows server. From time to time the app can use a large amount of memory, which is available. In some instances the application stops allocating additional memory and slows down significantly (500+ times slower). When I check the memory from Task Manager, the amount of memory used barely changes. The application keeps on running very slowly and never throws an out-of-memory exception.
Any ideas? Let me know if more data is needed.
You might try enabling server mode for the Garbage Collector. By default, all .NET apps run in Workstation Mode, where the GC tries to do its sweeps while keeping the application running. If you turn on server mode, it temporarily stops the application so that it can free up memory (much) faster, and it also uses different heaps for each processor/core.
Most server apps will see a performance improvement using the GC server mode, especially if they allocate a lot of memory. The downside is that your app will basically stall when it starts to run out of memory (until the GC is finished).
To enable this mode, insert the following into your app.config or web.config:

<configuration>
  <runtime>
    <gcServer enabled="true"/>
  </runtime>
</configuration>
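To verify at runtime which mode you actually got, a small sketch (GCSettings lives in System.Runtime):

using System;
using System.Runtime;

class GcModeCheck
{
    static void Main()
    {
        // True when the server GC was enabled via gcServer in the config.
        Console.WriteLine("Server GC: {0}", GCSettings.IsServerGC);
    }
}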
The moment you hit the physical memory limit, the OS will start paging (that is, writing memory out to disk). This will indeed cause the kind of slowdown you are seeing.
Solutions?
Add more memory - this will only help until you hit the new memory limit
Rewrite your app to use less memory
Figure out if you have a memory leak and fix it
If memory is not the issue, perhaps your application is hitting CPU very hard? Do you see the CPU hitting close to 100%? If so, check for large collections that are being iterated over and over.
As with 32-bit Windows operating systems, there is a 2GB limit on the size of an object you can create while running a 64-bit managed application on a 64-bit Windows operating system.
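For instance, this sketch shows the limit in action (exact array-length caps vary slightly by runtime version, and .NET 4.5+ can lift the limit with the gcAllowVeryLargeObjects setting):

using System;

class ObjectSizeLimit
{
    static void Main()
    {
        try
        {
            // ~16 GB in a single object: far over the 2 GB per-object cap,
            // even though a 64-bit process has plenty of address space.
            long[] huge = new long[int.MaxValue];
            Console.WriteLine(huge.Length);
        }
        catch (OutOfMemoryException)
        {
            Console.WriteLine("Single objects are capped at 2 GB.");
        }
    }
}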
Investigating Memory Issues (MSDN article)
There is an awful lot of good stuff mentioned in the other answers. However, I'm going to chip in my two pence (or cents - depending on where you're from!) anyway.
Assuming that this is indeed a 64-bit process as you have stated, here's a few avenues of investigation...
Which memory usage are you checking? Mem Usage or VM Size? VM Size is the one that actually matters, since it covers both paged and non-paged memory. If the two numbers are far out of whack, then memory usage is indeed the cause of the slow-down.
What's the actual memory usage across the whole server when things start to slow down? Does the slow-down also apply to other apps? If so, then you may have a kernel memory issue - which can be due to huge amounts of disk access and low-level resource usage (for example, creating 20,000 mutexes, or loading a few thousand bitmaps via code that uses Win32 HBITMAPs). You can get some indication of this in Task Manager (although Windows 2003's version is more directly informative on this than 2008's).
When you say that the app gets significantly slower, how do you know? Are you using vast dictionaries or lists? Could it not just be that the internal data structures are getting so big as to complicate the work any internal algorithms are performing? When you get to huge numbers, some algorithms can start to become slower by orders of magnitude.
What's the CPU load of the application when it's running at full pelt? Is it the same as when the slow-down occurs? If the CPU usage decreases as the memory usage goes up, that means that whatever it's doing is taking the OS longer to fulfill, i.e. it's probably putting too much load on the OS. If there's no difference in CPU load, then my guess is that it's the internal data structures getting so big as to slow down your algorithms.
I would certainly be looking at running Perfmon against the application - starting off with some .NET and native memory counters, cache hits and misses, and disk queue length. Run it over the course of the application's life, from startup to when it starts to run like an asthmatic tortoise, and you might just get a clue from that as well.
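If you'd rather sample the same counters from code, here is a hedged sketch using System.Diagnostics (the instance name "MyServerApp" is hypothetical - use your process's name as it appears in Perfmon):

using System;
using System.Diagnostics;

class CounterSample
{
    static void Main()
    {
        // ".NET CLR Memory" is a standard counter category on the
        // desktop .NET Framework.
        using (var heapBytes = new PerformanceCounter(
            ".NET CLR Memory", "# Bytes in all Heaps", "MyServerApp"))
        {
            Console.WriteLine("{0:N0} bytes in all GC heaps",
                heapBytes.NextValue());
        }
    }
}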
Having skimmed through the other answers, I'd say there are a lot of good ideas. Here's one I didn't see:
Get a memory profiler, such as SciTech's MemProfiler. It will tell you what's being allocated and by what, and it lets you slice and dice the results.
It also has video tutorials in case you don't know how to use it. In my case, I discovered I had IDisposable instances that I wasn't wrapping in using(...) blocks.
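For reference, the pattern in question - a minimal sketch tied back to the UDP example earlier on this page (the port number is arbitrary):

using System.Net.Sockets;

class DisposeDemo
{
    static void Main()
    {
        // Without 'using', the socket handle lives until the GC happens to
        // finalize the UdpClient; with 'using' it is released promptly.
        using (var client = new UdpClient(11000))
        {
            // ... receive/send datagrams here ...
        } // Dispose() runs here, even if an exception was thrown
    }
}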
Is there any way my application can tell how much memory the user's machine has, and whether the application is getting close to taking up a high percentage of that?
Also, how do you know how much memory the machine gives to the OS, video cards, etc.?
For example, if you have 4 GB of memory, how much actual memory is given to applications? Can you configure this?
Is there any way my application can tell how much memory the user's machine has, and whether the application is getting close to taking up a high percentage of that?
Yes, it's possible (see some of the other answers), but it's going to be very unlikely that your application really needs to care. What is it that you're doing where you think you need to be this sensitive to memory pressure?
Also, how do you know how much memory the machine gives to the OS, video cards, etc.?
Again, this should be possible using WMI calls, but the bigger question is why do you need to do this?
For example, if you have 4 GB of memory, how much actual memory is given to applications? Can you configure this?
No, this isn't a configurable value. When a .NET application starts up the operating system allocates a block of memory for it to use. This is handled by the OS and there is no way to configure the algorithms used to determine the amount of memory to allocate. Likewise, there is no way to configure how much of that memory the .NET runtime uses for the managed heap, stack, large object heap, etc.
I think I read the question a little differently, so hopefully this response isn't too off topic!
You can get a good overview of how much memory your application is consuming by using Windows Task Manager, or even better, Sysinternals Process Explorer. This is a quick way to review your processes at their peaks to see how they are behaving.
Out of the box, an x86 process will only be able to address 2 GB of RAM. This means any single process on your machine can only consume up to 2 GB. In reality, you're likely to be able to consume only 1.5-1.8 GB before getting out-of-memory exceptions.
How much RAM your copy of Windows can actually address will depend on the Windows version and CPU architecture.
Using your example of 4 GB of RAM, the OS is going to give each application up to 2 GB of address space to play in, and it reserves the other 2 GB of each process's address space for itself.
Depending on the operating system you're running, you can tweak this: using the /3GB switch in boot.ini will adjust that ratio to 3 GB for applications and 1 GB for the OS. This has some impact on the OS, so I'd review that impact first and see if you can live with the tradeoff (YMMV).
For a single application to be able to address more than 2 GB under /3GB, you're going to need to set a particular bit (the IMAGE_FILE_LARGE_ADDRESS_AWARE flag) in the PE image header. This question/answer already has good info on this subject.
The game changes under x64 architecture. :)
Some good reference information:
Memory Limits for Windows Releases
Virtual Address Space
I think you can use WMI to get all that information
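For example, a hedged sketch querying Win32_OperatingSystem (requires a reference to System.Management.dll; both values come back in kilobytes):

using System;
using System.Management; // reference System.Management.dll

class WmiMemory
{
    static void Main()
    {
        var searcher = new ManagementObjectSearcher(
            "SELECT TotalVisibleMemorySize, FreePhysicalMemory FROM Win32_OperatingSystem");
        foreach (ManagementObject os in searcher.Get())
        {
            // Both WMI values are reported in kilobytes.
            Console.WriteLine("Total physical: {0} KB", os["TotalVisibleMemorySize"]);
            Console.WriteLine("Free physical:  {0} KB", os["FreePhysicalMemory"]);
        }
    }
}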
If you don't wish to use WMI, you could use GlobalMemoryStatusEx():
Function Call:
http://www.pinvoke.net/default.aspx/kernel32/GlobalMemoryStatusEx.html
Return Data:
http://www.pinvoke.net/default.aspx/Structures/MEMORYSTATUSEX.html
MemoryLoad will give you a number between 0 and 100 that represents the approximate percentage of physical memory in use, and TotalPhys will tell you the total amount of physical memory in bytes.
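Putting those two pinvoke.net pages together, a minimal sketch of the call might look like this (field layout as documented for MEMORYSTATUSEX):

using System;
using System.Runtime.InteropServices;

class NativeMemoryStatus
{
    [StructLayout(LayoutKind.Sequential)]
    struct MEMORYSTATUSEX
    {
        public uint dwLength;
        public uint dwMemoryLoad;
        public ulong ullTotalPhys;
        public ulong ullAvailPhys;
        public ulong ullTotalPageFile;
        public ulong ullAvailPageFile;
        public ulong ullTotalVirtual;
        public ulong ullAvailVirtual;
        public ulong ullAvailExtendedVirtual;
    }

    [DllImport("kernel32.dll", SetLastError = true)]
    static extern bool GlobalMemoryStatusEx(ref MEMORYSTATUSEX lpBuffer);

    static void Main()
    {
        var status = new MEMORYSTATUSEX();
        // The struct size must be filled in before the call.
        status.dwLength = (uint)Marshal.SizeOf(typeof(MEMORYSTATUSEX));
        if (GlobalMemoryStatusEx(ref status))
        {
            Console.WriteLine("Memory load:    {0}%", status.dwMemoryLoad);
            Console.WriteLine("Total physical: {0} bytes", status.ullTotalPhys);
        }
    }
}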
Memory is tricky because usable memory is a blend of physical memory (RAM) and virtual memory (the page file). The specific blend, and what goes where, is determined by the operating system. Luckily, this is somewhat configurable, as Windows allows you to stipulate how much virtual memory to use, if any.
Take note that not all of the memory in 32-bit Windows (XP & Vista) is available for use. Windows may report up to 4 GB installed, but only 3.1-3.2 GB is available for actual use by the operating system and applications. This has to do with legacy addressing issues, IIRC.
Good Luck
I am having a similar issue to this person. The primary difference is that my application is NOT meant for a developer environment, and therefore I need to know how to optimize the space used by SQL Server (possibly per machine, based on its specs).
I was intrigued by Ricardo C's answer, particularly the following:
Extracted from the SQL Server documentation:

Maximum server memory (in MB)

Specifies the maximum amount of memory SQL Server can allocate when it starts and while it runs. This configuration option can be set to a specific value if you know there are multiple applications running at the same time as SQL Server and you want to guarantee that these applications have sufficient memory to run. If these other applications, such as Web or e-mail servers, request memory only as needed, then do not set the option, because SQL Server will release memory to them as needed. However, applications often use whatever memory is available when they start and do not request more if needed. If an application that behaves in this manner runs on the same computer at the same time as SQL Server, set the option to a value that guarantees that the memory required by the application is not allocated by SQL Server.
My question is: how does an application request memory from the OS when it needs it? Is this something built in at compile time, or something managed by the developer? The two primary apps running on this machine are SQL Server and the (fairly heavyweight) C# application I'm developing, and I'm almost certain we didn't specifically do anything in the realm of asking the OS for memory. Is there a correct/necessary way to do this?
Some applications allocate a lot of memory at startup, and then run their own memory management system on it.
This can be good for applications that have particular allocation patterns, and that feel they can do a better job than the more generic memory manager provided by the runtime system.
Many games do this, since they often have a very good idea of what their memory usage pattern is going to look like, and they are often heavily optimized. The default/system allocator is general-purpose and not always fast enough. Doom did this, is fairly well known for it, and of course its code is available and widely discussed.
In "managed" languages like C# I think this is very rare, and nothing you need to worry about.
Each time you create a new object, you are asking the .NET garbage collector to give you memory. If the GC has insufficient memory on the managed heap then it will ask the OS for more.
As the other question says, although SQL Server is meant to give the memory back, it doesn't seem to do it very well. There are not going to be any hard and fast rules here; you will have to guess at a setting for SQL Server and then test the performance. If you post some information about the server, the database size, and how much memory your application seems to require, then I am sure people will be happy to suggest a starting configuration.
One warning, though: I think changing its memory limits requires a service restart.
It will depend on a few things - in particular the Operating System, and the language used.
For instance, under MacOS Classic, it was impossible to have more memory allocated after startup - we used to have to go and modify how much memory was allocated using the Finder, and then restart the application. Those were the bad old days.
Modern operating systems allow running processes to request more memory - for instance, in C you can use malloc(), calloc(), realloc(), or similar to request chunks of memory. In dynamic languages, you just create objects or variables, and more memory is allocated.
In Java, there is a limit on how much memory the JVM has access to - and this can only be changed by restarting the JVM and passing different arguments to it (sounds like the bad old days, doesn't it?).
In Objective-C, in addition to the malloc() family of functions, you can also create objects on the heap using
[SomeClass alloc];
which is more often seen as
[[SomeClass alloc] init];
Note that this is slightly different to creating objects on the stack - if you are serious about programming learning the difference between these two might be useful, too :)
In summary - the programmer needs to ask the OS for more memory. This can be implicit (in dynamic languages, or by creating objects on the heap) or explicit, such as in C by using malloc()/calloc()/etc.
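In C# terms (a hedged sketch, since the posts above are .NET-centric; the names are illustrative), the explicit and implicit paths look like this:

using System;
using System.Runtime.InteropServices;

class AllocationDemo
{
    static void Main()
    {
        // Explicit: the moral equivalent of malloc()/free() in C.
        IntPtr buffer = Marshal.AllocHGlobal(1024); // 1 KB of unmanaged memory
        try
        {
            // ... fill the buffer via Marshal.Copy or unsafe code ...
        }
        finally
        {
            Marshal.FreeHGlobal(buffer); // must be freed manually; the GC won't
        }

        // Implicit: creating a managed object makes the CLR grow the managed
        // heap (asking the OS for more memory) whenever it runs short.
        var bytes = new byte[1024];
        Console.WriteLine(bytes.Length);
    }
}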