Web application uses too much memory - C#

At my hosting company I have two more or less identical .NET applications. They share the same application pool, which has a memory limit of 300 MB.
I can refresh the start page of one of the applications about 10 times; then I get an OutOfMemoryException and the pool crashes.
In my application I print out these memory values:
PrivateMemorySize64: 197 804.00 kb (193.00 mb)
PeakPagedMemorySize64: 247 220.00 kb (241.00 mb)
VirtualMemorySize64: 27 327 128.00 kb (26 686.00 mb)
PagedMemorySize64: 197 804.00 kb (193.00 mb)
PagedSystemMemorySize64: 415.00 kb (0.00 mb)
PeakWorkingSet64: 109 196.00 kb (106.00 mb)
WorkingSet64: 61 196.00 kb (59.00 mb)
GC.GetTotalMemory(true): 2 960.00 kb (2.00 mb)
GC.GetTotalMemory(false): 2 968.00 kb (2.00 mb)
I have read and read and watched videos about memory profiling, but I can't find any problem when I profile my application.
I use ANTS Memory Profiler 8 and get this result when I refresh the start page once after the build:
When I look at the Summary, .NET is using 41.65 MB of the 135.8 MB total private bytes allocated for the application.
These values get bigger with each refresh. Is that normal? After refreshing 8 times I get this:
.NET is using 56.11 MB of the 153 MB total private bytes allocated for the application.
Where should I start? What could be the problem that uses so much memory? Is 300 MB too low?

This is almost certainly a memory leak in your code, likely from not disposing/closing connections to something like a queue or database. Profiling aside, review your code and make sure you close/dispose every appropriate resource; the problem should then resolve itself.
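To illustrate the pattern this answer describes, here is a minimal sketch. The `FakeConnection` class is a placeholder I made up, not anything from the asker's code; the point is that a `using` block guarantees `Dispose` runs even when an exception is thrown mid-operation.

```csharp
using System;

// A stand-in resource; in the real app this would be a SqlConnection,
// message-queue client, or similar holder of unmanaged resources.
class FakeConnection : IDisposable
{
    public bool Disposed { get; private set; }

    public void Query()
    {
        // Simulate work that fails partway through.
        throw new InvalidOperationException("query failed");
    }

    public void Dispose() => Disposed = true;
}

class Demo
{
    // Returns true if the connection was disposed despite the exception.
    public static bool LeakFreeQuery()
    {
        var conn = new FakeConnection();
        try
        {
            using (conn)   // Dispose is called even if Query throws.
            {
                conn.Query();
            }
        }
        catch (InvalidOperationException)
        {
            // Swallowed for the demo.
        }
        return conn.Disposed;
    }
}
```

Without the `using` block, an exception would skip any manual `Close()` call and the connection would linger until the finalizer ran, which is exactly the slow accumulation the profiler shows.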

There were some DB connections that were not disposed. I also have a class that removes ETags, like this:
public class CustomHeaderModule : IHttpModule
{
    public void Dispose() { }

    public void Init(HttpApplication context)
    {
        context.PreSendRequestHeaders += OnPreSendRequestHeaders;
    }

    void OnPreSendRequestHeaders(object sender, EventArgs e)
    {
        HttpContext.Current.Response.Headers.Remove("ETag");
    }
}
Must I unsubscribe the event handler I add in the Init function, or will the GC take care of it?
I also have a lot of this:
Task.Factory.StartNew(() =>
{
    Add(...);
});
But I don't dispose these tasks in my code. Will the GC handle that, or should I do it another way?
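On the event question: a subscription keeps the subscriber reachable from the publisher, so the GC cannot collect the subscriber while the publisher is alive. For an IHttpModule the module and the HttpApplication usually die together, so in practice it rarely leaks, but unsubscribing in Dispose is cheap insurance. A generic sketch of the pattern (these types are illustrative, not the asker's):

```csharp
using System;

class Publisher
{
    public event EventHandler Ping;
    public void Raise() => Ping?.Invoke(this, EventArgs.Empty);
}

class Subscriber : IDisposable
{
    public int Calls { get; private set; }
    private readonly Publisher _pub;

    public Subscriber(Publisher pub)
    {
        _pub = pub;
        _pub.Ping += OnPing;      // mirrors the subscription in Init()
    }

    private void OnPing(object sender, EventArgs e) => Calls++;

    public void Dispose()
    {
        _pub.Ping -= OnPing;      // mirrors unsubscribing in Dispose()
    }
}
```

After `Dispose`, the publisher no longer holds a reference to the subscriber, so the subscriber becomes collectable as soon as nothing else references it.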

.Net app using 1.3GB on system with 16GB RAM throws OutOfMemoryException

I have a process dump from a Windows 10 64-bit .Net Winforms application that suffered from a System.OutOfMemoryException. The dump file is 1.3GB. A managed profiler (dotMemory) says 220MB of heap is allocated (of which 108MB is used).
The app is compiled as AnyCPU, prefer 32-bit is off. It also contains CLI/C++ projects that target x64, so it just won't run in a 32-bit environment. The app happily uses more than 1.3GB in other circumstances.
It is running on a system with 16GB of RAM, so why does it go out of memory?
The exception stack trace:
System.OutOfMemoryException: Out of memory.
at System.Drawing.Graphics.FromHdcInternal(IntPtr hdc)
at System.Drawing.Graphics.FromHdc(IntPtr hdc)
at DevExpress.XtraBars.Docking2010.DocumentsHost.DoPaint(Message& m)
at DevExpress.XtraBars.Docking2010.DocumentsHost.WndProc(Message& m)
at System.Windows.Forms.NativeWindow.Callback(IntPtr hWnd, Int32 msg, IntPtr wparam, IntPtr lparam)
Heap fragmentation could be a factor. This is the report from dotMemory (managed memory); nothing to worry about as far as I can see:
WinDbg gives me this for '!address -summary'. Although a lot of address space is in use, the majority is only 'reserve' and not 'commit'.
0:000> !address -summary
--- Usage Summary ---------------- RgnCount ----------- Total Size -------- %ofBusy %ofTotal
Free 405 7ffe`8db96000 ( 127.994 TB) 100.00%
<unknown> 1515 1`3f3b3000 ( 4.988 GB) 86.22% 0.00%
Image 2261 0`25f26000 ( 607.148 MB) 10.25% 0.00%
Heap 120 0`08324000 ( 131.141 MB) 2.21% 0.00%
Stack 234 0`04bc0000 ( 75.750 MB) 1.28% 0.00%
Other 39 0`00200000 ( 2.000 MB) 0.03% 0.00%
TEB 78 0`0009c000 ( 624.000 kB) 0.01% 0.00%
PEB 1 0`00001000 ( 4.000 kB) 0.00% 0.00%
--- Type Summary (for busy) ------ RgnCount ----------- Total Size -------- %ofBusy %ofTotal
MEM_PRIVATE 1882 1`452fe000 ( 5.081 GB) 87.82% 0.00%
MEM_IMAGE 2261 0`25f26000 ( 607.148 MB) 10.25% 0.00%
MEM_MAPPED 105 0`07236000 ( 114.211 MB) 1.93% 0.00%
--- State Summary ---------------- RgnCount ----------- Total Size -------- %ofBusy %ofTotal
MEM_FREE 405 7ffe`8db96000 ( 127.994 TB) 100.00%
MEM_RESERVE 681 1`22426000 ( 4.535 GB) 78.39% 0.00%
MEM_COMMIT 3567 0`50034000 ( 1.250 GB) 21.61% 0.00%
--- Protect Summary (for commit) - RgnCount ----------- Total Size -------- %ofBusy %ofTotal
PAGE_READWRITE 1835 0`23d12000 ( 573.070 MB) 9.67% 0.00%
PAGE_EXECUTE_READ 195 0`15090000 ( 336.563 MB) 5.68% 0.00%
PAGE_READONLY 854 0`13fde000 ( 319.867 MB) 5.40% 0.00%
PAGE_WRITECOPY 484 0`01633000 ( 22.199 MB) 0.37% 0.00%
PAGE_EXECUTE_READWRITE 92 0`012db000 ( 18.855 MB) 0.32% 0.00%
PAGE_READWRITE|PAGE_WRITECOMBINE 5 0`00830000 ( 8.188 MB) 0.14% 0.00%
PAGE_READWRITE|PAGE_GUARD 78 0`0015e000 ( 1.367 MB) 0.02% 0.00%
PAGE_NOACCESS 24 0`00018000 ( 96.000 kB) 0.00% 0.00%
--- Largest Region by Usage ----------- Base Address -------- Region Size ----------
Free 213`79810000 7de1`2f130000 ( 125.880 TB)
<unknown> 7ff4`a8af0000 1`00020000 ( 4.000 GB)
Image 7ffd`06181000 0`03b43000 ( 59.262 MB)
Heap 213`6b332000 0`0095d000 ( 9.363 MB)
Stack 52`c8600000 0`000fb000 (1004.000 kB)
Other 213`311e0000 0`00181000 ( 1.504 MB)
TEB 52`c8000000 0`00002000 ( 8.000 kB)
PEB 52`c8158000 0`00001000 ( 4.000 kB)
Excessive use of GDI handles is a common cause for trouble, but the application has a watchdog for that: it fails if GDI/user handles reach 8000, well below the 10000 OS limit.
I also found this bug report but it was fixed ages ago.
In this post a typical cause is having Bitmap instances that are not disposed, so their unmanaged memory piles up. But that should then show up in the WinDbg output. Anyway, I've compared the typical use of the application with the faulty behavior, and both have around 1000 Bitmap instances alive (many icons etc.). It could still be caused by a few very large undisposed Bitmaps, but that seems unlikely. And it still doesn't explain why it goes out of memory at 1.3GB on a system with 16GB RAM.
What else can be the cause? Memory fragmentation of the unmanaged memory? What can I do to further investigate this issue?
RAM is not interesting, generally. RAM refers to physical memory and your application does not work with physical memory, it works with virtual memory. You typically have more virtual memory than physical memory, because the page file counts as virtual memory, too. As you can see, WinDbg reports 127 TB of free memory.
However, this does not mean that you can actually get that much memory. You would need 127 TB of free storage on your hard disk, which is unlikely. The typical amount of memory you can use is somewhere around
your RAM size
- kernel stuff
- RAM in use by other applications
+ size of the page file
- page file use by other applications
Calculating that amount after the fact is quite difficult, so we can't really know how much virtual memory was available at the time of the OOM.
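The back-of-envelope estimate above can be written out as a tiny helper. All of its inputs are hypothetical measurements you would have to obtain yourself (Task Manager, performance counters); as the answer says, the real figure at the time of the OOM can't be reconstructed precisely.

```csharp
static class MemoryEstimate
{
    // Rough estimate of virtual memory available to one process.
    // All inputs use the same unit (e.g. GB); Windows can also grow
    // the page file, so treat the result as an approximation only.
    public static long EstimateAvailable(
        long ramSize,
        long kernelUse,
        long otherProcessesRam,
        long pageFileSize,
        long otherProcessesPageFile)
    {
        return ramSize - kernelUse - otherProcessesRam
             + pageFileSize - otherProcessesPageFile;
    }
}
```

For example, 16 GB of RAM with 2 GB of kernel use, 6 GB used by other processes, an 8 GB page file and 4 GB of it in use elsewhere gives roughly 12 GB.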
And it still doesn't explain why it goes out of memory at 1.3GB on a system with 16GB RAM.
You can OOM with even less memory used. Try
#include <cstdlib>
#include <exception>
#include <limits>
int main()
{
    auto const p = std::malloc(std::numeric_limits<std::size_t>::max());
    if (!p) throw std::exception();
}
Memory fragmentation of the unmanaged memory?
No. Largest Region by Usage says there is a contiguous block of 125 TB. A new block would be requested to fulfill the request. You run into fragmentation issues when the largest available block is smaller than what was requested.
it fails if GDI/user handles reach 8000, well below the 10000 OS limit.
8000 is quite a lot, IMHO. Please note that besides the limit of 10,000 per process there is also a limit of 65,536 per session.
What can you do?
Make sure you're not having an OOM due to GDI handles (thanks to Panagiotis Kanavos for the reference)
Follow the instructions of How to use WinDBG to track down .net out of memory exceptions?
Make proper use of the IDisposable interface. The top answer there has an example explicitly for Bitmaps. I'd even consider it OK to call GC.Collect() to indicate to the GC that it might be able to free a lot of memory now.
Check the size of your page file (control panel) and the free disk space for that page file to expand.
Check whether there are updates for your 12-year-old DXperience component and make sure you use the latest version of it.

UWP device total memory

How do you determine a device's total memory? I would like to use a sequential program flow on low-memory devices and a more asynchronous flow on higher-memory devices.
Example: On a device with 1GB of memory my program works but on 512MB device my program hits an OutOfMemoryException as it is caching images from multiple sites asynchronously.
The MemoryManager class has some static properties to get the current usage and limit for the application.
// Gets the app's current memory usage.
MemoryManager.AppMemoryUsage
// Gets the app's memory usage level.
MemoryManager.AppMemoryUsageLevel
// Gets the app's memory usage limit.
MemoryManager.AppMemoryUsageLimit
You can react to the limit changing using the MemoryManager.AppMemoryUsageLimitChanging event
private void OnAppMemoryUsageLimitChanging(
    object sender, AppMemoryUsageLimitChangingEventArgs e)
{
    Debug.WriteLine(String.Format("AppMemoryUsageLimitChanging: old={0} MB, new={1} MB",
        (double)e.OldLimit / 1024 / 1024,
        (double)e.NewLimit / 1024 / 1024));
}
You can use the application's memory limit to decide how best to manage your memory allocation.
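To tie this back to the original question, one approach is to branch on the limit. The 200 MB threshold below is an arbitrary example, not a documented cut-off, and in a real UWP app the value would come from `MemoryManager.AppMemoryUsageLimit`; the decision itself is kept as a pure helper so it can be tested outside UWP.

```csharp
static class FlowChooser
{
    // In a UWP app you would call:
    //   ulong limit = Windows.System.MemoryManager.AppMemoryUsageLimit;
    //   bool sequential = FlowChooser.UseSequentialFlow(limit);
    public static bool UseSequentialFlow(ulong appMemoryUsageLimitBytes)
    {
        const ulong threshold = 200UL * 1024 * 1024; // example: 200 MB
        return appMemoryUsageLimitBytes < threshold;
    }
}
```

On a constrained device you would then cache images one site at a time; with a generous limit you could keep the asynchronous, parallel caching.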

Managed memory use of MemoryMappedFile.CreateViewStream()

Will MemoryMappedFile.CreateViewStream(0, len) allocate a managed block of memory of size len, or will it allocate a smaller buffer that acts as a sliding window over the unmanaged data?
I wonder because I aim to replace an intermediate deserialization buffer that is a MemoryStream today, which is giving me trouble for large datasets, both because of the size of the buffer and because of LOH fragmentation.
If the view stream's internal buffer ends up the same size, making this switch wouldn't make sense.
Edit:
In a quick test I found these numbers when comparing the MemoryStream to the memory-mapped file. Readings are from GC.GetTotalMemory(true)/1024 and Process.GetCurrentProcess().VirtualMemorySize64/1024.
Allocating a 1 GB MemoryStream:
Managed Virtual
Initial: 81 kB 190 896 kB
After alloc: 1 024 084 kB 1 244 852 kB
As expected, a gig of both managed and virtual memory.
Now, for the MemoryMappedFile:
Managed Virtual
Initial: 81 kB 189 616 kB
MMF allocated: 84 kB 189 684 kB
1GB viewstream allocd: 84 kB 1 213 368 kB
Viewstream disposed: 84 kB 190 964 kB
So using a not very scientific test, my assumption is that the ViewStream uses only unmanaged data. Correct?
An MMF like that doesn't solve your problem. A program bombs with OOM because there isn't a hole in the virtual memory address space big enough to fit the allocation. You are still consuming VM address space with an MMF, as you can tell.
Using a small sliding view would be a workaround, but that isn't any different from writing to a file, which is what an MMF does when you remap the view: it needs to flush the dirty pages to disk. Simply streaming to a FileStream is the proper workaround. That still uses RAM; the file system cache helps make writing fast. If you've got a gigabyte of RAM available, not hard to come by these days, then writing to a FileStream is just a memory-to-memory copy. Very fast, 5 gigabytes/sec and up. The file gets written lazily in the background.
Trying too hard to keep data in memory is unproductive in Windows. Private data in memory is backed by the paging file and will be written to that file when Windows needs the RAM for other processes. And read back when you access it again. That's slow, the more memory you use, the worse it gets. Like any demand-paged virtual memory operating system, the distinction between disk and memory is a small one.
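A minimal sketch of the FileStream alternative suggested above. The chunk size and the method shape are my own choices for illustration; the point is that only one small buffer lives in managed memory regardless of the dataset size, so neither the LOH nor the address space is strained.

```csharp
using System;
using System.IO;

class StreamDemo
{
    // Copies a source stream to a file in small chunks, so only one
    // buffer's worth of data is held in managed memory at a time.
    public static long WriteToFile(Stream source, string path)
    {
        long total = 0;
        var buffer = new byte[64 * 1024]; // 64 kB chunk; size is arbitrary
        using (var fs = new FileStream(path, FileMode.Create, FileAccess.Write))
        {
            int read;
            while ((read = source.Read(buffer, 0, buffer.Length)) > 0)
            {
                fs.Write(buffer, 0, read);
                total += read;
            }
        }
        return total;
    }
}
```

Because the writes go through the file system cache, throughput stays close to a memory-to-memory copy while Windows flushes the file lazily, as described above.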
Given the example at http://msdn.microsoft.com/en-us/library/system.io.memorymappedfiles.memorymappedfile.aspx, it seems to me that you get a sliding window; at least, that is how I interpret the example.
Here is the example for convenience:
using System;
using System.IO;
using System.IO.MemoryMappedFiles;
using System.Runtime.InteropServices;

class Program
{
    static void Main(string[] args)
    {
        long offset = 0x10000000; // 256 megabytes
        long length = 0x20000000; // 512 megabytes

        // Create the memory-mapped file.
        using (var mmf = MemoryMappedFile.CreateFromFile(@"c:\ExtremelyLargeImage.data", FileMode.Open, "ImgA"))
        {
            // Create a random access view, from the 256th megabyte (the offset)
            // to the 768th megabyte (the offset plus length).
            using (var accessor = mmf.CreateViewAccessor(offset, length))
            {
                int colorSize = Marshal.SizeOf(typeof(MyColor));
                MyColor color;

                // Make changes to the view.
                for (long i = 0; i < length; i += colorSize)
                {
                    accessor.Read(i, out color);
                    color.Brighten(10);
                    accessor.Write(i, ref color);
                }
            }
        }
    }
}

public struct MyColor
{
    public short Red;
    public short Green;
    public short Blue;
    public short Alpha;

    // Make the view brighter.
    public void Brighten(short value)
    {
        Red = (short)Math.Min(short.MaxValue, (int)Red + value);
        Green = (short)Math.Min(short.MaxValue, (int)Green + value);
        Blue = (short)Math.Min(short.MaxValue, (int)Blue + value);
        Alpha = (short)Math.Min(short.MaxValue, (int)Alpha + value);
    }
}

How to debug a potential memory leak?

I programmed a Windows service to do routine work.
I installed it with InstallUtil; it wakes up, does something, and then calls Thread.Sleep(5 min).
The code is simple, but I've noticed a potential memory leak. I traced it using the DOS tasklist command and drew a chart:
Can I say it's pretty clear there is a memory leak, even though it is small?
My code is below. Please help me find the potential leak. Thanks.
public partial class AutoReport : ServiceBase
{
    int Time = Convert.ToInt32(ConfigurationManager.AppSettings["Time"]);
    private Utilities.RequestHelper requestHelper = new RequestHelper();

    public AutoReport()
    {
        InitializeComponent();
    }

    protected override void OnStart(string[] args)
    {
        Thread thread = new Thread(new ParameterizedThreadStart(DoWork));
        thread.Start();
    }

    protected override void OnStop()
    {
    }

    public void DoWork(object data)
    {
        while (true)
        {
            string jsonOutStr = requestHelper.PostDataToUrl("{\"KeyString\":\"somestring\"}", "http://myurl.ashx");
            Thread.Sleep(Time);
        }
    }
}
Edit: After using WinDbg as @Russell suggested, what should I do about these classes?
MT Count TotalSize ClassName
79330b24 1529 123096 System.String
793042f4 471 41952 System.Object[]
79332b54 337 8088 System.Collections.ArrayList
79333594 211 70600 System.Byte[]
79331ca4 199 3980 System.RuntimeType
7a5e9ea4 159 2544 System.Collections.Specialized.NameObjectCollectionBase+NameObjectEntry
79333274 143 30888 System.Collections.Hashtable+bucket[]
79333178 142 7952 System.Collections.Hashtable
79331754 121 57208 System.Char[]
7a5d8120 100 4000 System.Net.LazyAsyncResult
00d522e4 95 5320 System.Configuration.FactoryRecord
00d54d60 76 3952 System.Configuration.ConfigurationProperty
7a5df92c 74 2664 System.Net.CoreResponseData
7a5d8060 74 5032 System.Net.WebHeaderCollection
79332d70 73 876 System.Int32
79330c60 73 1460 System.Text.StringBuilder
79332e4c 72 2016 System.Collections.ArrayList+ArrayListEnumeratorSimple
7.93E+09 69 1380 Microsoft.Win32.SafeHandles.SafeTokenHandle
7a5e0d0c 53 1060 System.Net.HeaderInfo
7a5e4444 53 2120 System.Net.TimerThread+TimerNode
79330740 52 624 System.Object
7a5df1d0 50 2000 System.Net.AuthenticationState
7a5e031c 50 5800 System.Net.ConnectStream
7aa46f78 49 588 System.Net.ConnectStreamContext
793180f4 48 960 System.IntPtr[]
This is how I'd go about finding the memory leak:
1) Download WinDbg if you don't already have it. It's a really powerful (although difficult to use, as it's complicated) debugger.
2) Run WinDbg and attach it to your process by pressing F6 and selecting your exe.
3) When it has attached, type these commands (each followed by Enter):
//this will load the managed extensions
.loadby sos clr
//this will dump the details of all your objects on the heap
!dumpheap -stat
//this will start the service again
g
Now wait a few minutes and press Ctrl+Break to break back into the service. Run the !dumpheap -stat command again to see what is on the heap now. If you have a memory leak (in managed code), you will see one or more of your classes keep being added to the heap over time. You now know what is being kept in memory, so you know where to look for the problem in your code. You can also work out what is holding references to the leaked objects from within WinDbg if you like, but it's a complicated process. If you decide to use WinDbg, you probably want to start by reading Tess's blog and doing the labs.
You will need to use an allocation profiler to detect memory leaks. There are some good profilers for that; I can recommend AQTime (see this video).
Also read: How to Locate Memory Leaks Using the Allocation Profiler.
Maybe this article can be helpful too.
To find a memory leak, you should look at the performance counters over a long period of time. If you see the number of handles or the total bytes in all heaps growing without ever decreasing, you have a real memory leak. Then you can use, for example, the profiling tools in Visual Studio to track down the leak. There is also a tool from Redgate which works quite well.
Difficult to say, but I suspect this line in your while loop within DoWork:
JsonIn jsonIn = new JsonIn { KeyString = "secretekeystring", };
Although jsonIn only has scope within the while block, I would hazard that the garbage collector is taking its time to remove unwanted instances.
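A further hint from the heap dump above: the System.Net.ConnectStream, WebHeaderCollection, and CoreResponseData counts suggest that PostDataToUrl may not be disposing its WebResponse objects. Its real implementation isn't shown in the question, so the following is conjecture about the fix, not the fix itself; it shows the shape where every response and stream sits in a using block.

```csharp
using System;
using System.IO;
using System.Net;
using System.Text;

static class RequestHelperSketch
{
    // Reads a stream to the end and disposes it.
    public static string ReadAll(Stream s)
    {
        using (s)
        using (var reader = new StreamReader(s, Encoding.UTF8))
        {
            return reader.ReadToEnd();
        }
    }

    // Hypothetical shape of PostDataToUrl; illustrates the disposal
    // pattern rather than reproducing the asker's code.
    public static string PostDataToUrl(string json, string url)
    {
        var request = (HttpWebRequest)WebRequest.Create(url);
        request.Method = "POST";
        request.ContentType = "application/json";

        byte[] payload = Encoding.UTF8.GetBytes(json);
        using (Stream body = request.GetRequestStream())
        {
            body.Write(payload, 0, payload.Length);
        }

        // Disposing the response releases the underlying ConnectStream,
        // one of the types accumulating in the heap dump above.
        using (var response = request.GetResponse())
        {
            return ReadAll(response.GetResponseStream());
        }
    }
}
```

If the real helper keeps responses or their streams open, each iteration of the DoWork loop would leave one more ConnectStream behind, matching the slow growth in the chart.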

Calculate upload transfer speed problem

I have implemented a file-transfer-rate calculator to display kB/sec for an upload process in my app; however, with the following code I seem to get 'bursts' in my kB/s readings just after the file starts to upload.
This is the relevant portion of my stream code; it streams a file in 1024-byte chunks to a server using HttpWebRequest:
using (Stream httpWebRequestStream = httpWebRequest.GetRequestStream())
{
    if (request.DataStream != null)
    {
        byte[] buffer = new byte[1024];
        int bytesRead = 0;
        Debug.WriteLine("File Start");
        var duration = new Stopwatch();
        duration.Start();

        while (true)
        {
            bytesRead = request.DataStream.Read(buffer, 0, buffer.Length);
            if (bytesRead == 0)
                break;

            httpWebRequestStream.Write(buffer, 0, bytesRead);
            totalBytes += bytesRead;

            double bytesPerSecond = 0;
            if (duration.Elapsed.TotalSeconds > 0)
                bytesPerSecond = (totalBytes / duration.Elapsed.TotalSeconds);

            Debug.WriteLine(((long)bytesPerSecond).FormatAsFileSize());
        }

        duration.Stop();
        Debug.WriteLine("File End");
        request.DataStream.Close();
    }
}
Now an output log of the upload process and associated kB/sec readings are as follows:
(You will note a new file starts and ends with 'File Start' and 'File End')
File Start
5.19 MB
7.89 MB
9.35 MB
11.12 MB
12.2 MB
13.13 MB
13.84 MB
14.42 MB
41.97 kB
37.44 kB
41.17 kB
37.68 kB
40.81 kB
40.21 kB
33.8 kB
34.68 kB
33.34 kB
35.3 kB
33.92 kB
35.7 kB
34.36 kB
35.99 kB
34.7 kB
34.85 kB
File End
File Start
11.32 MB
14.7 MB
15.98 MB
17.82 MB
18.02 MB
18.88 MB
18.93 MB
19.44 MB
40.76 kB
36.53 kB
40.17 kB
36.99 kB
40.07 kB
37.27 kB
39.92 kB
37.44 kB
39.77 kB
36.49 kB
34.81 kB
36.63 kB
35.15 kB
36.82 kB
35.51 kB
37.04 kB
35.71 kB
37.13 kB
34.66 kB
33.6 kB
34.8 kB
33.96 kB
35.09 kB
34.1 kB
35.17 kB
34.34 kB
35.35 kB
34.28 kB
File End
My problem, as you will notice, is the 'burst' that starts at the beginning of every new file, peaking in MB and then evening out properly. Is it normal for an upload to burst like this? My upload speed typically won't go higher than 40 kB/sec here, so it can't be right.
This is a real issue: when I take an average of the last 5-10 seconds for on-screen display, it really throws things out, producing a result of around ~3 MB/sec!
Any ideas whether I am approaching this problem the best way, and what I should do?
Graham
Also: why can't I do 'bytesPerSecond = (bytesRead / duration.Elapsed.TotalSeconds)' and move duration.Start and duration.Stop into the while loop to get accurate results? I would have thought that would be more accurate, but then each speed reads as 900 bytes/sec, 800 bytes/sec, etc.
The way I do this is: keep a running total of bytes transferred in a long, then once per second check how much has been transferred. That way the speed calculation only triggers once per second; your while loop is going to iterate many, many times per second on a fast network.
Depending on the speed of your network, you may need to check the bytes transferred in a separate thread or function. I prefer doing this with a Timer so I can easily update the UI.
EDIT:
From looking at your code, my guess is that you are not taking into account that one iteration of the while (true) loop does not take one second.
EDIT 2:
Another advantage of doing the speed check only once per second is that things will go much quicker. In cases like this, updating the UI can be the slowest thing you are doing, so if you try to update the UI on every loop iteration, that is most likely your slowest point and will produce an unresponsive UI.
You are also correct that you should average out the values so you don't get 'Microsoft minutes' bugs. I normally do this in the Timer function with something like this:
// Global variables
long gTotalDownloadedBytes;
long gCurrentDownloaded; // Where you add up the download/upload until the speed check runs.
int gTotalDownloadSpeedChecks;

// Inside the function that does the speed check
gTotalDownloadedBytes += gCurrentDownloaded;
gTotalDownloadSpeedChecks++;
long AvgDwnSpeed = gTotalDownloadedBytes / gTotalDownloadSpeedChecks; // Assumes one speed check per second.
There are many layers of software and hardware between you and the system you're sending to, and several of those layers have a certain amount of buffer space available.
When you first start sending, you can pump out data quite quickly until you fill those buffers; it's not actually getting all the way to the other end that fast, though. After you fill up the send buffers, you're limited to putting data into them at the same rate it drains out, so the rate you see drops to the underlying network send rate.
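One way to act on this is to compute the rate over a sliding window of recent per-second samples rather than from the start of the transfer, so the initial buffer-filling burst ages out quickly. A small sketch; the 5-sample window length is an arbitrary choice:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

class RateMeter
{
    private readonly Queue<long> _samples = new Queue<long>();
    private readonly int _window;

    public RateMeter(int windowSeconds = 5) // window length is arbitrary
    {
        _window = windowSeconds;
    }

    // Call once per second with the bytes moved during that second.
    public void AddSample(long bytesThisSecond)
    {
        _samples.Enqueue(bytesThisSecond);
        if (_samples.Count > _window)
            _samples.Dequeue();   // oldest sample falls out of the window
    }

    // Average bytes/sec over the window; the startup burst is forgotten
    // once enough steady-state samples have arrived.
    public double BytesPerSecond =>
        _samples.Count == 0 ? 0 : _samples.Average();
}
```

Fed an 8 MB first-second burst followed by steady ~40 kB seconds, the reading converges to the steady rate within one window length instead of being skewed for the whole transfer.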
All, I think I have fixed my issue by adjusting the 5-10 second averaging variable to wait one second to account for the burst. Not the best solution, but it allows the connection to sort itself out and lets me capture a smooth transfer rate.
From my network traffic it appears the connection really is bursting, so there is nothing I could do differently in code to stop it.
I will still be interested in more answers before I hesitantly accept my own.
