In my web application I use LEADTOOLS to create a multi-page TIFF file from a stream. Below is the code that shows how I use it.
using (RasterCodecs codecs = new RasterCodecs())
{
    RasterImage ImageToAppened = default(RasterImage);
    RasterImage imageSrc = default(RasterImage);

    codecs.Options.Load.AllPages = true;
    ImageToAppened = codecs.Load(fullInputPath, 1);

    FileInfo fileInfooutputTiff = new FileInfo(fullOutputPath);
    if (fileInfooutputTiff.Exists)
    {
        imageSrc = codecs.Load(fullOutputPath);
        imageSrc.AddPage(ImageToAppened);
        codecs.Save(imageSrc, fullOutputPath, RasterImageFormat.Ccitt, 1);
    }
    else
    {
        codecs.Save(ImageToAppened, fullOutputPath, RasterImageFormat.Ccitt, 1);
    }
}
The above code works properly, but my web application receives a large number of requests, around 2000. In some cases I get the error below; afterwards it works properly again for other requests.
You have exceeded the amount of memory allowed for RasterImage allocations. See RasterDefaults::MemoryThreshold::MaximumGlobalRasterImageMemory.
Is that memory limit per request, or is it for all objects allocated since the application started (a global limit)?
What is the solution for this error?
The error you report references the MaximumGlobalRasterImageMemory:
You have exceeded the amount of memory allowed for RasterImage allocations. See RasterDefaults::MemoryThreshold::MaximumGlobalRasterImageMemory.
In the documentation it states:
Gets or sets a value that specifies the maximum size allowed for all RasterImage object allocations.
When allocating a new RasterImage object, if the new allocation causes the total memory used by all allocated RasterImage objects to exceed the value of MaximumGlobalRasterImageMemory, then the allocation will throw an exception.
So it looks like it's for all objects.
These are the specified default values:
On x86 systems, this property defaults to 1.5 GB.
On x64 systems, this property defaults to either 1.5 GB or 75 percent of the system's total physical RAM, whichever is larger.
I would advise that you familiarise yourself with the documentation for the SDK.
When handling files with many pages, here are a few general tips that could help with both web and desktop applications:
Avoid loading all pages and adding them to one RasterImage in memory. Instead, loop through the pages and load one (or a few) at a time, then append them to the output file without keeping them all in memory (see the sketch after these tips). Appending to a file can get slower as the page count grows, but this help topic explains how to speed that up.
You have "using (RasterCodecs codecs ..)" in your code, but the large memory is for the image, not the codecs object. Consider wrapping your RasterImage object in a "using" scope to speed up its disposal. In other words, go for "using (RasterImage image = ...)"
And the obvious suggestion: go for 64-bit, install as much RAM as you can and increase the value of MaximumGlobalRasterImageMemory.
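To illustrate the first tip, here is a minimal sketch of appending one page at a time. It assumes the RasterCodecs.Save overload that takes a CodecsSavePageMode and the GetTotalPages helper; check the exact signatures against your LEADTOOLS version before relying on it.

using (RasterCodecs codecs = new RasterCodecs())
{
    // Sketch only: append each page of the input file to the output file one at a
    // time, so no more than a single page is held in memory at once.
    int pageCount = codecs.GetTotalPages(fullInputPath);
    for (int page = 1; page <= pageCount; page++)
    {
        using (RasterImage onePage = codecs.Load(fullInputPath, page))
        {
            // CodecsSavePageMode.Append adds the page to the end of the output file
            // (the file is created if it does not exist). The -1 for firstSavePageNumber
            // follows the pattern in the LEADTOOLS samples; verify it for your version.
            codecs.Save(onePage, fullOutputPath, RasterImageFormat.Ccitt, 1,
                        1, 1, -1, CodecsSavePageMode.Append);
        }
    }
}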
I'm building an application that displays one of three images from a list in a PictureBox every 3 seconds. The way I did this is by loading one of the images directly into the PictureBox using
drawing.Image = Image.FromFile(images[1]);
Then I used the Graphics class to draw the other 2 images.
void PictureBox_Paint_alarm(object sender, PaintEventArgs e)
{
    if (rotationCounter > images.Count - 1)
        rotationCounter = 0;
    if (rotationCounter != 1)
        e.Graphics.DrawImage(Image.FromFile(images[rotationCounter]),
            new RectangleF(0, 0, drawing.Size.Width, drawing.Size.Height));
}
The rotationCounter increments by 1 every 3 seconds and the application functions as intended. However, I have noticed that the program consumes more and more memory as time goes by, until it reaches 5 GB and then drops back to about 400 KB; the images have an average size of 450 KB.
The problem is that I'm going to be deploying this program on a system that only has 2 GB of RAM.
You have an images array (or something very like it) that is currently storing file names. Instead of doing that, load the images once and have an array of those instead of file names. That way you're not constantly reloading the images from file.
That also means you don't keep creating (resource-)expensive Image objects. They implement IDisposable and so are meant to be disposed when no longer required. Instead, you're just letting them sit around until they're garbage collected:
until it reaches 5 GB and then drops back to about 400 KB; the images have an average size of 450 KB.
Which is exactly what you're describing here.
The problem is that I'm going to be deploying this program on a system that only has 2 GB of RAM
And that wouldn't be a problem because a) programs don't allocate physical memory and b) the Garbage Collector would kick in earlier when memory pressure sets in.
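As a rough sketch of that idea (assuming a new loadedImages field alongside the existing images, drawing and rotationCounter members, plus System.Drawing and System.Linq usings), you could load the files once and draw the cached Image objects in the Paint handler:

// Load each file once, e.g. in the form's constructor, and keep the Image objects.
// Dispose them when the form closes.
Image[] loadedImages = images.Select(Image.FromFile).ToArray();

void PictureBox_Paint_alarm(object sender, PaintEventArgs e)
{
    if (rotationCounter > loadedImages.Length - 1)
        rotationCounter = 0;
    if (rotationCounter != 1)
        e.Graphics.DrawImage(loadedImages[rotationCounter],
            new RectangleF(0, 0, drawing.Size.Width, drawing.Size.Height));
}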
You can call GC.Collect(); to force a garbage collection.
Also, if you have created GDI objects, you can use DeleteObject() to release them after use.
This link might be useful:
Deleting & Releasing GDI objects
A process's memory usage includes the following (the total can be called virtual memory):
PrivateMemory: dedicated to the process and cannot be shared by other processes.
SharedMemory: the runtime or third-party linked libraries.
CommittedMemory [or PagedMemory]: mapped out to the page file on the hard disk (ready for use).
ReservedMemory: only reserved (not yet backed and has no address).
Here is my understanding:
Virtual Memory = PrivateMemory + SharedMemory + CommittedMemory + ReservedMemory;
Working Set Memory = PrivateMemory + SharedMemory + CommittedMemory;
Free Memory = 'Virtual Memory' - 'Working Set Memory';
I calculated the total memory usage of a process written in C# (not including the reserved memory). The left screenshot is VMMap and the right is VS Monitor.
The process's total memory size is about 5 GB, and the reserved memory is about 4 GB in VMMap, while VS Monitor shows VirtualMemorySize64 at about 5 GB. I am confused about how to get the total memory usage. There is 4 GB of reserved memory in VMMap; how can I get the reserved memory with the .NET Process class?
I set the TotalUsageMemory value with the code below; is it correct?
Int64 TotalUsageMemory = proc.WorkingSet64 + proc.PagedMemorySize64;
The numbers don't add up like that. Whether a page is in the working set or not is independent of whether it is shared or not. This again is (I believe) independent of whether it is committed or not.
The right counter to look at depends on the question you want to answer. Unfortunately, there is no counter that fully matches the intuitive notion of memory usage. Private bytes normally is what's used for that. Working set does not mean much in practice. This counter can change at any time due to OS actions. Virtual memory also is quite irrelevant from a performance standpoint.
Normally, memory usage is the memory that was incrementally consumed by starting that process. That's private bytes.
There exists no counter or computation to give you a TotalUsageMemory value.
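For what it's worth, here is a small sketch of reading the counters the .NET Process class does expose; "private bytes" corresponds to PrivateMemorySize64, and there is no property that reports only the reserved portion.

using System;
using System.Diagnostics;

class MemoryCounters
{
    static void Main()
    {
        Process proc = Process.GetCurrentProcess();

        // Private bytes: memory dedicated to this process (the usual "usage" figure).
        Console.WriteLine("Private bytes : {0:N0}", proc.PrivateMemorySize64);

        // Working set: physical RAM currently mapped into the process.
        Console.WriteLine("Working set   : {0:N0}", proc.WorkingSet64);

        // Virtual size: all reserved + committed address space for the process.
        Console.WriteLine("Virtual size  : {0:N0}", proc.VirtualMemorySize64);

        // There is no "reserved only" property; roughly, it is the part of
        // VirtualMemorySize64 that has not been committed, which the Process
        // class does not break out separately.
    }
}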
I was running some tests to see how my logging would perform if, instead of doing File.AppendAllText, I first wrote to a memory stream and then copied it to a file. So, just to see how fast the in-memory operation is, I did this:
private void button1_Click(object sender, EventArgs e)
{
    using (var memFile = new System.IO.MemoryStream())
    {
        using (var bw = new System.IO.BinaryWriter(memFile))
        {
            for (int i = 0; i < Int32.MaxValue; i++)
            {
                bw.Write(i.ToString() + Environment.NewLine);
            }
            bw.Flush();
        }
        memFile.CopyTo(new System.IO.FileStream(
            System.IO.Path.Combine("C", "memWriteWithBinaryTest.log"),
            System.IO.FileMode.OpenOrCreate));
    }
}
When I reached 25,413,324 iterations I got "Exception of type 'System.OutOfMemoryException' was thrown", even though Process Explorer says I have about 700 MB of free RAM.
Here are the screenshots (just in case): Process Explorer and the WinForm.
EDIT: In case the issue was the extra string objects being created on the heap, I rewrote the bw.Write call to this:
bw.Write(i);
First of all, you run out of memory because you accumulate data in the MemoryStream, instead of writing it directly to the FileStream. Use the FileStream directly and you won't need much RAM at all (but you will have to keep the file open).
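A minimal sketch of that approach (streaming straight to a file with a StreamWriter over a FileStream; the file name here is just illustrative) would be:

// Requires: using System.IO;
// Text is buffered by the StreamWriter and flushed to disk as the buffer fills,
// so memory usage stays small no matter how many lines are written.
using (var fs = new FileStream("memWriteTest.log", FileMode.OpenOrCreate))
using (var writer = new StreamWriter(fs))
{
    for (int i = 0; i < int.MaxValue; i++)
    {
        writer.WriteLine(i);
    }
}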
The amount of physical memory unused is not directly relevant to this exception, as strange as that might sound.
What matters is:
that you have a contiguous chunk of memory available in the process' virtual address space
that the system commit does not exceed the total RAM size + page file size
When you ask the Windows memory manager to allocate you some RAM, it needs to check not how much is available, but how much it has promised to make available to every other process. Such promising is done through commits. To commit some memory means that the memory manager offered you a guarantee that it will be available when you finally make use of it.
So, it can be that the physical RAM is completely used up, but your allocation request still succeeds. Why? Because there is lots of space available in the page file. When you actually start using the RAM you got through such an allocation, the memory manager will just simply page out something else. So 0 physical RAM != allocations will fail.
The opposite can happen too; an allocation can fail despite having some unused physical RAM. Your process sees memory through the so-called virtual address space. When your process reads memory at address 0x12340000, that's a virtual address. It might map to RAM at 0x78650000, or at 0x000000AB12340000 (running a 32-bit process on a 64-bit OS), it might point to something that only exists in the page file, or it might not even point at anything at all.
When you want to allocate a block of memory with contiguous addresses, it's in this virtual address space that the RAM needs to be contiguous. For a 32-bit process, you only get 2GB or 3GB of usable address space, so it's not too hard to use it up in such a way that no contiguous chunk of a sufficient size exists, despite there being both free physical RAM and enough total unused virtual address space.
This can be caused by memory fragmentation.
Large objects go onto the large object heap and they don't get moved around to make room for things. This can cause fragmentation where you have gaps in the available memory, which can cause out-of-memory when you try to allocate an object larger than any of the blocks of available memory.
See here for more details.
Any object larger than 85,000 bytes will be placed on the large object heap, except for arrays of doubles for which the threshold is just 1000 doubles (or 8000 bytes).
Also note that 32-bit .Net programs are limited to a maximum of 2GB per object and somewhat less than 4GB overall (perhaps as low as 3GB depending on the OS).
You should not be using a BinaryWriter to write text to a file. Use a TextWriter instead.
Now you are using:
for (int i = 0; i < Int32.MaxValue; i++)
This will write at least 3 bytes per iteration (the number's digits plus the newline). Multiply that by Int32.MaxValue and you need at least 6 GB of memory, given that you are writing it all to a MemoryStream.
Looking further at your code, you are going to write the MemoryStream to a file anyway. So you can simply do the following:
for (int i = 0; i < int.MaxValue; i++)
{
    File.AppendAllText("filename.log", i.ToString() + Environment.NewLine);
}
or write to an open TextWriter:
TextWriter writer = File.AppendText("filename.log");
for (int i = 0; i < int.MaxValue; i++)
{
    writer.WriteLine(i);
}
If you want a memory buffer, which IMO is a bad idea for logging since you will lose the last few writes in a crash, you can use the following constructor to create the TextWriter:
StreamWriter(string path, bool append, Encoding encoding, int bufferSize)
and pass a 'biggish' number for bufferSize. The default is 1024.
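For example (the 64 KB buffer size here is an arbitrary illustration):

// Requires: using System.IO; using System.Text;
// Appends to the file with UTF-8 encoding and a 64 KB character buffer
// instead of the 1024-character default.
using (var writer = new StreamWriter("filename.log", true, Encoding.UTF8, 64 * 1024))
{
    for (int i = 0; i < int.MaxValue; i++)
    {
        writer.WriteLine(i);
    }
}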
To answer the question: you get an out-of-memory exception because the MemoryStream keeps resizing, and at some point it gets too big to fit into memory (which was discussed in another answer).
Since my original question was a bit too vague, let me clarify.
My goals are:
to estimate the size of a blank disc after selecting the file system via IMAPI
to estimate the space my file will consume on this disc if I burn it.
What I would like to know:
Is it possible to get the bytes per sector for the selected file system programmatically?
If not, is there a default value for bytes per sector that IMAPI uses for different file systems / media types, and is it documented somewhere?
OK, so the short answer to my question is: one can safely assume that the sector size for DVD/BD discs is 2048 bytes.
The reason I was getting different sizes during my debug sessions was an error in the code that retrieved the sector count :)
The code block in question was copy-pasted from http://www.codeproject.com/Articles/24544/Burning-and-Erasing-CD-DVD-Blu-ray-Media-with-C-an, so just in case I'm posting a quick fix.
original code:
discFormatData = new MsftDiscFormat2Data();
discFormatData.Recorder = discRecorder;
IMAPI_MEDIA_PHYSICAL_TYPE mediaType = discFormatData.CurrentPhysicalMediaType;
fileSystemImage = new MsftFileSystemImage();
fileSystemImage.ChooseImageDefaultsForMediaType(mediaType);
if (!discFormatData.MediaHeuristicallyBlank)
{
    fileSystemImage.MultisessionInterfaces = discFormatData.MultisessionInterfaces;
    fileSystemImage.ImportFileSystem();
}
Int64 freeMediaBlocks = fileSystemImage.FreeMediaBlocks;
fixed code:
discFormatData = new MsftDiscFormat2Data { Recorder = discRecorder };
fileSystemImage = new MsftFileSystemImage();
fileSystemImage.ChooseImageDefaults(discRecorder);
if (!discFormatData.MediaHeuristicallyBlank)
{
    fileSystemImage.MultisessionInterfaces = discFormatData.MultisessionInterfaces;
    fileSystemImage.ImportFileSystem();
}
Int64 freeMediaBlocks = fileSystemImage.FreeMediaBlocks;
If you know free/used blocks and the total size of the storage volume (ignoring used/free space) then you can calculate the size per block and then work the rest out.
block size = total size / (blocks used + blocks free)
free space = size per block * blocks free
I'd be surprised if you found the block size was anything other than 1K though
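In code, that calculation (with purely hypothetical figures standing in for the values IMAPI reports) is just:

// Hypothetical figures; substitute the totals reported by IMAPI for your disc.
long totalSize = 4700000000;   // total capacity in bytes
long blocksUsed = 100000;
long blocksFree = 2194824;

long blockSize = totalSize / (blocksUsed + blocksFree);
long freeSpace = blockSize * blocksFree;

Console.WriteLine("Block size: {0} bytes, free space: {1} bytes", blockSize, freeSpace);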
via IMAPI - IWriteEngine2::get_BytesPerSector
http://msdn.microsoft.com/en-us/library/windows/desktop/aa832661(v=vs.85).aspx
This project uses a managed IMAPI2 wrapper to make life easier - http://www.codeproject.com/Articles/24544/Burning-and-Erasing-CD-DVD-Blu-ray-Media-with-C-an
Edit 2: I just want to make sure my question is clear: why, on each iteration of AppendToLog(), does the application use 15 MB more (the size of the original log file)?
I've got a function called AppendToLog() which receives the file path of an HTML document, does some parsing and appends it to a file. It gets called this way:
this.user_email = uemail;
string wanted_user = wemail;
string[] logPaths;
logPaths = this.getLogPaths(wanted_user);
foreach (string path in logPaths)
{
    this.AppendToLog(path);
}
On every iteration, the RAM usage increases by 15 MB or so. This is the function (it looks long but it's simple):
public void AppendToLog(string path)
{
    Encoding enc = Encoding.GetEncoding("ISO-8859-2");
    StringBuilder fb = new StringBuilder();
    FileStream sourcef;
    string[] messages;

    try
    {
        sourcef = new FileStream(path, FileMode.Open);
    }
    catch (IOException)
    {
        throw new IOException("The chat log is in use by another process.");
    }

    using (StreamReader sreader = new StreamReader(sourcef, enc))
    {
        string file_buffer;
        while ((file_buffer = sreader.ReadLine()) != null)
        {
            fb.Append(file_buffer);
        }
    }

    // Array of each line's content
    messages = parseMessages(fb.ToString());
    fb = null;

    string destFileName = String.Format("{0}_log.txt", System.IO.Path.GetFileNameWithoutExtension(path));
    FileStream destf = new FileStream(destFileName, FileMode.Append);
    using (StreamWriter swriter = new StreamWriter(destf, enc))
    {
        foreach (string message in messages)
        {
            if (message != null)
            {
                swriter.WriteLine(message);
            }
        }
    }

    messages = null;
    sourcef.Dispose();
    destf.Dispose();
    sourcef = null;
    destf = null;
}
I've been at this for days and I don't know what to do :(
Edit: This is ParseMessages, a function that uses HtmlAgilityPack to strip parts of an HTML log.
public string[] parseMessages(string what)
{
    StringBuilder sb = new StringBuilder();
    HtmlDocument doc = new HtmlDocument();
    doc.LoadHtml(what);
    HtmlNodeCollection messageGroups = doc.DocumentNode.SelectNodes("//body/div[@class='mplsession']");
    int messageCount = doc.DocumentNode.SelectNodes("//tbody/tr").Count;
    doc = null;

    string[] buffer = new string[messageCount];
    int i = 0;

    foreach (HtmlNode sessiongroup in messageGroups)
    {
        HtmlNode tablegroup = sessiongroup.SelectSingleNode("table/tbody");
        string sessiontime = sessiongroup.Attributes["id"].Value;
        HtmlNodeCollection messages = tablegroup.SelectNodes("tr");
        if (messages != null)
        {
            foreach (HtmlNode htmlNode in messages)
            {
                sb.Append(
                    ParseMessageDate(
                        sessiontime,
                        htmlNode.ChildNodes[0].ChildNodes[0].InnerText
                    )
                ); // Date
                sb.Append(" ");

                try
                {
                    foreach (HtmlTextNode node in htmlNode.ChildNodes[0].SelectNodes("text()"))
                    {
                        sb.Append(node.Text.Trim()); // Name
                    }
                }
                catch (NullReferenceException)
                {
                    /*
                     * We ignore this exception, it just means there's extra text
                     * and that means that it's not a normal message
                     * but a system message instead
                     * (i.e. "John logged off")
                     * Therefore we add the "::" mark for future organizing
                     */
                    sb.Append("::");
                }
                sb.Append(" ");

                string message = htmlNode.ChildNodes[1].InnerHtml;
                message = message.Replace("&quot;", "'");
                message = message.Replace("&nbsp;", " ");
                message = RemoveMedia(message);
                sb.Append(message); // Message

                buffer[i] = sb.ToString();
                sb = new StringBuilder();
                i++;
            }
        }
    }
    messageGroups = null;
    what = null;
    return buffer;
}
As many have mentioned, this is probably just an artifact of the GC not cleaning up the memory as fast as you are expecting it to. This is normal for managed languages like C#, Java, etc. You really need to find out whether the memory allocated to your program is free or not if you are interested in that usage. The questions to ask related to this are:
How long is your program running? Is it a service type program that runs continuously?
Over the span of execution does it continue to allocate memory from the OS or does it reach a steady-state? (Have you run it long enough to find out?)
Your code does not look like it will have a "memory-leak". In managed languages you really don't get memory leaks like you would in C/C++ (unless you are using unsafe or external libraries that are C/C++). What happens though is that you do need to watch out for references that stay around or are hidden (like a Collection class that has been told to remove an item but does not set the element of the internal array to null). Generally, objects with references on the stack (locals and parameters) cannot 'leak' unless you store the reference of the object(s) into an object/class variables.
Some comments on your code:
You can reduce the allocation/deallocation of memory by pre-allocating the StringBuilder to at least the proper size. Since you know you will need to hold the entire file in memory, allocate it to the file size (this will actually give you a buffer that is just a little bigger than required since you are not storing new-line character sequences but the file probably has them):
FileInfo fi = new FileInfo(path);
StringBuilder fb = new StringBuilder((int) fi.Length);
You may want to ensure the file exists before getting its length, using fi to check for that. Note that I just down-cast the length to an int without error checking as your files are less than 2GB based on your question text. If that is not the case then you should verify the length before casting it, perhaps throwing an exception if the file is too big.
I would recommend removing all the variable = null statements in your code. These are not necessary since these are stack allocated variables. As well, in this context, it will not help the GC since the method will not live for a long time. So, by having them you create additional clutter in the code and it is more difficult to understand.
In your ParseMessages method, you catch a NullReferenceException and assume that is just a non-text node. This could lead to confusing problems in the future. Since this is something you expect to normally happen as a result of something that may exist in the data you should check for the condition in the code, such as:
if (node.Text != null)
    sb.Append(node.Text.Trim()); // Name
Exceptions are for exceptional/unexpected conditions in the code. Assigning significant meaning to NullReferenceException more than that there was a null reference can (likely will) hide errors in other parts of that same try block now or with future changes.
There is no memory leak. If you are using Windows Task Manager to measure the memory used by your .NET application you are not getting a clear picture of what is going on, because the GC manages memory in a complex way that Task Manager doesn't reflect.
A MS engineer wrote a great article about why .NET applications that seem to be leaking memory probably aren't, and it has links to very in depth explanations of how the GC actually works. Every .NET programmer should read them.
I would look carefully at why you need to pass a string to parseMessages, i.e. fb.ToString().
Your code comment says that this returns an array of each line's content. However, you are actually reading all lines from the log file into fb and then converting it to a single string.
If you are parsing large files in parseMessages(), you could do this much more efficiently by passing the StringBuilder itself or the StreamReader into parseMessages(). This would avoid having ToString() force the entire log file into memory as one big string, as it currently does.
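For example, HtmlAgilityPack's HtmlDocument can load from a TextReader directly, so a sketch of the StreamReader variant (with a hypothetical ExtractMessages helper standing in for the existing node-walking loop) might look like this:

// Requires: using System.IO; using HtmlAgilityPack;
// HtmlDocument.Load(TextReader) parses straight from the stream, so the whole-file
// StringBuilder and the fb.ToString() copy are no longer needed (the parsed DOM
// itself is still held in memory while the method runs).
public string[] parseMessages(StreamReader reader)
{
    HtmlDocument doc = new HtmlDocument();
    doc.Load(reader);

    // ExtractMessages is a hypothetical helper containing the existing
    // node-walking code from the original parseMessages, unchanged.
    return ExtractMessages(doc);
}

// Caller side, replacing the ReadLine loop:
// using (StreamReader sreader = new StreamReader(sourcef, enc))
// {
//     messages = parseMessages(sreader);
// }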
You are less likely to have a true memory leak in a .NET application thanks to garbage collection. You do not look to be using any large resources such as files, so it seems even less likely that you have an actual memory leak.
It looks like you have disposed of resources ok, however the GC is probably struggling to allocate and then deallocate the large memory chunks in time before the next iteration starts, and so you see the increasing memory usage.
While GC.Collect() may allow you to force memory deallocation, I would strongly advise looking into the suggestions above before resorting to trying to manually manage memory via GC.
[Update] Seeing your parseMessages() and the use of HtmlAgilityPack (a very useful library, by the way), it looks likely that some large and possibly numerous memory allocations are being performed for every log file.
HtmlAgility allocates memory for various nodes internally, when combined with your buffer array and the allocations in the main function I'm even more confident that the GC is being put under a lot of pressure to keep up.
To stop guessing and get some real metrics, I would run Process Explorer and add the columns that show the GC Gen 0, 1, and 2 collection counts. Then run your application and observe the numbers. If you're seeing large numbers in these columns then the GC is struggling and you should redesign to use fewer memory allocations.
Alternatively, the free CLR Profiler 2.0 from Microsoft provides nice visual representation of .NET memory allocations within your application.
One thing you may want to try is temporarily forcing a GC.Collect after each run. The GC is very intelligent, and will not reclaim memory until it feels the expense of a collection is worth the value of any recovered memory.
Edit: I just wanted to add that it's important to understand that calling GC.Collect manually is a bad practice (for any normal use case; abnormal == perhaps a load function for a game or some such). You should let the garbage collector decide what's best, as it will generally have more information than is available to you about system resources and the like on which to base its collection behaviour.
The try-catch block could use a finally (for cleanup). If you look at what the using statement does, it is equivalent to a try/finally. Yes, running the GC is a good idea too. Without compiling this code and giving it a try, it is hard to say for sure...
Also, dispose this guy properly using a using:
FileStream destf = new FileStream(destFileName, FileMode.Append);
Look up Effective C# 2nd edition
I would manually clear the message array and the StringBuilder before setting them to null.
Edit:
Looking at what the process seems to do, I have a suggestion, if it's not too late: instead of parsing an HTML file, create a DataSet schema and use that to write and read an XML log file, then use an XSL file to convert it into an HTML file.
I don't see any obvious memory leaks; my first guess would be that it's something in the library.
A good tool to figure this kind of thing out is the .NET Memory Profiler, by SciTech. They have a free two-week trial.
Short of that, you could try commenting out some of the library functions, and see if the problem goes away if you just read the files and do nothing with the data.
Also, where are you looking for memory use stats? Keep in mind that the stats reported by Task Manager aren't always very useful or reflective of actual memory use.
The HtmlDocument class (as far as I can determine) has a serious memory leak when used from managed code. I recommend using the XMLDOM parser instead (though this does require well-formed documents, but that's another plus).