I am loading small PDF files into a buffer and getting an OutOfMemoryException. A 220 KB file works fine; the next size I tested was 4.50 MB, and that file throws the exception. What is the maximum file size, and what can I do to change it? 4.5 MB is not that much :-)
This is the related code:
ListViewDataItem dataItem = (ListViewDataItem)e.Item;
int i = dataItem.DisplayIndex;
byte[] buffer = File.ReadAllBytes(Session["pdfFileToSplit"].ToString());
string unique = Guid.NewGuid().ToString();
Session[unique] = buffer;
Panel thumbnailPanel = (Panel)e.Item.FindControl("thumbnails");
Thumbnail thumbnail = new Thumbnail();
thumbnail.SessionKey = unique;
thumbnail.Index = i+1;
thumbnail.DPI = 17;
thumbnail.BorderColor = System.Drawing.Color.Blue;
thumbnailPanel.Controls.Add(thumbnail);
OK, I just saw something really mysterious (to me). I uploaded a file below 10 MB and watched the memory used by the IIS worker process (w3wp.exe): nothing dramatic happened, a few MB up, a few down, everything worked fine. Then I tried the same thing with a 12 MB file. At the beginning it looked the same, but then, suddenly, out of nowhere, the memory used by w3wp.exe exploded to 1.5 GB and the server crashed...
Is the OutOfMemoryException thrown on the server side or the client side?
When you use Session[unique] = buffer, you're storing all the files (represented as byte arrays) simultaneously in your session.
That can be a lot of information.
If your session is "InProc", your server will probably run out of memory.
The limit is the memory of the machine.
When your request finishes, the memory stays allocated in the session. That's the problem. You should set Session[unique] = null if this isn't the desired behavior, so the session releases the memory on the server. If you put in 10 files, all 10 will be stored in the session simultaneously even after the requests finish. They will be released only when the session ends.
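As a minimal sketch of that clean-up (assuming the Thumbnail control has already consumed the bytes at this point, which depends on your page life cycle), you can drop the reference as soon as it is no longer needed:
// Hedged sketch: once the thumbnail no longer needs the bytes, release the
// session entry so the byte[] can be garbage collected instead of living
// until the session expires.
Session[unique] = null;       // drop the reference
Session.Remove(unique);       // or remove the key entirely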
Related
I'm doing operations on images. Some of these operations require me to create 3 different versions of the pixel data from the image and then later combine them and do operations on the result.
For regular/small images the code works fine; I simply initialize my image raster data as new int[size].
However, for bigger images with a higher resolution (600, 1200, ...), new int[size] throws an OutOfMemoryException: it is trying to allocate more than 2 GB, even though I've built the project as 64-bit (not AnyCPU or 32-bit).
To resolve this issue, I first tried to create a MemoryMappedFile backed by memory itself; that also ran out of resources. Next I tried to create a MemoryMappedFile by first creating a file on disk and then creating an accessor over the complete file.
I'm still running into "not enough resources" with the temporary file on disk and the MemoryMappedFile/ViewAccessor.
Am I doing something wrong in the code below? I thought the MMF and Accessor would handle the virtual memory paging automagically.
mmfPath = Path.GetTempFileName();
// create a file on disk first
using (var fs = File.OpenWrite(mmfPath))
{
    var widthBytes = new byte[width * 4];
    for (int y = 0; y < height; y++)
    {
        fs.Write(widthBytes, 0, widthBytes.Length);
    }
}
// open the file on disk as a MMF
_RasterData = MemoryMappedFile.CreateFromFile(mmfPath,
    FileMode.OpenOrCreate,
    Guid.NewGuid().ToString(),
    0, // 0 to set the capacity to the size of the file on disk
    MemoryMappedFileAccess.ReadWrite);
_RasterDataAccessor = _RasterData.CreateViewAccessor(); // <-- not enough memory resources
Not enough memory resources are available to process this command.
at System.IO.__Error.WinIOError(Int32 errorCode, String maybeFullPath)
at System.IO.MemoryMappedFiles.MemoryMappedView.CreateView(SafeMemoryMappedFileHandle memMappedFileHandle, MemoryMappedFileAccess access, Int64 offset, Int64 size)
at System.IO.MemoryMappedFiles.MemoryMappedFile.CreateViewAccessor(Int64 offset, Int64 size, MemoryMappedFileAccess access)
at System.IO.MemoryMappedFiles.MemoryMappedFile.CreateViewAccessor()
...
If I can resolve the problem above, I think I will later run into the same issue again when I need to create a resulting bitmap out of the pixel data (the 2 GB limit).
The goal is working with big images (and temporary copies of their pixel data for raster-on-raster operations).
The current issue is that I'm getting "out of memory resources" errors with MemoryMappedFile, where I thought it would get around the 2 GB limit and that Windows/the framework would handle the virtual memory paging.
(.NET Framework 4.8, 64-bit build.)
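For what it's worth, one way to keep the mapped view small regardless of the raster size is to create view accessors over a bounded window and slide that window as you go. This is only a rough sketch, assuming the processing can be done a band of rows at a time; width, height and the 4-bytes-per-pixel layout are taken from the question, everything else is made up:
using System;
using System.IO;
using System.IO.MemoryMappedFiles;

// Rough sketch: map only a band of rows at a time instead of the whole raster.
static void ProcessInBands(string mmfPath, int width, int height)
{
    long rowBytes = (long)width * 4;
    const int rowsPerBand = 1024;                     // arbitrary band height

    using (var mmf = MemoryMappedFile.CreateFromFile(mmfPath, FileMode.Open))
    {
        for (long row = 0; row < height; row += rowsPerBand)
        {
            long rows = Math.Min(rowsPerBand, height - row);
            long offset = row * rowBytes;
            long size = rows * rowBytes;

            // Each accessor maps only this window, so the view stays small
            // no matter how large the backing file is.
            using (var view = mmf.CreateViewAccessor(offset, size))
            {
                for (long i = 0; i < rows * width; i++)
                {
                    int pixel = view.ReadInt32(i * 4);
                    // ... process pixel ...
                    view.Write(i * 4, pixel);
                }
            }
        }
    }
}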
I'm currently working on a file downloader project. The application is designed to support resumable downloads. All downloaded data and its metadata (download ranges) are stored on the disk immediately, per call to ReadBytes. Let's say I used the following code snippet:
var reader = new BinaryReader(response.GetResponseStream());
var buffr = reader.ReadBytes(_speedBuffer);
DownloadSpeed += buffr.Length;//used for reporting speed and zeroed every second
Here _speedBuffer is the number of bytes to download per call, which is set to a default value.
I have tested the application in two ways. First, by downloading a file hosted on a local IIS server: the speed is great. Second, I tried to download a copy of the same file from the internet; my connection is really slow. What I observed is that if I increase _speedBuffer, the download speed from the local server is good, but for the internet copy the speed reporting is slow, whereas if I decrease _speedBuffer, the reported download speed for the internet copy is good but not for the local server. So I thought, why shouldn't I change _speedBuffer at runtime? But all the custom algorithms (for changing the value) I came up with were inefficient, meaning the download speed was still slow compared to other downloaders.
Is this approach OK?
Am I doing it the wrong way?
Should I stick with the default value for _speedBuffer (byte count)?
The problem with ReadBytes in this case is that it attempts to read exactly that number of bytes, returning fewer only when there is no more data to read.
So if you receive a packet containing 99 bytes of data, calling ReadBytes(100) will wait for the next packet to supply the missing byte.
I wouldn't use a BinaryReader at all:
byte[] buffer = new byte[bufferSize];
using (Stream responseStream = response.GetResponseStream())
{
    int bytes;
    while ((bytes = responseStream.Read(buffer, 0, buffer.Length)) > 0)
    {
        DownloadSpeed += bytes; // used for reporting speed and zeroed every second
        // on each iteration, "bytes" bytes of the buffer have been filled, store these to disk
    }
    // bytes was 0: end of stream
}
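Since the question mentions persisting each chunk to disk immediately, the write can happen inside the same loop. A small sketch only: targetPath is a placeholder for wherever the download is stored, and FileMode.Append stands in for resume support:
byte[] buffer = new byte[bufferSize];
using (Stream responseStream = response.GetResponseStream())
using (FileStream target = new FileStream(targetPath, FileMode.Append, FileAccess.Write))
{
    int bytes;
    while ((bytes = responseStream.Read(buffer, 0, buffer.Length)) > 0)
    {
        target.Write(buffer, 0, bytes);   // persist the chunk immediately
        DownloadSpeed += bytes;           // speed reporting as before
    }
}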
I want to read a big TXT file, 500 MB in size.
First I used
var file = new StreamReader(_filePath).ReadToEnd();
var lines = file.Split(new[] { '\n' });
but it threw an OutOfMemoryException. Then I tried to read line by line, but again, after reading around 1.5 million lines, it threw an OutOfMemoryException:
using (StreamReader r = new StreamReader(_filePath))
{
    while ((line = r.ReadLine()) != null)
        _lines.Add(line);
}
or I used
foreach (var l in File.ReadLines(_filePath))
{
    _lines.Add(l);
}
but again I received:
An exception of type 'System.OutOfMemoryException' occurred in
mscorlib.dll but was not handled in user code
My machine is a powerful machine with 8 GB of RAM, so it shouldn't be a problem with my machine.
P.S.: I tried to open this file in Notepad++ and got a "the file is too big to be opened" error.
Just use File.ReadLines, which returns an IEnumerable<string> and doesn't load all the lines into memory at once.
foreach (var line in File.ReadLines(_filePath))
{
    // Don't put "line" into a list or collection.
    // Just do your processing on it here.
}
The cause of the exception seems to be the growing _lines collection, not reading the big file. You are reading each line and adding it to the _lines collection, which consumes memory and causes the OutOfMemoryException. You can apply filters to put only the required lines into the _lines collection, as in the sketch below.
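For illustration, such a filter could look like this; the Contains check is just a stand-in for whatever makes a line "required" in your case (requires using System.Linq):
// Sketch: keep only the lines you actually need instead of all ~1.5 million.
var requiredLines = File.ReadLines(_filePath)
                        .Where(line => line.Contains("ERROR"))   // placeholder condition
                        .ToList();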
I know this is an old post but Google sent me here in 2021..
Just to emphasize igrimpe's comments above:
I've run into an OutOfMemoryException on StreamReader.ReadLine() recently looping through folders of giant text files.
As igrimpe mentioned, you can sometimes encounter this when your input file lacks uniform line breaks. If you are looping through a text file and hit this, double-check the input file for unexpected characters, ASCII-encoded hex or binary strings, etc.
In my case, I split the problematic 60 GB file into 256 MB chunks, had my file iterator stash the problematic text files as part of the exception trap, and later remedied the problem files by removing the offending lines.
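If it helps anyone, splitting a huge file into fixed-size chunks can be done with plain streams. A rough sketch only: sourcePath, the 256 MB chunk size and the ".partN" naming are arbitrary, and note that byte-based splitting can cut a line in half at a chunk boundary:
// Rough sketch: split a big file into ~256 MB pieces without loading it into memory.
const long chunkSize = 256L * 1024 * 1024;
var buffer = new byte[81920];
int part = 0;

using (var input = File.OpenRead(sourcePath))
{
    while (input.Position < input.Length)
    {
        using (var output = File.Create(sourcePath + ".part" + part++))
        {
            long written = 0;
            int read;
            while (written < chunkSize &&
                   (read = input.Read(buffer, 0, buffer.Length)) > 0)
            {
                output.Write(buffer, 0, read);
                written += read;
            }
        }
    }
}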
Edit:
Loading the whole file into memory causes objects to grow, and .NET will throw OOM exceptions if it cannot allocate enough contiguous memory for an object.
The answer is still the same: you need to stream the file, not read the entire contents. That may require a rearchitecture of your application, but using IEnumerable<> methods you can stack up business processes in different areas of the application and defer processing, as sketched below.
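To illustrate the deferred-processing idea (the stages here are made up and need using System.Linq):
// Sketch of chaining IEnumerable<> stages: nothing is read or processed
// until the final foreach pulls items through the pipeline one at a time.
IEnumerable<string> lines   = File.ReadLines(_filePath);        // lazy read
IEnumerable<string> trimmed = lines.Select(l => l.Trim());      // lazy transform
IEnumerable<string> wanted  = trimmed.Where(l => l.Length > 0); // lazy filter

foreach (var line in wanted)
{
    // process one line at a time; the whole file is never held in memory
}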
A "powerful" machine with 8GB of RAM isn't going to be able to store a 500GB file in memory, as 500 is bigger than 8. (plus you don't get 8 as the operating system will be holding some, you can't allocate all memory in .Net, 32-bit has a 2GB limit, opening the file and storing the line will hold the data twice, there is an object size overhead....)
You can't load the whole thing into memory to process, you will have to stream the file through your processing.
You have to count the lines first.
It is slower, but you can read up to 2,147,483,647 lines.
int intNoOfLines = 0;
using (StreamReader oReader = new StreamReader(MyFilePath))
{
    while (oReader.ReadLine() != null) intNoOfLines++;
}

string[] strArrLines = new string[intNoOfLines];
int intIndex = 0;
using (StreamReader oReader = new StreamReader(MyFilePath))
{
    string strLine;
    while ((strLine = oReader.ReadLine()) != null)
    {
        strArrLines[intIndex++] = strLine;
    }
}
For anyone else having this issue:
If you're running out of memory while using StreamReader.ReadLine(), I'd be willing to bet your file doesn't have multiple lines to begin with. You're just assuming it does. It's an easy mistake to make because you can't just open a 10GB file with Notepad.
One time I received a 10GB file from a client that was supposed to be a list of numbers and instead of using '\n' as a separator, he used commas. The whole file was a single line which obviously caused ReadLine() to blow up.
Try reading a few thousand characters from your stream using StreamReader.Read() and see if you can find a '\n'. Odds are you won't.
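Something along these lines (the 8K probe size is an arbitrary choice):
// Sketch: peek at the first chunk of the file and see whether any '\n' shows up.
using (var reader = new StreamReader(_filePath))
{
    var probe = new char[8192];
    int read = reader.Read(probe, 0, probe.Length);
    bool hasNewline = Array.IndexOf(probe, '\n', 0, read) >= 0;
    Console.WriteLine(hasNewline
        ? "Found a line break in the first 8K characters."
        : "No line break in the first 8K characters - the file may be one giant line.");
}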
In Mono for Android I have an app that saves images to local storage for caching purposes. When the app launches, it tries to load images from the cache before trying to load them from the web.
I'm currently having a hard time finding a good way to read and load them from local storage.
I'm currently using something equivalent to this:
List<byte> byteList = new List<byte>();
using (System.IO.BinaryReader binaryReader = new System.IO.BinaryReader(context.OpenFileInput("filename.jpg")))
{
    while (binaryReader.BaseStream.IsDataAvailable())
    {
        byteList.Add(binaryReader.ReadByte());
    }
}
return byteList.ToArray();
OpenFileInput() returns a stream that does not give me a length, so I have to read one byte at a time. It also can't seek. This seems to be causing images to load much more slowly than they ought to. Loading images from Resource.Drawable is almost instantaneous by comparison, but with my method there is a very noticeable pause, maybe 300 ms, for loading an 8 KB file. This seems like a really obvious task to be able to do, but I've tried many solutions and searched a lot for advice, to no avail.
I've also noticed this code seems to crash with an EndOfStreamException when not run on the UI thread.
Any help would be hugely appreciated.
What do you intend on doing with the List<byte>? You want to "load images from the cache," but you don't specify what you want to load them into.
If you want to load them into a Android.Graphics.Bitmap, you could use BitmapFactory.DecodeStream(Stream):
Bitmap bitmap = BitmapFactory.DecodeStream(context.OpenFileInput("filename.jpg"));
This would remove the List<byte> intermediary.
If you really need all the bytes (for whatever reason), you can rely on the fact that System.Environment.GetFolderPath(System.Environment.SpecialFolder.Personal) is the same as Context.FilesDir, which is what context.OpenFileInput() will use, permitting:
byte[] bytes = System.IO.File.ReadAllBytes(
    Path.Combine(
        System.Environment.GetFolderPath(System.Environment.SpecialFolder.Personal),
        "filename.jpg"));
However, if this is truly a cache, you should be using Context.CacheDir instead of Context.FilesDir, which is what Path.GetTempPath() returns:
byte[] cachedBytes = System.IO.File.ReadAllBytes(
Path.Combine(System.IO.Path.GetTempPath(), "filename.jpg"));
I currently have a download site for my school that is based on .NET. We offer anything from antivirus, AutoCAD, SPSS, Office, and a number of large applications for students to download. It's currently set up to handle them in one of two ways: anything over 800 megs is directly accessible through a separate website, while anything under 800 megs is secured behind .NET code using a FileStream to feed it to the user in 10,000-byte chunks. I have all sorts of issues feeding downloads this way... I'd like to be able to secure the large downloads, but the .NET site just can't handle it, and the smaller files will often fail. Is there a better approach to this?
edit - I just wanted to give an update on how I finally solved this: I ended up adding my download directory as a virtual directory in IIS and specifying a custom HTTP handler. The handler grabs the file name from the request and checks permissions based on that, then either redirects the user to an error/login page or lets the download continue. I've had no problems with this solution, and I've been on it for probably 7 months now, serving files several gigs in size.
If you are having performance issues and you are delivering files that exist on the filesystem (versus a DB), use the HttpResponse.TransmitFile function.
As for the failures, you likely have a bug. If you post the code, you may get a better response.
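A stripped-down sketch of what the handler approach plus TransmitFile might look like; UserMayDownload, the login page and the download root are placeholders, not the poster's actual code:
using System.IO;
using System.Web;

// Rough sketch of a download handler that checks permissions and lets IIS
// stream the file via TransmitFile instead of a manual chunked loop.
public class DownloadHandler : IHttpHandler
{
    public bool IsReusable { get { return true; } }

    public void ProcessRequest(HttpContext context)
    {
        string fileName = Path.GetFileName(context.Request.Path);
        string fullPath = Path.Combine(@"D:\Downloads", fileName);        // placeholder root

        if (!UserMayDownload(context.User, fileName) || !File.Exists(fullPath))
        {
            context.Response.Redirect("~/login.aspx");                    // error/login page
            return;
        }

        context.Response.ContentType = "application/octet-stream";
        context.Response.AddHeader("Content-Disposition", "attachment; filename=" + fileName);
        context.Response.TransmitFile(fullPath);                          // no buffering in managed code
    }

    private bool UserMayDownload(System.Security.Principal.IPrincipal user, string fileName)
    {
        // placeholder for the real permission logic
        return user != null && user.Identity.IsAuthenticated;
    }
}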
Look into BitTorrent. It's designed specifically for this sort of thing and is quite flexible.
I have two recommendations:
Increase the buffer size so that there are fewer iterations
AND/OR
Do not call IsClientConnected on each iteration.
The reason is that according to Microsoft Guidelines:
Response.IsClientConnected has some costs, so only use it before an operation that takes at least, say 500 milliseconds (that's a long time if you're trying to sustain a throughput of dozens of pages per second). As a general rule of thumb, don't call it in every iteration of a tight loop
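Applied to a chunked-send loop like the one posted further down (borrowing its variable names), that could look roughly like this; the 64 KB buffer and the "check every 10th iteration" interval are arbitrary choices, not measured values:
// Sketch: bigger buffer, and IsClientConnected probed only occasionally.
byte[] buffer = new byte[64 * 1024];        // 64 KB instead of 10 KB
long dataToRead = iStream.Length;
int iteration = 0;

while (dataToRead > 0)
{
    // Only probe the connection every 10th pass instead of every pass.
    if (iteration++ % 10 == 0 && !context.Response.IsClientConnected)
        break;

    int length = iStream.Read(buffer, 0, buffer.Length);
    if (length == 0)
        break;

    context.Response.OutputStream.Write(buffer, 0, length);
    context.Response.Flush();
    dataToRead -= length;
}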
What's wrong with using a robust web server (like Apache) and letting it deal with the files? Just as you already hand the larger files off to a separate web server, why not serve the smaller files the same way too?
Are there some hidden requirements preventing this?
Ok, this is what it currently looks like...
Stream iStream = null;
// Buffer to read 10K bytes in chunk:
byte[] buffer = new Byte[10000];
// Length of the file:
int length;
// Total bytes to read:
long dataToRead;

if (File.Exists(localfilename))
{
    try
    {
        // Open the file.
        iStream = new System.IO.FileStream(localfilename, System.IO.FileMode.Open, System.IO.FileAccess.Read, System.IO.FileShare.Read);

        // Total bytes to read:
        dataToRead = iStream.Length;

        context.Response.Clear();
        context.Response.Buffer = false;
        context.Response.ContentType = "application/octet-stream";
        Int64 fileLength = iStream.Length;
        context.Response.AddHeader("Content-Length", fileLength.ToString());
        context.Response.AddHeader("Content-Disposition", "attachment; filename=" + originalFilename);

        // Read the bytes.
        while (dataToRead > 0)
        {
            // Verify that the client is connected.
            if (context.Response.IsClientConnected)
            {
                // Read the data in buffer.
                length = iStream.Read(buffer, 0, 10000);

                // Write the data to the current output stream.
                context.Response.OutputStream.Write(buffer, 0, length);

                // Flush the data to the HTML output.
                context.Response.Flush();

                buffer = new Byte[10000];
                dataToRead = dataToRead - length;
            }
            else
            {
                // prevent infinite loop if user disconnects
                dataToRead = -1;
            }
        }

        iStream.Close();
        iStream.Dispose();
    }
    catch (Exception ex)
    {
        if (iStream != null)
        {
            iStream.Close();
            iStream.Dispose();
        }
        if (ex.Message.Contains("The remote host closed the connection"))
        {
            context.Server.ClearError();
            context.Trace.Warn("GetFile", "The remote host closed the connection");
        }
        else
        {
            context.Trace.Warn("IHttpHandler", "DownloadFile: - Error occurred");
            context.Trace.Warn("IHttpHandler", "DownloadFile: - Exception", ex);
        }
        context.Response.Redirect("default.aspx");
    }
}
There are a lot of licensing restrictions... for example, we have an Office 2007 license agreement that says any technical staff on campus can download and install Office, but students cannot. So we don't let students download it. That's why our solution was to hide those downloads behind .NET.
Amazon S3 sounds ideal for what you need, but it is a commercial service and the files are served from their servers.
You should try contacting Amazon and asking for academic pricing, even if they don't advertise one.