How to open a file in memory? - c#

I have seen this term bandied around, but I don't really understand how you open a file in memory.
I have the files written to disk in a temp location, but this folder needs cleaning out when a certain form closes, and I can't do that while a file is still open. It's a must that this folder gets emptied. I was wondering whether opening the files in memory instead would make a difference?

MemoryStream inMemoryCopy = new MemoryStream();
using (FileStream fs = File.OpenRead(path))
{
    fs.CopyTo(inMemoryCopy);
}
// Now you can delete the file at 'path' and still have an in-memory copy
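A small follow-up sketch showing that the on-disk file can then go while the copy stays usable (the rewind is needed because CopyTo leaves the position at the end of the stream):
inMemoryCopy.Position = 0;   // rewind: CopyTo left the position at the end
File.Delete(path);           // the temp file is no longer locked, so it can be removed
using (var reader = new StreamReader(inMemoryCopy))
{
    string contents = reader.ReadToEnd();   // work with the in-memory copy as you would the file
}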

I think you want to work with memory-mapped files, which were added in .NET 4:
http://blogs.msdn.com/b/salvapatuel/archive/2009/06/08/working-with-memory-mapped-files-in-net-4.aspx
Memory Mapped Files .NET
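Below is a minimal sketch of that API from the System.IO.MemoryMappedFiles namespace (path is assumed to point at an existing file); note that mapping a file still keeps a handle on it, so by itself it won't free the file for deletion:
using System.IO;
using System.IO.MemoryMappedFiles;

// Map the existing file and read it back through an ordinary Stream view.
using (MemoryMappedFile mmf = MemoryMappedFile.CreateFromFile(path, FileMode.Open))
using (MemoryMappedViewStream view = mmf.CreateViewStream())
{
    var copy = new MemoryStream();
    view.CopyTo(copy);   // note: the view may be rounded up to a page boundary, so it can be slightly larger than the file
}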

I think it means to read the content of that file into memory as a whole and then close the connection to the file. Assuming the file isn't too big, you could just read it into a byte[]:
byte[] fileContent = File.ReadAllBytes(fileName);
If it's a text file, read it into a string using
string fileContent = File.ReadAllText(fileName);
Once you've done that, wrap the byte[] in a MemoryStream and use a StreamReader on it to read it later, just as you would a file on disk.
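If you go the byte[] route, here is a small sketch of reading that in-memory copy later (wrapping the bytes in a MemoryStream so a StreamReader can treat them like a file):
byte[] fileContent = File.ReadAllBytes(fileName);   // the file handle is released once this returns

using (var memoryStream = new MemoryStream(fileContent))
using (var reader = new StreamReader(memoryStream))
{
    string text = reader.ReadToEnd();   // read the bytes as text, just as you would from disk
}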

You can use the FileOptions.DeleteOnClose flag in the FileStream constructor:
FileStream fs = new FileStream("<Path Here>", FileMode.Create,
    FileAccess.ReadWrite, FileShare.None, 1024, FileOptions.DeleteOnClose);
and the file will be deleted when it is closed.

Related

Best way to upload file using Stream

I am calling a REST API that accepts a Stream for uploading a file from the local device, so right now I am using the following code to get a Stream from a file and then closing that stream after the upload finishes:
var stream = new FileStream(file, FileMode.Open, FileAccess.ReadWrite);
The problem with this approach is that, until the entire file has been uploaded to the server, the user has no chance to delete that file, because its stream is still open. What would be the solution to this issue?
If your typical file is reasonably sized (and I'm hoping you won't be uploading 2 GB+ files to a REST API), you could always just read the stream into memory before feeding it to your API, like so:
using (MemoryStream memoryStream = new MemoryStream())
{
    using (FileStream fileStream = new FileStream(file, FileMode.Open, FileAccess.ReadWrite))
    {
        fileStream.CopyTo(memoryStream);
    }
    memoryStream.Position = 0; // Reset to origin.
    // Now use the MemoryStream as you would a FileStream:
    api.Upload(memoryStream);
}
Another alternative is to create a temp copy of the file on your hard drive and feed that to the API - but then dealing with cleanup can become a bit cumbersome. FileOptions.DeleteOnClose is your friend and may very well suffice for your purposes, but it still offers no bulletproof guarantees.
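A minimal sketch of that temp-copy route, reusing the api.Upload call from above; FileOptions.DeleteOnClose tells the OS to remove the copy when its stream is disposed, and the original file is released as soon as the inner block ends:
string tempPath = Path.GetTempFileName();
using (var tempCopy = new FileStream(tempPath, FileMode.Create, FileAccess.ReadWrite,
                                     FileShare.None, 4096, FileOptions.DeleteOnClose))
{
    using (var source = new FileStream(file, FileMode.Open, FileAccess.Read))
    {
        source.CopyTo(tempCopy);   // duplicate the file; the original is freed when this block ends
    }
    tempCopy.Position = 0;         // rewind before handing the copy to the API
    api.Upload(tempCopy);          // the temp copy is deleted automatically when 'tempCopy' is disposed
}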

creating a file and allocating storage before writing to it

I have a file that gets GBs of data written to it over time (looping back to the start once it reaches the end). I would like to create the file ahead of time and preallocate its storage, so that the required space is never taken up by other downloads while the file is being written. This is done using Visual Studio 2012 in C#.
I have tried:
if (fileSizeRequirement or fileName is changed) // if filePath or file size is changed
{
    // Open or create the file, set the file to the size requirement, close the filestream
    fileStream = new FileStream(fileName, FileMode.OpenOrCreate, FileAccess.ReadWrite);
    fileStream.SetLength((long)fileSizeRequirement);
    fileStream.Close();
}
1) Is this an appropriate way to "preallocate" space for a file?
2) Will SetLength require a seek back to the beginning afterwards, or does the position in the file stay at the beginning?
3) What is the correct way to preallocate storage space for a file?
Thanks ahead of time and I appreciate any suggestions.
Using SetLength is a common approach although I'd generally use a using statement here.
using (var fileStream = new FileStream(fileName, FileMode.OpenOrCreate, FileAccess.ReadWrite))
{
    fileStream.SetLength((long)fileSizeRequirement);
}
Calling fileStream.Position straight after SetLength yields 0 so you shouldn't need to seek to the beginning.
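As a quick check of that claim, a small sketch (assuming the same fileName and fileSizeRequirement as above):
using (var fileStream = new FileStream(fileName, FileMode.OpenOrCreate, FileAccess.ReadWrite))
{
    fileStream.SetLength((long)fileSizeRequirement);
    Console.WriteLine(fileStream.Position);   // prints 0: the position was at the start and SetLength leaves it there
    fileStream.Seek(0, SeekOrigin.Begin);     // optional: be explicit before writing anyway
}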

Cannot access the file because it is being used by another process

My web method creates a PDF file in my %temp% folder, and that works. I then want to add some custom (meta) fields to that file using the code below.
The PdfStamper class throws an IOException, whether I call its .Close() method or simply let the using block end. The process that is still holding on to the file handle is the WebDev web server itself (I'm debugging in VS2010 SP1).
private string AddCustomMetaData(string guid, int companyID, string filePath)
{
    try
    {
        PdfReader reader = new PdfReader(filePath);
        using (FileStream fs = new FileStream(filePath, FileMode.Open, FileAccess.ReadWrite, FileShare.ReadWrite))
        {
            PdfStamper st = new PdfStamper(reader, fs);
            Dictionary<string, string> info = reader.Info;
            info.Add("Guid", guid);
            info.Add("CompanyID", companyID.ToString());
            st.MoreInfo = info;
            st.Close();
        }
        reader.Close();
        return guid;
    }
    catch (Exception e)
    {
        return e.Message;
    }
}
No matter what I try, it keeps throwing the exception at st.Close(); to be more precise:
The process cannot access the file 'C:\Users\[my username]\AppData\Local\Temp\53b96eaf-74a6-49d7-a715-6c2e866a63c3.pdf' because it is being used by another process.
Either I'm overlooking something obvious or there's a problem with the PdfStamper class I'm not yet aware of. The iTextSharp versions used are 5.3.3.0 and 5.4.0.0; the issue is the same with both.
Any insight would be greatly appreciated.
EDIT: I'm currently "coding around" the issue, but I haven't found any solution.
Your problem is that you are writing to a file while you are also reading from it. Unlike some file types (JPG, PNG, etc) that "load" all of the data into memory, iTextSharp reads the data as a stream. You either need to use two files and swap them at the end or you can force iTextSharp to "load" the first file by binding your PdfReader to a byte array of the file.
PdfReader reader = new PdfReader(System.IO.File.ReadAllBytes(filePath));
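Applied to the method from the question, that might look like the sketch below; note that I also open the output with FileMode.Create so the stamped PDF fully replaces the original (adjust if you need different behaviour):
private string AddCustomMetaData(string guid, int companyID, string filePath)
{
    try
    {
        // Load the whole PDF into memory so iTextSharp no longer holds the file open for reading.
        PdfReader reader = new PdfReader(System.IO.File.ReadAllBytes(filePath));
        using (FileStream fs = new FileStream(filePath, FileMode.Create, FileAccess.Write))
        {
            PdfStamper st = new PdfStamper(reader, fs);
            Dictionary<string, string> info = reader.Info;
            info.Add("Guid", guid);
            info.Add("CompanyID", companyID.ToString());
            st.MoreInfo = info;
            st.Close();
        }
        reader.Close();
        return guid;
    }
    catch (Exception e)
    {
        return e.Message;
    }
}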
I suggest you use the FileShare enumeration when you open the file, so try opening the file with no sharing:
File.Open(fileName, FileMode.Open, FileAccess.Read, FileShare.None);
Try calling .Dispose() on your PDF reader (or whatever you use to create it) when you save the file for the first time.
Try this solution if you think it's feasible for you: once the web method creates the file in the Temp folder, copy the file to another location (or to the same location under a different name) and pass the newly copied file's path to your PDF reader.
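A minimal sketch of that copy-first suggestion (the copy's name is made up for illustration):
// Work on a copy so the original PDF is never read and written at the same time.
string copyPath = Path.Combine(Path.GetTempPath(), Guid.NewGuid() + ".pdf");
File.Copy(filePath, copyPath, true);
PdfReader reader = new PdfReader(copyPath);   // read from the copy, write the stamped output to the original path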

Unzip file (odt or docx) in a memory stream

I have been developing a web application with ASP.NET, and I'm using SharpZipLib to work with odt files (from Open Office) and, in the future, docx files (for MS Office). I need to open an odt file (which is really a zip file), change an XML file inside it, zip it up again, and have the browser send it to my client.
I can do this on the file system, but it temporarily takes up space on my disk and we don't want that. I would like to do this in memory (with a MemoryStream), but I don't know how to unzip folders/files into a memory stream with SharpZipLib, change them, and zip them up again. Is there any sample showing how to do this?
Thank you
You can use something like:
// Open the source archive (or any other stream you already have)
Stream inputStream = File.OpenRead(...);
// for reading the existing archive
ZipInputStream zipInputStream = new ZipInputStream(inputStream);
// for the in-memory output
MemoryStream memoryStream = new MemoryStream();
using (ZipOutputStream zipStream = new ZipOutputStream(memoryStream))
{
    ZipEntry entry = new ZipEntry("...");
    //...
    zipStream.PutNextEntry(entry);
    zipStream.Write(data, 0, data.Length);   // 'data' holds the (possibly modified) content of the entry
    //...
    zipStream.Finish();
    zipStream.Close();
}
Edit:
In general, you need to unzip your file, get each ZipEntry, change it, and write it to a ZipOutputStream backed by a MemoryStream.
Use this article: http://www.codeproject.com/KB/cs/Zip_UnZip.aspx
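Putting those pieces together, a sketch of the whole round trip in memory, assuming SharpZipLib's ZipInputStream/ZipOutputStream (the entry name and the TransformXml helper are illustrative placeholders):
using System.IO;
using ICSharpCode.SharpZipLib.Zip;

MemoryStream outputStream = new MemoryStream();
using (ZipInputStream zipIn = new ZipInputStream(File.OpenRead(odtPath)))
using (ZipOutputStream zipOut = new ZipOutputStream(outputStream))
{
    zipOut.IsStreamOwner = false;   // keep outputStream usable after zipOut is disposed
    ZipEntry entry;
    while ((entry = zipIn.GetNextEntry()) != null)
    {
        // Read the current entry fully into memory.
        MemoryStream entryData = new MemoryStream();
        zipIn.CopyTo(entryData);
        byte[] data = entryData.ToArray();

        // Change the entry you care about; copy everything else through unchanged.
        if (entry.Name == "content.xml")
            data = TransformXml(data);   // hypothetical helper that edits the XML bytes

        zipOut.PutNextEntry(new ZipEntry(entry.Name));
        zipOut.Write(data, 0, data.Length);
        zipOut.CloseEntry();
    }
    zipOut.Finish();
}
// outputStream now holds the rebuilt archive, ready to send to the browser.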

How to read a file which is currently used, like Windows does when copying it?

One of my applications is intended to read (and only read) files which may be in use.
But when reading a file which is already open in, for example, Microsoft Word, this application throws a System.IO.IOException:
The process cannot access the file '<filename here>' because it is being used by another process.
The code used to read the file is:
using (Stream stream = new FileStream(fileName, FileMode.Open, FileAccess.Read, FileShare.ReadWrite | FileShare.Delete))
{
    // Do stuff here.
}
Of course, since the file is already in use, this exception is expected.
Now, if I ask the operating system to copy the file to a new location and then read the copy, it works:
string tempFileName = Path.GetTempFileName();
File.Copy(fileName, tempFileName, true);
// ↓ We read the newly created file.
using (Stream stream = new FileStream(tempFileName, FileMode.Open, FileAccess.Read, FileShare.ReadWrite | FileShare.Delete))
{
    // Do stuff here.
}
What is the magic in File.Copy that allows it to read a file already in use by another application, and more importantly, how can I use this magic to read the file without making a temporary copy?
Nice question there. Have a look at this; it seems to suggest that using FileShare.ReadWrite only is the key, so it's worth a shot:
http://www.geekzilla.co.uk/viewD21B312F-242A-4038-9E9B-AE6AAB53DAE0.htm
Try removing FileShare.ReadWrite | FileShare.Delete from the FileStream constructor, or at least FileShare.Delete.
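A sketch of the first suggestion, requesting read access while sharing the file for read and write, without asking for delete sharing:
using (Stream stream = new FileStream(fileName, FileMode.Open, FileAccess.Read, FileShare.ReadWrite))
{
    // Do stuff here.
}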
