Cannot read file in uploaded path (used by another process) - C#

I upload a CSV file to the App_Data folder in my project and read its content with the code below:
using (var fs = new FileStream(Path.Combine(uploadPath, name), chunk == 0 ? FileMode.Create : FileMode.Append))
{
    var buffer = new byte[fileUpload.InputStream.Length];
    fileUpload.InputStream.Read(buffer, 0, buffer.Length);
    fs.Write(buffer, 0, buffer.Length);

    var reader = new StreamReader(System.IO.File.OpenRead(fs.Name)); // I checked the path and the file itself in the App_Data folder; it's OK
    List<string> listA = new List<string>();
    List<string> listB = new List<string>();
    while (!reader.EndOfStream)
    {
        var line = reader.ReadLine();
        var values = line.Split(';');
    }
}
But it throws an IOException with the message:
The process cannot access the file 'c:\path\App_Data\o_1amtdiagc18991ndq1c1k1c2v1bama.csv'
because it is being used by another process.
I can't understand how and why it is being used; I created this file from the original uploaded file with a unique name.

I can't understand how and why it is being used
Because you've not closed the stream that's writing to it:
using (var fs = new FileStream(Path.Combine(uploadPath, name), ...)
I would suggest you write the file, let the using block end so the handle is released, and then read it:
string fullName = Path.Combine(uploadPath, name);
using (var fs = ...)
{
    // Code as before, but ideally taking note of the return value
    // of Stream.Read, which you're currently ignoring. Consider
    // using Stream.CopyTo instead.
}
// Now the file has been closed
using (var reader = File.OpenText(fullName))
{
    // Read here
}
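Putting that together, a minimal sketch of the write-then-read pattern (assuming the same uploadPath, name, chunk and fileUpload variables as in the question) might look like this:

string fullName = Path.Combine(uploadPath, name);

// Write (or append) the uploaded chunk and let the using block release the handle.
using (var fs = new FileStream(fullName, chunk == 0 ? FileMode.Create : FileMode.Append))
{
    // CopyTo handles the read/write loop and avoids ignoring Read's return value.
    fileUpload.InputStream.CopyTo(fs);
}

// The FileStream is now closed, so the file can be opened again for reading.
var listA = new List<string>();
var listB = new List<string>();
using (var reader = File.OpenText(fullName))
{
    while (!reader.EndOfStream)
    {
        var line = reader.ReadLine();
        var values = line.Split(';');
        // Populate listA/listB from values as needed.
    }
}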


SharePoint Client Object Model download files C#

I am trying to download files from a SharePoint library using the client object model. I seem to be able to access the files using OpenBinaryStream() and then executing the query, but when I try to access the stream, it is a stream of Length = 0. I've seen many examples and I've tried several, but I can't get the files to download. I've uploaded successfully, and credentials and permissions aren't the problem. Anyone have any thoughts?
public SharepointFileContainer DownloadFolder(bool includeSubfolders, params object[] path)
{
    try
    {
        List<string> pathStrings = new List<string>();
        foreach (object o in path)
            pathStrings.Add(o.ToString());

        var docs = _context.Web.Lists.GetByTitle(Library);
        _context.Load(docs);
        _context.ExecuteQuery();

        var rootFolder = docs.RootFolder;
        _context.Load(rootFolder);
        _context.ExecuteQuery();

        var folder = GetFolder(rootFolder, pathStrings);
        var files = folder.Files;
        _context.Load(files);
        _context.ExecuteQuery();

        SharepointFileContainer remoteFiles = new SharepointFileContainer();
        foreach (Sharepoint.File f in files)
        {
            _context.Load(f);
            var file = f.OpenBinaryStream();
            _context.ExecuteQuery();

            var memoryStream = new MemoryStream();
            file.Value.CopyTo(memoryStream);
            remoteFiles.Files.Add(f.Name, memoryStream);
        }
        ...
}
SharepointFileContainer is just a custom class for my calling application to dispose of the streams when it has finished processing them. GetFolder is a recursive method to drill down the given folder path. I've had problems with providing the direct url and have had the most success with this.
My big question is why "file.Value" is a Stream with a Length == 0?
Thanks in advance!
EDIT:
Thanks for your input so far...unfortunately I'm experiencing the same problem. Both solutions pitched make use of OpenBinaryDirect. The resulting FileInformation class has this for the stream...
I'm still getting a file with 0 bytes downloaded.
You need to get the list item of the file (as a ListItem object) and then use its File property. Something like:
//...
// Previous code
//...
var docs = clientContext.Web.Lists.GetByTitle(Library);
var listItem = docs.GetItemById(listItemId);
clientContext.Load(docs);
clientContext.Load(listItem, i => i.File);
clientContext.ExecuteQuery();

var fileRef = listItem.File.ServerRelativeUrl;
var fileInfo = Microsoft.SharePoint.Client.File.OpenBinaryDirect(clientContext, fileRef);
var fileName = Path.Combine(filePath, (string)listItem.File.Name);
using (var fileStream = System.IO.File.Create(fileName))
{
    fileInfo.Stream.CopyTo(fileStream);
}
After that you can do whatever you need with the stream. The code above just saves the file to the specified path, but you could also return it to the browser as a download, etc.
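For the browser-download case, a minimal sketch (assuming an ASP.NET MVC controller, and that clientContext is an initialized ClientContext and fileRef the server-relative URL obtained as above) could look like this:

public ActionResult DownloadSharePointFile(string fileRef, string fileName)
{
    // fileRef is assumed to be listItem.File.ServerRelativeUrl from the snippet above.
    var fileInfo = Microsoft.SharePoint.Client.File.OpenBinaryDirect(clientContext, fileRef);

    // Buffer the response stream so it can be handed to MVC after the CSOM call completes.
    var memoryStream = new MemoryStream();
    fileInfo.Stream.CopyTo(memoryStream);
    memoryStream.Position = 0; // rewind before returning it

    return File(memoryStream, "application/octet-stream", fileName);
}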
We can use the following code to get the memory stream.
var fileInformation = Microsoft.SharePoint.Client.File.OpenBinaryDirect(clientContext, file.ServerRelativeUrl);
if (fileInformation != null && fileInformation.Stream != null)
{
    using (MemoryStream memoryStream = new MemoryStream())
    {
        byte[] buffer = new byte[32768];
        int bytesRead;
        do
        {
            bytesRead = fileInformation.Stream.Read(buffer, 0, buffer.Length);
            memoryStream.Write(buffer, 0, bytesRead);
        } while (bytesRead != 0);
        // memoryStream now holds the file contents; rewind it (memoryStream.Position = 0)
        // before handing it to whatever consumes it.
    }
}
Reference: https://praveenkasireddy.wordpress.com/2012/11/11/download-document-from-document-set-using-client-object-model-om/

Archiving Multiple Files Using ZipOutputStream gives me an Empty Archived File

I am attempting to download a bunch of files that I am zipping up (archiving) via ZipOutputStream.
using (var zipStream = new ZipOutputStream(outputMemStream))
{
    foreach (var documentIdString in documentUniqueIdentifiers)
    {
        ...
        var blockBlob = container.GetBlockBlobReference(documentId.ToString());
        var fileMemoryStream = new MemoryStream();
        blockBlob.DownloadToStream(fileMemoryStream);

        zipStream.SetLevel(3);
        fileMemoryStream.Position = 0;

        ZipEntry newEntry = new ZipEntry(document.FileName);
        newEntry.DateTime = DateTime.Now;
        zipStream.PutNextEntry(newEntry);

        fileMemoryStream.Seek(0, SeekOrigin.Begin);
        StreamUtils.Copy(fileMemoryStream, zipStream, new byte[4096]);
        zipStream.IsStreamOwner = false; // False stops the Close also Closing the underlying stream.
    }
    outputMemStream.Seek(0, SeekOrigin.Begin);
    return outputMemStream;
}
In my controller I return the following, which should download the zip file created in the code above. The controller action downloads the file as it should in the browser, but the archived file is empty. I can see that the content length is populated on the stream returned from the method above...
file.Seek(0, SeekOrigin.Begin);
return File(file, "application/octet-stream", "Archive.zip");
Does anyone have any idea why the file returned by my controller is empty or corrupt?
I believe you need to close your entries and your final zip stream. You should also wrap all of your streams in using blocks so they are disposed. Try this:
using (var zipStream = new ZipOutputStream(outputMemStream))
{
    zipStream.IsStreamOwner = false;
    // Set compression level
    zipStream.SetLevel(3);
    foreach (var documentIdString in documentUniqueIdentifiers)
    {
        ...
        var blockBlob = container.GetBlockBlobReference(documentId.ToString());
        using (var fileMemoryStream = new MemoryStream())
        {
            // Populate stream with bytes
            blockBlob.DownloadToStream(fileMemoryStream);

            // Create zip entry and set date
            ZipEntry newEntry = new ZipEntry(document.FileName);
            newEntry.DateTime = DateTime.Now;

            // Put entry RECORD, not actual data
            zipStream.PutNextEntry(newEntry);

            // Copy data into the zip RECORD
            StreamUtils.Copy(fileMemoryStream, zipStream, new byte[4096]);

            // Mark this RECORD closed in the zip
            zipStream.CloseEntry();
        }
    }
    // Close the zip stream; the parent stays open because IsStreamOwner is false
    zipStream.Close();

    outputMemStream.Seek(0, SeekOrigin.Begin);
    return outputMemStream;
}
EDIT - you should remove:
// Reset position of stream
fileMemoryStream.Position = 0;
Pretty sure that's the problem.
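On the controller side, a minimal sketch of the action that consumes the corrected method (hypothetically named BuildZipArchive here, returning the rewound MemoryStream from the snippet above) might look like this; the FileStreamResult produced by File() writes the stream to the response and, in ASP.NET MVC, disposes it afterwards, so it should not be disposed in the action:

public ActionResult DownloadDocuments(List<string> documentUniqueIdentifiers)
{
    // BuildZipArchive is assumed to be the corrected zipping method shown above.
    MemoryStream archive = BuildZipArchive(documentUniqueIdentifiers);

    // Hand the stream to MVC; File() takes care of writing and cleanup.
    return File(archive, "application/octet-stream", "Archive.zip");
}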

How can I unzip a file to a .NET memory stream?

I have files (from 3rd parties) that are being FTP'd to a directory on our server. I download them and process them every 'x' minutes. Works great.
Now, some of the files are .zip files, which means I can't process them directly; I need to unzip them first.
FTP has no concept of zipping/unzipping - so I'll need to grab the zip file, unzip it, then process it.
Looking at the MSDN zip API, there seems to be no way I can unzip to a memory stream?
So is the only way to do this:
1. Unzip to a file (what directory? need some -very- temp location ...)
2. Read the file contents
3. Delete the file.
NOTE: The contents of the file are small - say 4k <-> 1000k.
Zip compression support is built in:
using System.IO;
using System.IO.Compression;
// ^^^ requires a reference to System.IO.Compression.dll

static class Program
{
    const string path = ...

    static void Main()
    {
        using (var file = File.OpenRead(path))
        using (var zip = new ZipArchive(file, ZipArchiveMode.Read))
        {
            foreach (var entry in zip.Entries)
            {
                using (var stream = entry.Open())
                {
                    // do whatever we want with stream
                    // ...
                }
            }
        }
    }
}
Normally you should avoid copying it into another stream and just use it as-is; however, if you absolutely need it in a MemoryStream, you could do:
using (var ms = new MemoryStream())
{
    stream.CopyTo(ms);
    ms.Position = 0; // rewind
    // do something with ms
}
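If what you ultimately need is a byte array rather than a stream, a small follow-up on the snippet above (using the same ms variable, before it is disposed):

byte[] data = ms.ToArray(); // copies the buffered contents into a new array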
You can use ZipArchiveEntry.Open to get a stream.
This code assumes the zip archive has one text file.
using (FileStream fs = new FileStream(path, FileMode.Open))
using (ZipArchive zip = new ZipArchive(fs))
{
    var entry = zip.Entries.First();
    using (StreamReader sr = new StreamReader(entry.Open()))
    {
        Console.WriteLine(sr.ReadToEnd());
    }
}
using (ZipArchive archive = new ZipArchive(webResponse.GetResponseStream()))
{
    foreach (ZipArchiveEntry entry in archive.Entries)
    {
        using (Stream s = entry.Open())
        using (var sr = new StreamReader(s))
        {
            var myStr = sr.ReadToEnd();
        }
    }
}
Looks like this is what you need:
using (var za = ZipFile.OpenRead(path))
{
    foreach (var entry in za.Entries)
    {
        using (var r = new StreamReader(entry.Open()))
        {
            // your code here
        }
    }
}
You can use SharpZipLib among a variety of other libraries to achieve this.
The following code example from their wiki shows the in-memory, stream-based approach (note that this particular example compresses a supplied MemoryStream into a zip returned as a MemoryStream):
using ICSharpCode.SharpZipLib.Core;
using ICSharpCode.SharpZipLib.Zip;

// Compresses the supplied memory stream, naming it as zipEntryName, into a zip,
// which is returned as a memory stream or a byte array.
public MemoryStream CreateToMemoryStream(MemoryStream memStreamIn, string zipEntryName)
{
    MemoryStream outputMemStream = new MemoryStream();
    ZipOutputStream zipStream = new ZipOutputStream(outputMemStream);

    zipStream.SetLevel(3); // 0-9, 9 being the highest level of compression

    ZipEntry newEntry = new ZipEntry(zipEntryName);
    newEntry.DateTime = DateTime.Now;
    zipStream.PutNextEntry(newEntry);

    StreamUtils.Copy(memStreamIn, zipStream, new byte[4096]);
    zipStream.CloseEntry();

    zipStream.IsStreamOwner = false; // False stops the Close also closing the underlying stream.
    zipStream.Close();               // Must finish the ZipOutputStream before using outputMemStream.

    outputMemStream.Position = 0;
    return outputMemStream;

    // Alternative outputs:
    // ToArray is the cleaner and easiest to use correctly, with the penalty of duplicating allocated memory.
    // byte[] byteArrayOut = outputMemStream.ToArray();

    // GetBuffer returns a raw buffer, so you need to account for the true length yourself.
    // byte[] byteArrayOut = outputMemStream.GetBuffer();
    // long len = outputMemStream.Length;
}
OK, so combining all of the above, suppose you want a very simple way to take a zip file called "file.zip" and extract it to the "C:\temp" folder. (Note: this example was only tested with compressed text files; you may need to make some modifications for binary files, as sketched after the code below.)
using System.IO;
using System.IO.Compression;
// ZipFile additionally requires a reference to System.IO.Compression.FileSystem.dll

static class Program
{
    static void Main(string[] args)
    {
        // Call it like this:
        Unzip("file.zip", @"C:\temp");
    }

    static void Unzip(string sourceZip, string targetPath)
    {
        using (var z = ZipFile.OpenRead(sourceZip))
        {
            foreach (var entry in z.Entries)
            {
                using (var r = new StreamReader(entry.Open()))
                {
                    string uncompressedFile = Path.Combine(targetPath, entry.Name);
                    File.WriteAllText(uncompressedFile, r.ReadToEnd());
                }
            }
        }
    }
}
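For binary files, a hedged variant of the Unzip method above (UnzipBinary is just an illustrative name) copies each entry's raw bytes instead of going through a StreamReader; alternatively, if no per-entry filtering is needed, ZipFile.ExtractToDirectory does the whole job in one call:

static void UnzipBinary(string sourceZip, string targetPath)
{
    using (var z = ZipFile.OpenRead(sourceZip))
    {
        foreach (var entry in z.Entries)
        {
            // Skip directory entries (they have an empty Name).
            if (string.IsNullOrEmpty(entry.Name))
                continue;

            string targetFile = Path.Combine(targetPath, entry.Name);
            using (var entryStream = entry.Open())
            using (var fileStream = File.Create(targetFile))
            {
                entryStream.CopyTo(fileStream); // byte-for-byte copy, safe for binary content
            }
        }
    }

    // Alternatively, when the archive's folder structure should be preserved as-is:
    // ZipFile.ExtractToDirectory(sourceZip, targetPath);
}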

ResXResourceWriter not writing to stream?

I'm trying to create a resx file and write it to a stream so that I can return it as a string instead of immediately saving it to a file. However, when I try to read that stream, it is empty. What am I doing wrong here? I did verify that the entries are not null. I can actually use the ResXResourceWriter constructor that saves it to disk successfully, but I'm trying to avoid using temp files. Also, I can see the stream is 0k before the loop and about 8k in length after the loop.
using (var stream = new MemoryStream())
{
    using (var resx = new ResXResourceWriter(stream))
    {
        // build the resx and write to memory
        foreach (var entry in InputFile.Entries.Values)
        {
            resx.AddResource(new ResXDataNode(entry.Key, entry.Value) { Comment = entry.Comment });
        }

        var reader = new StreamReader(stream);
        var text = reader.ReadToEnd(); // text is an empty string here!
        return null;
    }
}
You need to flush the writer and reset the stream position before trying to read it. This should work, using Generate and Position:
resx.Generate();
stream.Position = 0;
var reader = new StreamReader(stream);
var text = reader.ReadToEnd();
return text;
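Put back into the question's method, a minimal sketch (assuming the same InputFile structure as in the question) would be:

using (var stream = new MemoryStream())
using (var resx = new ResXResourceWriter(stream))
{
    foreach (var entry in InputFile.Entries.Values)
    {
        resx.AddResource(new ResXDataNode(entry.Key, entry.Value) { Comment = entry.Comment });
    }

    resx.Generate();     // flush the writer so the XML actually lands in the stream
    stream.Position = 0; // rewind before reading

    using (var reader = new StreamReader(stream))
    {
        return reader.ReadToEnd(); // the resx document as a string
    }
}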
