C# should I close the streams when I'm using "using"?

I have a service running on a server that zips files, and I've noticed that the memory it consumes increases each day: when I deployed it on the server it was consuming 3.6 MB; today, 3 months later, it is consuming 180 MB.
This is part of the code that I'm using:
for (i = 0; i < files.Count; i++)
{
    try
    {
        if (File.Exists(@dir + zipToUpdate) && new FileInfo(@dir + zipToUpdate).Length < 104857600)
        {
            using (FileStream zipToOpen = new FileStream(@dir + zipToUpdate, FileMode.Open))
            {
                using (ZipArchive archive = new ZipArchive(zipToOpen, ZipArchiveMode.Update, false))
                {
                    if (File.GetCreationTime(@dir + files.ElementAt(i)).AddHours(FileAge) < DateTime.Now)
                    {
                        ZipArchiveEntry fileEntry = archive.CreateEntry(files.ElementAt(i));
                        using (BinaryWriter writer = new BinaryWriter(fileEntry.Open()))
                        {
                            using (FileStream sr = new FileStream(@dir + files.ElementAt(i), FileMode.Open, FileAccess.Read))
                            {
                                byte[] block = new byte[32768];
                                int bytesRead = 0;
                                while ((bytesRead = sr.Read(block, 0, block.Length)) > 0)
                                {
                                    writer.Write(block, 0, bytesRead);
                                    block = new byte[32768];
                                }
                            }
                        }
                        File.Delete(@dir + files.ElementAt(i));
                    }
                }
            }
        }
        else
        {
            createZip(files.GetRange(i, files.Count - i), dir + "\\", getZipName(dir, zipToUpdate));
            return;
        }
    }
    catch (Exception ex)
    {
        rootlog.Error(string.Format("Erro Run - updateZip: {0}", ex.Message));
    }
}
The creation of the zip and the update are similar, so there is no point in pasting both.
I call this recursively for the folders inside, and the service runs once every hour.
So, my question is whether all these streams are what is making my memory usage grow month after month, or whether it could be something else.

The using statement takes care of disposing (and therefore closing) the IDisposable object it declares. This is not the source of the potential memory leak you're observing.
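For reference, a using block is just compiler sugar for a try/finally that disposes the object, so every stream in the code above is closed deterministically. A minimal sketch of the equivalence:

using (FileStream zipToOpen = new FileStream(path, FileMode.Open))
{
    // work with the stream
}

// ...compiles to roughly:

FileStream zipToOpen = new FileStream(path, FileMode.Open);
try
{
    // work with the stream
}
finally
{
    if (zipToOpen != null)
        ((IDisposable)zipToOpen).Dispose(); // Dispose() also closes the file handle
}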

Related

Upload file chunks to SPS 2013 - Method "StartUpload" does not exist at line

I am trying to upload a large file (1 GB) from code to SharePoint 2013 on-premises. I followed this tutorial, downloaded the "Microsoft.SharePointOnline.CSOM" package from NuGet, and tried this piece of code:
public Microsoft.SharePoint.Client.File UploadFileSlicePerSlice(ClientContext ctx, string libraryName, string fileName, int fileChunkSizeInMB = 3)
{
    // Each sliced upload requires a unique ID.
    Guid uploadId = Guid.NewGuid();
    // Get the name of the file.
    string uniqueFileName = Path.GetFileName(fileName);
    // Ensure that the target library exists, and create it if it is missing.
    if (!LibraryExists(ctx, ctx.Web, libraryName))
    {
        CreateLibrary(ctx, ctx.Web, libraryName);
    }
    // Get the folder to upload into.
    List docs = ctx.Web.Lists.GetByTitle(libraryName);
    ctx.Load(docs, l => l.RootFolder);
    // Get the information about the folder that will hold the file.
    ctx.Load(docs.RootFolder, f => f.ServerRelativeUrl);
    ctx.ExecuteQuery();
    // File object.
    Microsoft.SharePoint.Client.File uploadFile;
    // Calculate block size in bytes.
    int blockSize = fileChunkSizeInMB * 1024 * 1024;
    // Get the size of the file.
    long fileSize = new FileInfo(fileName).Length;
    if (fileSize <= blockSize)
    {
        // Use the regular approach.
        using (FileStream fs = new FileStream(fileName, FileMode.Open))
        {
            FileCreationInformation fileInfo = new FileCreationInformation();
            fileInfo.ContentStream = fs;
            fileInfo.Url = uniqueFileName;
            fileInfo.Overwrite = true;
            uploadFile = docs.RootFolder.Files.Add(fileInfo);
            ctx.Load(uploadFile);
            ctx.ExecuteQuery();
            // Return the file object for the uploaded file.
            return uploadFile;
        }
    }
    else
    {
        // Use the large-file upload approach.
        ClientResult<long> bytesUploaded = null;
        FileStream fs = null;
        try
        {
            fs = System.IO.File.Open(fileName, FileMode.Open, FileAccess.Read, FileShare.ReadWrite);
            using (BinaryReader br = new BinaryReader(fs))
            {
                byte[] buffer = new byte[blockSize];
                byte[] lastBuffer = null;
                long fileoffset = 0;
                long totalBytesRead = 0;
                int bytesRead;
                bool first = true;
                bool last = false;
                // Read data from the file system in blocks.
                while ((bytesRead = br.Read(buffer, 0, buffer.Length)) > 0)
                {
                    totalBytesRead = totalBytesRead + bytesRead;
                    // You've reached the end of the file.
                    if (totalBytesRead == fileSize)
                    {
                        last = true;
                        // Copy to a new buffer that has the correct size.
                        lastBuffer = new byte[bytesRead];
                        Array.Copy(buffer, 0, lastBuffer, 0, bytesRead);
                    }
                    if (first)
                    {
                        using (MemoryStream contentStream = new MemoryStream())
                        {
                            // Add an empty file.
                            FileCreationInformation fileInfo = new FileCreationInformation();
                            fileInfo.ContentStream = contentStream;
                            fileInfo.Url = uniqueFileName;
                            fileInfo.Overwrite = true;
                            uploadFile = docs.RootFolder.Files.Add(fileInfo);
                            // Start the upload by uploading the first slice.
                            using (MemoryStream s = new MemoryStream(buffer))
                            {
                                // Call the start upload method on the first slice.
                                bytesUploaded = uploadFile.StartUpload(uploadId, s);
                                ctx.ExecuteQuery(); // <------ the exception is thrown here
                                // fileoffset is the pointer where the next slice will be added.
                                fileoffset = bytesUploaded.Value;
                            }
                            // You can only start the upload once.
                            first = false;
                        }
                    }
                    else
                    {
                        // Get a reference to your file.
                        uploadFile = ctx.Web.GetFileByServerRelativeUrl(docs.RootFolder.ServerRelativeUrl + System.IO.Path.AltDirectorySeparatorChar + uniqueFileName);
                        if (last)
                        {
                            // This is the last slice of data.
                            using (MemoryStream s = new MemoryStream(lastBuffer))
                            {
                                // End the sliced upload by calling FinishUpload.
                                uploadFile = uploadFile.FinishUpload(uploadId, fileoffset, s);
                                ctx.ExecuteQuery();
                                // Return the file object for the uploaded file.
                                return uploadFile;
                            }
                        }
                        else
                        {
                            using (MemoryStream s = new MemoryStream(buffer))
                            {
                                // Continue the sliced upload.
                                bytesUploaded = uploadFile.ContinueUpload(uploadId, fileoffset, s);
                                ctx.ExecuteQuery();
                                // Update fileoffset for the next slice.
                                fileoffset = bytesUploaded.Value;
                            }
                        }
                    }
                } // while ((bytesRead = br.Read(buffer, 0, buffer.Length)) > 0)
            }
        }
        finally
        {
            if (fs != null)
            {
                fs.Dispose();
            }
        }
    }
    return null;
}
But I'm getting a runtime ServerException with the message: Method "StartUpload" does not exist, at the line "ctx.ExecuteQuery();" (<-- I marked this line in the code).
I also tried with the SharePoint2013 package, but the "StartUpload" method is not supported in that package.
UPDATE:
Adam's code worked for ~1 GB files. It turns out that inside web.config, at the path C:\inetpub\wwwroot\wss\VirtualDirectories\{myport}\web.config,
the <requestLimits maxAllowedContentLength="2000000000"/> attribute is specified in bytes, not kilobytes as I thought at the beginning, so I changed it to 2000000000 and it worked.
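For reference, that attribute lives under IIS request filtering; a sketch of the surrounding web.config fragment (the value is in bytes):

<configuration>
  <system.webServer>
    <security>
      <requestFiltering>
        <!-- maxAllowedContentLength is in bytes (here roughly 2 GB) -->
        <requestLimits maxAllowedContentLength="2000000000" />
      </requestFiltering>
    </security>
  </system.webServer>
</configuration>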
A method to upload a 1 GB file to SP 2013 using CSOM that works (tested and developed over a couple of days of trying different approaches :) )
try
{
    Console.WriteLine("start " + DateTime.Now.ToLongDateString() + " " + DateTime.Now.ToLongTimeString());
    using (ClientContext context = new ClientContext("[URL]"))
    {
        context.Credentials = new NetworkCredential("[LOGIN]", "[PASSWORD]", "[DOMAIN]");
        context.RequestTimeout = -1;
        Web web = context.Web;
        if (context.HasPendingRequest)
            context.ExecuteQuery();
        byte[] fileBytes;
        using (var fs = new FileStream(@"D:\OneGB.rar", FileMode.Open, FileAccess.Read))
        {
            fileBytes = new byte[fs.Length];
            int bytesRead = fs.Read(fileBytes, 0, fileBytes.Length);
        }
        using (var fileStream = new System.IO.MemoryStream(fileBytes))
        {
            Microsoft.SharePoint.Client.File.SaveBinaryDirect(context, "/Shared Documents/" + "OneGB.rar", fileStream, true);
        }
    }
    Console.WriteLine("end " + DateTime.Now.ToLongDateString() + " " + DateTime.Now.ToLongTimeString());
}
catch (Exception ex)
{
    Console.WriteLine("error -> " + ex.Message);
}
finally
{
    Console.ReadLine();
}
Besides this I had to:
extend the max file upload size in Central Administration for this web application,
set 'Web Page Security Validation' on this web application to Never in Central Administration (the link in the original post has a screenshot showing how to set it),
extend the timeout on IIS.
The final result is in the screenshot from the original post - sorry for the language, but I usually work in PL.
The whole history is described in the linked post.
Install the SharePoint Online CSOM library using the command below.
Install-Package Microsoft.SharePointOnline.CSOM -Version 16.1.8924.1200
Then use the code below to upload the large file.
int blockSize = 8000000; // 8 MB
string fileName = "C:\\temp\\6GBTest.odt", uniqueFileName = String.Empty;
long fileSize;
Microsoft.SharePoint.Client.File uploadFile = null;
Guid uploadId = Guid.NewGuid();
using (ClientContext ctx = new ClientContext("siteUrl"))
{
    ctx.Credentials = new SharePointOnlineCredentials("user@tenant.onmicrosoft.com", GetSecurePassword());
    List docs = ctx.Web.Lists.GetByTitle("Documents");
    ctx.Load(docs.RootFolder, p => p.ServerRelativeUrl);
    // Use the large-file upload approach.
    ClientResult<long> bytesUploaded = null;
    FileStream fs = null;
    try
    {
        fs = System.IO.File.Open(fileName, FileMode.Open, FileAccess.Read, FileShare.ReadWrite);
        fileSize = fs.Length;
        uniqueFileName = System.IO.Path.GetFileName(fs.Name);
        using (BinaryReader br = new BinaryReader(fs))
        {
            byte[] buffer = new byte[blockSize];
            byte[] lastBuffer = null;
            long fileoffset = 0;
            long totalBytesRead = 0;
            int bytesRead;
            bool first = true;
            bool last = false;
            // Read data from the file system in blocks.
            while ((bytesRead = br.Read(buffer, 0, buffer.Length)) > 0)
            {
                totalBytesRead = totalBytesRead + bytesRead;
                // We've reached the end of the file. (Note: this must be ==, not <=,
                // or every slice would be treated as the last one.)
                if (totalBytesRead == fileSize)
                {
                    last = true;
                    // Copy to a new buffer that has the correct size.
                    lastBuffer = new byte[bytesRead];
                    Array.Copy(buffer, 0, lastBuffer, 0, bytesRead);
                }
                if (first)
                {
                    using (MemoryStream contentStream = new MemoryStream())
                    {
                        // Add an empty file.
                        FileCreationInformation fileInfo = new FileCreationInformation();
                        fileInfo.ContentStream = contentStream;
                        fileInfo.Url = uniqueFileName;
                        fileInfo.Overwrite = true;
                        uploadFile = docs.RootFolder.Files.Add(fileInfo);
                        // Start the upload by uploading the first slice.
                        using (MemoryStream s = new MemoryStream(buffer))
                        {
                            // Call the start upload method on the first slice.
                            bytesUploaded = uploadFile.StartUpload(uploadId, s);
                            ctx.ExecuteQuery();
                            // fileoffset is the pointer where the next slice will be added.
                            fileoffset = bytesUploaded.Value;
                        }
                        // We can only start the upload once.
                        first = false;
                    }
                }
                else
                {
                    // Get a reference to our file.
                    uploadFile = ctx.Web.GetFileByServerRelativeUrl(docs.RootFolder.ServerRelativeUrl + System.IO.Path.AltDirectorySeparatorChar + uniqueFileName);
                    if (last)
                    {
                        // This is the last slice of data.
                        using (MemoryStream s = new MemoryStream(lastBuffer))
                        {
                            // End the sliced upload by calling FinishUpload.
                            uploadFile = uploadFile.FinishUpload(uploadId, fileoffset, s);
                            ctx.ExecuteQuery();
                            // Return the file object for the uploaded file.
                            return uploadFile;
                        }
                    }
                    else
                    {
                        using (MemoryStream s = new MemoryStream(buffer))
                        {
                            // Continue the sliced upload.
                            bytesUploaded = uploadFile.ContinueUpload(uploadId, fileoffset, s);
                            ctx.ExecuteQuery();
                            // Update fileoffset for the next slice.
                            fileoffset = bytesUploaded.Value;
                        }
                    }
                }
            }
        }
    }
    finally
    {
        if (fs != null)
        {
            fs.Dispose();
        }
    }
}
Or download the example code from GitHub.
Large file upload with CSOM
I'm looking for a way to upload a 1 GB file to SharePoint 2013.
You can change the upload limit with the PowerShell below:
$a = [Microsoft.SharePoint.Administration.SPWebService]::ContentService
$a.ClientRequestServiceSettings.MaxReceivedMessageSize = 209715200
$a.Update()
References:
https://thuansoldier.net/4328/
https://blogs.msdn.microsoft.com/sridhara/2010/03/12/uploading-files-using-client-object-model-in-sharepoint-2010/
https://social.msdn.microsoft.com/Forums/en-US/09a41ba4-feda-4cf3-aa29-704cd92b9320/csom-microsoftsharepointclientserverexception-method-8220startupload8221-does-not-exist?forum=sharepointdevelopment
Update:
The SharePoint CSOM request size is very limited: it cannot exceed 2 MB, and you cannot change this setting in an Office 365 environment. If you have to upload bigger files you have to use the REST API. Here is the MSDN reference: https://msdn.microsoft.com/en-us/library/office/dn292553.aspx
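A minimal sketch of what that REST call can look like with HttpClient (this is not from the original answers; the folder path, digest handling, and authentication here are placeholder assumptions, see the MSDN link above for the full flow):

using System.Net.Http;
using System.Threading.Tasks;

// Hypothetical helper: assumes 'client' is already authenticated against the site
// and 'digest' holds a valid form digest value obtained from /_api/contextinfo.
static async Task UploadViaRestAsync(HttpClient client, string siteUrl, string digest, string localPath)
{
    byte[] bytes = System.IO.File.ReadAllBytes(localPath); // stream in chunks for very large files
    string name = System.IO.Path.GetFileName(localPath);
    string url = siteUrl
        + "/_api/web/GetFolderByServerRelativeUrl('/Shared Documents')"
        + "/Files/add(url='" + name + "',overwrite=true)";
    using (var content = new ByteArrayContent(bytes))
    {
        var request = new HttpRequestMessage(HttpMethod.Post, url) { Content = content };
        request.Headers.Add("X-RequestDigest", digest);
        HttpResponseMessage response = await client.SendAsync(request);
        response.EnsureSuccessStatusCode();
    }
}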
Also see:
https://gist.github.com/vgrem/10713514
File Upload to SharePoint 2013 using REST API
Ref: https://sharepoint.stackexchange.com/posts/149105/edit (see the 2nd answer).

Sharpcompress multi rar extract progress

I am building an app to extract from tar and rar archives. I can report progress for the tar based on the number of rars contained in it, as each one is extracted. Inside the rars there is one file spanning several volumes. I have used the code from the unit test examples:
var streams = testArchives.Select(s => Path.Combine(SCRATCH2_FILES_PATH, s)).Select(File.OpenRead).ToList();
using (var reader = RarReader.Open(streams))
{
    while (reader.MoveToNextEntry())
    {
        reader.WriteEntryToDirectory(SCRATCH_FILES_PATH, new ExtractionOptions()
        {
            ExtractFullPath = true,
            Overwrite = true
        });
    }
}
The problem is that the process does not report progress until the current entry has been extracted.
Don't use WriteEntryToDirectory to save the files, because it doesn't offer a progress callback. Instead, use a FileStream, get the full size of the uncompressed file, and compute the save progress yourself as you write.
Here is a simple example:
thread = new Thread(
    new ThreadStart(() =>
    {
        using (Archive = RarArchive.Open(streams, new ReaderOptions() { Password = password, LookForHeader = true }))
        {
            Archive.EntryExtractionBegin += EntryExtractionBeginEvent;
            Archive.CompressedBytesRead += CompressedBytesReadEvent;
            FilesTotalCount = Archive.Entries.Count();
            TotalSize = Archive.TotalSize;
            foreach (IArchiveEntry archiveEntry in Archive.Entries.Where(entry => !entry.IsDirectory))
            {
                Directory.CreateDirectory(Path.GetDirectoryName(path + "\\" + archiveEntry.Key));
                using (Stream archiveStream = archiveEntry.OpenEntryStream())
                using (FileStream fileStream = new FileStream(path + "\\" + archiveEntry.Key, FileMode.Create, FileAccess.ReadWrite, FileShare.ReadWrite))
                {
                    int byteSizes = 0;
                    byte[] buffer = new byte[bufferLength];
                    while (ThreadState == ThreadState.Running && (byteSizes = archiveStream.Read(buffer, 0, buffer.Length)) > 0)
                        fileStream.Write(buffer, 0, byteSizes);
                }
            }
        }
        IO.CloseStreams(streams);
    }));
thread.Start();
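For the progress reporting itself, the CompressedBytesRead event can be turned into a percentage against Archive.TotalSize. A minimal sketch, assuming SharpCompress's event-args types expose the running compressed byte count and the current entry (exact type and property names may vary between versions):

// Hypothetical handlers wired up as in the snippet above.
private void CompressedBytesReadEvent(object sender, CompressedBytesReadEventArgs e)
{
    // TotalSize was captured from Archive.TotalSize before extraction started.
    double percent = TotalSize > 0 ? 100.0 * e.CompressedBytesRead / TotalSize : 0;
    Console.WriteLine($"Extracting... {percent:F1}%");
}

private void EntryExtractionBeginEvent(object sender, ArchiveExtractionEventArgs<IArchiveEntry> e)
{
    Console.WriteLine($"Starting entry: {e.Item.Key}");
}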

Replicate Wave file using Naudio - Copy/Append Latest Available bytes

I have an active wave recording, wave-file.wav, being written to the Source folder.
I need to replicate this file to the Destination folder with a new name, wave-file-copy.wav.
The recording and the replication should happen in parallel.
I have implemented a scheduled job, which runs every 10 minutes and copies the source file to the destination.
private static void CopyWaveFile(string destinationFile, string sourceFile)
{
    using (var fs = File.Open(sourceFile, FileMode.Open, FileAccess.Read, FileShare.ReadWrite))
    {
        using (var reader = new WaveFileReader(fs))
        {
            using (var writer = new WaveFileWriter(destinationFile, reader.WaveFormat))
            {
                reader.Position = 0;
                var endPos = (int)reader.Length;
                var buffer = new byte[1024];
                while (reader.Position < endPos)
                {
                    var bytesRequired = (int)(endPos - reader.Position);
                    if (bytesRequired <= 0) continue;
                    var bytesToRead = Math.Min(bytesRequired, buffer.Length);
                    var bytesRead = reader.Read(buffer, 0, bytesToRead);
                    if (bytesRead > 0)
                    {
                        writer.Write(buffer, 0, bytesRead);
                    }
                }
            }
        }
    }
}
The copy operation works fine, even though the source file is updated continuously.
However, the time taken by the copy operation increases linearly, because I am copying the entire file every time.
I am trying to implement a new function, ConcatenateWavFiles(), which should update the content of the destination file with the latest available bytes of the source recording.
I have tried a few code samples; the approach I am using is:
Read the destination file's metadata and get its length.
Set the destination file's length as the reader.Position of the source file's WaveFileReader.
Read the source file to the end, starting from that position.
public static void ConcatenateWavFiles(string destinationFile, string sourceFile)
{
    WaveFileWriter waveFileWriter = null;
    var sourceReadOffset = GetWaveFileSize(destinationFile);
    try
    {
        using (var fs = File.Open(sourceFile, FileMode.Open, FileAccess.Read, FileShare.ReadWrite))
        {
            using (var reader = new WaveFileReader(fs))
            {
                waveFileWriter = new WaveFileWriter(destinationFile, reader.WaveFormat);
                if (!reader.WaveFormat.Equals(waveFileWriter.WaveFormat))
                {
                    throw new InvalidOperationException(
                        "Can't append WAV Files that don't share the same format");
                }
                var startPos = sourceReadOffset - sourceReadOffset % reader.WaveFormat.BlockAlign;
                var endPos = (int)reader.Length;
                reader.Position = startPos;
                var bytesRequired = (int)(endPos - reader.Position);
                var buffer = new byte[bytesRequired];
                if (bytesRequired > 0)
                {
                    var bytesToRead = Math.Min(bytesRequired, buffer.Length);
                    var bytesRead = reader.Read(buffer, 0, bytesToRead);
                    if (bytesRead > 0)
                    {
                        waveFileWriter.Write(buffer, startPos, bytesRead);
                    }
                }
            }
        }
    }
    finally
    {
        if (waveFileWriter != null)
        {
            waveFileWriter.Dispose();
        }
    }
}
I was able to get the new content.
Is it possible to append the latest content to the existing destination file?
If it is possible, what am I doing wrong in the code?
My code throws the following exception - Offset and length were out of bounds for the array or count is greater than the number of elements from index to the end of the source collection.
I couldn't find a solution for wave audio file replication with the NAudio library, but I implemented one using C# MemoryStreams and FileStreams:
Copy the source file to the destination if the destination file doesn't exist.
Append all the latest bytes (recorded after the last operation) to the destination file.
Modify the wave file header to reflect the newly appended bytes (otherwise only the file size grows while the reported duration stays the same).
Repeat this append operation at regular intervals.
public void ReplicateFile(string destinationFile, string sourceFile)
{
    if (!Directory.Exists(GetRoutePathFromFile(sourceFile)))
        return;
    if (!File.Exists(sourceFile))
        return;
    if (Directory.Exists(GetRoutePathFromFile(destinationFile)))
    {
        if (File.Exists(destinationFile))
        {
            UpdateLatestWaveFileContent(destinationFile, sourceFile);
        }
        else
        {
            CopyWaveFile(destinationFile, sourceFile);
        }
    }
    else
    {
        Directory.CreateDirectory(GetRoutePathFromFile(destinationFile));
        CopyWaveFile(destinationFile, sourceFile);
    }
}

private static string GetRoutePathFromFile(string file)
{
    var rootPath = Directory.GetParent(file);
    return rootPath.FullName;
}

private static void CopyWaveFile(string destination, string source)
{
    var sourceMemoryStream = new MemoryStream();
    using (var fs = File.Open(source, FileMode.Open, FileAccess.Read, FileShare.ReadWrite))
    {
        fs.CopyTo(sourceMemoryStream);
    }
    using (var fs = new FileStream(destination, FileMode.CreateNew, FileAccess.ReadWrite, FileShare.ReadWrite))
    {
        sourceMemoryStream.WriteTo(fs);
    }
}

private static void UpdateLatestWaveFileContent(string destinationFile, string sourceFile)
{
    var sourceMemoryStream = new MemoryStream();
    long offset = 0;
    using (var fs = File.Open(destinationFile, FileMode.Open, FileAccess.Read, FileShare.ReadWrite))
    {
        offset = fs.Length;
    }
    using (var fs = File.Open(sourceFile, FileMode.Open, FileAccess.Read, FileShare.ReadWrite))
    {
        fs.CopyTo(sourceMemoryStream);
    }
    var length = sourceMemoryStream.Length - offset;
    var buffer = sourceMemoryStream.GetBuffer();
    using (var fs = new FileStream(destinationFile, FileMode.Append, FileAccess.Write, FileShare.ReadWrite))
    {
        fs.Write(buffer, (int)offset, (int)length);
    }
    var bytes = new byte[45];
    for (var i = 0; i < 45; i++)
    {
        bytes[i] = buffer[i];
    }
    ModifyHeaderDataLength(destinationFile, 0, bytes);
}

private static void ModifyHeaderDataLength(string filename, int position, byte[] data)
{
    using (Stream stream = File.Open(filename, FileMode.OpenOrCreate, FileAccess.Write, FileShare.ReadWrite))
    {
        stream.Position = position;
        stream.Write(data, 0, data.Length);
    }
}
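As an alternative to copying the first bytes from the source, the two RIFF size fields can be patched directly. A minimal sketch, assuming the canonical 44-byte PCM WAV header with no extra chunks:

private static void PatchWavHeaderSizes(string path)
{
    using (var fs = new FileStream(path, FileMode.Open, FileAccess.ReadWrite, FileShare.ReadWrite))
    {
        long fileSize = fs.Length;
        // RIFF chunk size (offset 4): file size minus the 8-byte "RIFF" header.
        fs.Position = 4;
        fs.Write(BitConverter.GetBytes((uint)(fileSize - 8)), 0, 4);
        // data chunk size (offset 40): file size minus the full 44-byte header.
        fs.Position = 40;
        fs.Write(BitConverter.GetBytes((uint)(fileSize - 44)), 0, 4);
    }
}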
Try reading the source file one or two WAV blocks before the actual end of the file.
It may be that the code is judging the end of the still-growing source file too close for comfort.
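A minimal sketch of that idea with NAudio's WaveFileReader, backing the copy end position off by two block-aligned frames (the two-block margin is an assumption; tune as needed):

private static long GetSafeEndPosition(string sourceFile)
{
    using (var fs = File.Open(sourceFile, FileMode.Open, FileAccess.Read, FileShare.ReadWrite))
    using (var reader = new WaveFileReader(fs))
    {
        int blockAlign = reader.WaveFormat.BlockAlign;
        long safeEnd = reader.Length - 2 * blockAlign; // stay clear of the recorder's write cursor
        if (safeEnd < 0) safeEnd = 0;
        return safeEnd - safeEnd % blockAlign;         // keep it frame-aligned
    }
}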

How do I zip files in Xamarin for Android?

I have a function that creates a zip file from a string array of file paths that is passed in. The function does succeed in creating the zip file and the zip entry files inside it, but these zip entry files are empty. I've tried a couple of different methods - the function code below is the closest I've gotten to something working:
public static bool ZipFile(string[] arrFiles, string sZipToDirectory, string sZipFileName)
{
    if (Directory.Exists(sZipToDirectory))
    {
        FileStream fNewZipFileStream;
        ZipOutputStream zos;
        try
        {
            fNewZipFileStream = File.Create(sZipToDirectory + sZipFileName);
            zos = new ZipOutputStream(fNewZipFileStream);
            for (int i = 0; i < arrFiles.Length; i++)
            {
                ZipEntry entry = new ZipEntry(arrFiles[i].Substring(arrFiles[i].LastIndexOf("/") + 1));
                zos.PutNextEntry(entry);
                FileStream fStream = File.OpenRead(arrFiles[i]);
                BufferedStream bfStrm = new BufferedStream(fStream);
                byte[] buffer = new byte[bfStrm.Length];
                int count;
                while ((count = bfStrm.Read(buffer, 0, 1024)) != -1)
                {
                    zos.Write(buffer);
                }
                bfStrm.Close();
                fStream.Close();
                zos.CloseEntry();
            }
            zos.Close();
            fNewZipFileStream.Close();
            return true;
        }
        catch (Exception ex)
        {
            string sErr = ex.Message;
            return false;
        }
        finally
        {
            fNewZipFileStream = null;
            zos = null;
        }
    }
    else
    {
        return false;
    }
}
I think it's got to do with the byte stream handling. I've tried this bit of code that handles the stream, but it goes into an infinite loop:
while ((count = fStream.Read(buffer, 0, 1024)) != -1)
{
    zos.Write(buffer, 0, count);
}
fStream.Close();
I found a solution that is quite simple - I used the ReadAllBytes method of the static File class.
ZipEntry entry = new ZipEntry(arrFiles[i].Substring(arrFiles[i].LastIndexOf("/") + 1));
zos.PutNextEntry(entry);
byte[] fileContents = File.ReadAllBytes(arrFiles[i]);
zos.Write(fileContents);
zos.CloseEntry();
Using Read() on a FileStream returns the amount of bytes read into the stream or 0 if the end of the stream has been reached. It will never return a value of -1.
From MSDN:
The total number of bytes read into the buffer. This might be less than the number of bytes requested if that number of bytes are not currently available, or zero if the end of the stream is reached.
I'd modify your code to the following:
System.IO.FileStream fos = new System.IO.FileStream(sZipToDirectory + sZipFileName, FileMode.Create);
Java.Util.Zip.ZipOutputStream zos = new Java.Util.Zip.ZipOutputStream(fos);
byte[] buffer = new byte[1024];
for (int i = 0; i < arrFiles.Length; i++)
{
    FileInfo fi = new FileInfo(arrFiles[i]);
    Java.IO.FileInputStream fis = new Java.IO.FileInputStream(fi.FullName);
    ZipEntry entry = new ZipEntry(arrFiles[i].Substring(arrFiles[i].LastIndexOf("/") + 1));
    zos.PutNextEntry(entry);
    int count = 0;
    while ((count = fis.Read(buffer)) > 0)
    {
        zos.Write(buffer, 0, count);
    }
    fis.Close();
    zos.CloseEntry();
}
This is nearly identical to the code I've used for creating zip archives on Android in the past.
Are you allowed to use SharpZipLib? It's really easy to use.
Here is a blog post I wrote on extracting zip files:
private static void unzip(string url)
{
    WebClient wc = new WebClient();
    wc.DownloadFile(url, "temp.zip");
    // unzip
    ZipFile zf = null;
    try
    {
        zf = new ZipFile(File.OpenRead("temp.zip"));
        foreach (ZipEntry zipEntry in zf)
        {
            string fileName = zipEntry.Name;
            byte[] buffer = new byte[4096];
            Stream zipStream = zf.GetInputStream(zipEntry);
            using (FileStream streamWriter = File.Create(fileName))
            {
                StreamUtils.Copy(zipStream, streamWriter, buffer);
            }
        }
    }
    finally
    {
        if (zf != null)
        {
            zf.IsStreamOwner = true;
            zf.Close();
        }
    }
}
private void ZipFolder(string[] _files, string zipFileName)
{
    using var memoryStream = new MemoryStream();
    using (var archive = new ZipArchive(memoryStream, ZipArchiveMode.Create, true))
    {
        foreach (var item in _files)
        {
            var demoFile = archive.CreateEntry(Path.GetFileName(item));
            using var readStream = File.OpenRead(item);
            using (var entryStream = demoFile.Open())
            {
                // Copy the source file straight into the entry stream.
                readStream.CopyTo(entryStream);
            }
        }
    }
    using var fileStream = new FileStream(zipFileName, FileMode.Create);
    memoryStream.Seek(0, SeekOrigin.Begin);
    memoryStream.CopyTo(fileStream);
}

FileStream.WriteLine() is not writing to file

I am trying to make a simple piece of software which stores data in a TXT log file.
This is my code:
FileStream fs = null;
StreamWriter fw = null;
try
{
    fs = new FileStream(Environment.GetFolderPath(Environment.SpecialFolder.Desktop) + "/textme.txt", FileMode.OpenOrCreate, FileAccess.ReadWrite);
    fw = new StreamWriter(fs);
    fw.Write("sadadasdsadsadsadas");
    for (int i = 0; i < AnimalShelter.AnimalList.Count; i++)
    {
        fw.WriteLine("<chipNr>" + AnimalShelter.AnimalList[i].ChipRegistrationNumber + "<chipNr>");
        Console.WriteLine("<chipNr>" + AnimalShelter.AnimalList[i].ChipRegistrationNumber + "<chipNr>");
    }
}
catch (IOException)
{
    MessageBox.Show("ERROR THROWN");
}
finally
{
    if (fs != null) fs.Close();
    // if (fw != null) fw.Close();
}
What I achieved is: the file gets created, but nothing gets written in it.
I checked a lot of posts but I could not find any particular help.
Adding a call to Flush the stream works. This is because you are wrapping the FileStream: the StreamWriter buffers what you write, and you need to indicate when its buffer should be sent on to the underlying FileStream and the actual file. Also, you can exchange your try/finally for a using:
try
{
    using (var fs = new FileStream(Environment.GetFolderPath(Environment.SpecialFolder.Desktop) + "/textme.txt", FileMode.OpenOrCreate, FileAccess.ReadWrite))
    {
        using (var fw = new StreamWriter(fs))
        {
            fw.Write("sadadasdsadsadsadas");
            for (int i = 0; i < AnimalShelter.AnimalList.Count; i++)
            {
                fw.WriteLine("<chipNr>" + AnimalShelter.AnimalList[i].ChipRegistrationNumber + "<chipNr>");
                Console.WriteLine("<chipNr>" + AnimalShelter.AnimalList[i].ChipRegistrationNumber + "<chipNr>");
            }
            fw.Flush(); // Added
        }
    }
}
catch (IOException)
{
    MessageBox.Show("ERROR THROWN");
}
Enclose your StreamWriter in a using block to be sure that everything is correctly closed when you're done with the file; also, I don't think you need to create a FileStream for this to work.
try
{
    string fileName = Path.Combine(Environment.GetFolderPath(Environment.SpecialFolder.Desktop), "textme.txt");
    using (var fw = new StreamWriter(fileName, true))
    {
        ......
    }
}
catch (IOException)
{
    MessageBox.Show("ERROR THROWN");
}
Note that StreamWriter has a constructor that accepts two parameters: the name of the file to create/open, and a flag indicating whether the file should be opened in append mode or overwritten.
See the StreamWriter docs.
Always use using (as mentioned already) and you won't run into problems (or have to think about it)...
using (FileStream fs = new FileStream(Environment.GetFolderPath(Environment.SpecialFolder.Desktop) + "/textme.txt", FileMode.OpenOrCreate, FileAccess.ReadWrite))
using (StreamWriter fw = new StreamWriter(fs))
{
    fw.Write("sadadasdsadsadsadas");
}
(Also, you could have closed the writer instead of the FileStream, which should have worked.)
The problem, as far as I can tell, is this:
FileStream.Close is actually Stream.Close - it calls Dispose, but it isn't virtual, so it only does some general cleanup.
FileStream.Dispose - which is called implicitly when you use using - does a specific Flush and then Close/Dispose, so it performs the proper, type-specific cleanup.
You can avoid all of that via using, as that is the generally recommended pattern (and frankly it has never gotten me into any of these problems).
Indeed, Flush() is the answer; however, I would use File.WriteAllLines() instead.
try
{
    var fileName = Environment.GetFolderPath(Environment.SpecialFolder.Desktop) + "/textme.txt";
    var lines = AnimalShelter.AnimalList.Select(o => "<chipNr>" + o.ChipRegistrationNumber + "</chipNr>");
    File.WriteAllLines(fileName, lines);
    foreach (var line in lines)
        Console.WriteLine(line);
}
catch (IOException)
{
    MessageBox.Show("ERROR THROWN");
}
Try using this - just replace the array:
try
{
    using (Stream fs = new FileStream(Environment.GetFolderPath(Environment.SpecialFolder.Desktop) + "/textme.txt", FileMode.OpenOrCreate, FileAccess.ReadWrite))
    {
        using (StreamWriter sw = new StreamWriter(fs))
        {
            int[] test = new int[] { 0, 12, 23, 46 };
            sw.Write("sadadasdsadsadsadas");
            for (int i = 0; i < test.Length; i++)
            {
                sw.WriteLine("<chipNr>" + test[i] + "<chipNr>");
                Console.WriteLine("<chipNr>" + test[i] + "<chipNr>");
            }
            sw.Close();
        }
        fs.Close();
    }
}
catch (IOException)
{
    MessageBox.Show("ERROR THROWN");
}
