RAM doesn't return to normal value after downloading a file - c#

I made a server and a client (as training). They communicate perfectly. The only problem appears when I am uploading a file from my client to my server. While the server is downloading, my server's RAM goes up to around 860 MB (downloading a 299 MB file). I noticed that every time a client finishes uploading, my RAM doesn't go back to its normal value. Instead, each upload adds to the server's current RAM usage (after 2 clients have finished uploading, the server's RAM goes over 1300 MB).
Here is the code from my server where it receives the file:
private void DownloadFileFromClient(string path, string nameAndExtension, TcpClient client)
{
    string currFileName = "Download";
    string savingPath = path;
    if (File.Exists(savingPath)) // If the path already contains a name for the file
    {
        FileInfo fi = new FileInfo(savingPath);
        currFileName = fi.Name;
    }
    else if (Directory.Exists(savingPath)) // If the path points to a folder
    {
        int counter = 0; // counts up the duplicate files
        foreach (string file in Directory.GetFiles(savingPath))
        {
            FileInfo fi = new FileInfo(file);
            string[] name = fi.Name.Split('.');
            if (name[0].ToUpperInvariant() == currFileName.ToUpperInvariant()) // If a file with the same name already exists
            {
                counter++;
                currFileName = "download" + counter.ToString();
            }
        }
        savingPath = savingPath + "\\" + currFileName + "." + nameAndExtension;
    }
    using (NetworkStream stream = client.GetStream())
    {
        BinaryFormatter bf = new BinaryFormatter();
        object op = bf.Deserialize(stream); // Deserialize the object from the stream
        BinaryReader br = new BinaryReader(stream);
        byte[] buffer = br.ReadBytes(MaxDownloadBytes); // Maximum file size in bytes: 10MB = 10485760 bytes | 50MB = 52428800 bytes | 100MB = 104857600 bytes | 500MB = 524288000 bytes
        br.Dispose();
        br.Close();
        using (FileStream filestream = new FileStream(savingPath, FileMode.CreateNew, FileAccess.Write))
        {
            filestream.Write(buffer, 0, buffer.Length);
        }
    }
    LogMessage("Successfully downloaded file from client", client.GetHashCode().ToString(), "Manuel", "test", client);
}
I tried disposing everything, but it didn't help a lot. Thanks for any answer!

I didn't necessarily see anything wrong, but there could be some weirdness with all of the stream handling and reading you have going on that isn't really useful. Below is a tweak to what you have that removes the apparently unnecessary code and stream reading.
using (NetworkStream stream = client.GetStream())
{
    using (BinaryReader br = new BinaryReader(stream))
    {
        byte[] buffer = br.ReadBytes(MaxDownloadBytes);
        using (FileStream filestream = new FileStream(savingPath, FileMode.CreateNew, FileAccess.Write))
        {
            filestream.Write(buffer, 0, buffer.Length);
        }
    }
}
I would highly recommend doing this in chunks. That removes the need for the BinaryReader, and it would also make your download consume much less memory, because you aren't reading the entire file into memory before you write it. You can see an example of what using just the NetworkStream looks like here.
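As an illustration, a minimal sketch of such a chunked receive loop (not the linked example; it reuses savingPath and MaxDownloadBytes from the question and assumes the client closes its side of the connection when the upload is done):
using (NetworkStream stream = client.GetStream())
using (FileStream filestream = new FileStream(savingPath, FileMode.CreateNew, FileAccess.Write))
{
    byte[] buffer = new byte[81920]; // small reusable buffer instead of one huge array
    long totalBytes = 0;
    int bytesRead;
    // Read returns 0 once the client has closed its side of the connection
    while ((bytesRead = stream.Read(buffer, 0, buffer.Length)) > 0)
    {
        totalBytes += bytesRead;
        if (totalBytes > MaxDownloadBytes) // enforce the size limit without pre-allocating it
            throw new IOException("Upload exceeds the maximum allowed size.");
        filestream.Write(buffer, 0, bytesRead);
    }
}
Because only an 80 KB buffer is ever allocated, the server's memory use stays flat no matter how large the uploaded file is.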

I solved it... almost. I just added GC.Collect(); after the filestream gets closed.
(Code based on TyCobb's answer, thanks.)
using (NetworkStream stream = client.GetStream())
{
    using (BinaryReader br = new BinaryReader(stream))
    {
        byte[] buffer = br.ReadBytes(MaxDownloadBytes);
        using (FileStream filestream = new FileStream(savingPath, FileMode.CreateNew, FileAccess.Write))
        {
            filestream.Write(buffer, 0, buffer.Length);
        }
        GC.Collect();
    }
}
But somehow, after the 299 MB download, the RAM stays up at about 300 MB; it just doesn't grow any further when I download another file.

Related

How to Compress Large Files C#

I am using this method to compress files and it works great until I get to a file that is 2.4 GB; then it gives me an overflow error:
void CompressThis(string inFile, string compressedFileName)
{
    FileStream sourceFile = File.OpenRead(inFile);
    FileStream destinationFile = File.Create(compressedFileName);
    byte[] buffer = new byte[sourceFile.Length];
    sourceFile.Read(buffer, 0, buffer.Length);
    using (GZipStream output = new GZipStream(destinationFile,
        CompressionMode.Compress))
    {
        output.Write(buffer, 0, buffer.Length);
    }
    // Close the files.
    sourceFile.Close();
    destinationFile.Close();
}
What can I do to compress huge files?
You should not read the whole file into memory. Use Stream.CopyTo instead. This method reads the bytes from the current stream and writes them to another stream, using a specified buffer size (81920 bytes by default).
Also, you don't need to close Stream objects if you use the using keyword.
void CompressThis(string inFile, string compressedFileName)
{
    using (FileStream sourceFile = File.OpenRead(inFile))
    using (FileStream destinationFile = File.Create(compressedFileName))
    using (GZipStream output = new GZipStream(destinationFile, CompressionMode.Compress))
    {
        sourceFile.CopyTo(output);
    }
}
You can find a more complete example on Microsoft Docs (formerly MSDN).
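For the reverse direction, a decompression sketch following the same pattern (method and parameter names here are illustrative, not from the thread):
void DecompressThis(string compressedFileName, string outFile)
{
    using (FileStream sourceFile = File.OpenRead(compressedFileName))
    using (GZipStream input = new GZipStream(sourceFile, CompressionMode.Decompress))
    using (FileStream destinationFile = File.Create(outFile))
    {
        input.CopyTo(destinationFile); // streams decompressed bytes straight to disk
    }
}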
You're trying to allocate all of this in memory. That just isn't necessary; you can feed the input stream directly into the output stream.
An alternative solution for the zip format, without loading the whole file into memory:
using (var sourceFileStream = new FileStream(this.GetFilePath(sourceFileName), FileMode.Open))
{
    using (var destinationStream =
        new FileStream(this.GetFilePath(zipFileName), FileMode.Create, FileAccess.ReadWrite))
    {
        using (var archive = new ZipArchive(destinationStream, ZipArchiveMode.Create, true))
        {
            var file = archive.CreateEntry(sourceFileName, CompressionLevel.Optimal);
            using (var entryStream = file.Open())
            {
                // CopyTo is not awaitable; use CopyToAsync (inside an async method)
                await sourceFileStream.CopyToAsync(entryStream);
            }
        }
    }
}
This solution writes directly from the input stream to the output stream.

GZipStream complains magic number in header is not correct

I'm attempting to use National Weather Service (U.S.) data, but something has changed recently and the GZip file no longer opens.
.NET 4.5 complains that...
Message=The magic number in GZip header is not correct. Make sure you are passing in a GZip stream.
Source=System
StackTrace:
    at System.IO.Compression.GZipDecoder.ReadHeader(InputBuffer input)
    at System.IO.Compression.Inflater.Decode()
    at System.IO.Compression.Inflater.Inflate(Byte[] bytes, Int32 offset, Int32 length)
    at System.IO.Compression.DeflateStream.Read(Byte[] array, Int32 offset, Int32 count)
I don't understand what has changed, but this is becoming a real show-stopper. Can anyone with GZip format experience tell me what has changed to make this stop working?
A file that works:
http://www.srh.noaa.gov/ridge2/Precip/qpehourlyshape/2015/201504/20150404/nws_precip_2015040420.tar.gz
A file that doesn't work:
http://www.srh.noaa.gov/ridge2/Precip/qpehourlyshape/2015/201505/20150505/nws_precip_2015050505.tar.gz
Update with sample code
const string url = "http://www.srh.noaa.gov/ridge2/Precip/qpehourlyshape/2015/201505/20150505/nws_precip_2015050505.tar.gz";
string appPath = Path.GetDirectoryName(Assembly.GetExecutingAssembly().Location);
string downloadPath = Path.Combine(appPath, Path.GetDirectoryName(Assembly.GetExecutingAssembly().Location), "nws_precip_2015050505.tar.gz");
using (var wc = new WebClient())
{
    wc.DownloadFile(url, downloadPath);
}
string extractDirPath = Path.Combine(appPath, "Extracted");
if (!Directory.Exists(extractDirPath))
{
    Directory.CreateDirectory(extractDirPath);
}
string extractFilePath = Path.Combine(extractDirPath, "nws_precip_2015050505.tar");
using (var fsIn = new FileStream(downloadPath, FileMode.Open, FileAccess.Read))
using (var fsOut = new FileStream(extractFilePath, FileMode.Create, FileAccess.Write))
using (var gz = new GZipStream(fsIn, CompressionMode.Decompress, true))
{
    gz.CopyTo(fsOut);
}
It appears that this service SOMETIMES returns tar files disguised as .tar.gz. This is very confusing, but you can detect whether the file really is a GZip by checking its magic number manually: the first two bytes must be 0x1F and 0x8B.
using (FileStream fs = new FileStream(downloadPath, FileMode.Open, FileAccess.Read))
{
    byte[] buffer = new byte[2];
    fs.Read(buffer, 0, buffer.Length);
    if (buffer[0] == 0x1F
        && buffer[1] == 0x8B)
    {
        // It's probably a GZip file
    }
    else
    {
        // It's probably not a GZip file
    }
}
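Putting the two pieces together, one possible sketch (my combination of the snippets above, reusing downloadPath and extractFilePath from the sample code) that decompresses when the magic number matches and otherwise treats the download as an already-unpacked tar:
bool isGzip;
using (FileStream fs = new FileStream(downloadPath, FileMode.Open, FileAccess.Read))
{
    // GZip magic number: 0x1F 0x8B
    isGzip = fs.ReadByte() == 0x1F && fs.ReadByte() == 0x8B;
}

if (isGzip)
{
    using (var fsIn = new FileStream(downloadPath, FileMode.Open, FileAccess.Read))
    using (var fsOut = new FileStream(extractFilePath, FileMode.Create, FileAccess.Write))
    using (var gz = new GZipStream(fsIn, CompressionMode.Decompress))
    {
        gz.CopyTo(fsOut);
    }
}
else
{
    // The server already sent a bare tar; just copy it to the target path
    File.Copy(downloadPath, extractFilePath, true);
}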
[Resolved] GZipStream complains magic number in header is not correct
//Exception: magic number in tar.gz file
Typical causes of the error:
1. The file was not compressed into tar.gz properly.
2. The file size is too big, above 1+ GB.
Ways to get around it:
use .NET Framework 4.5.1 to get past this exception, //OR//
work around it in the existing solution without changing the .NET Framework.
Please follow these steps for the workaround:
1. Rename abc.tar.gz to abc (remove the extension).
2. Pass this file and the directory name directly to the compress function:
public static void Compress(DirectoryInfo directorySelected, string directoryPath)
{
    foreach (FileInfo fileToCompress in directorySelected.GetFiles())
    {
        using (FileStream originalFileStream = fileToCompress.OpenRead())
        {
            if ((File.GetAttributes(fileToCompress.FullName) &
                FileAttributes.Hidden) != FileAttributes.Hidden & fileToCompress.Extension != ".tar.gz")
            {
                using (FileStream compressedFileStream = File.Create(fileToCompress.FullName + ".tar.gz"))
                {
                    using (System.IO.Compression.GZipStream compressionStream = new System.IO.Compression.GZipStream(compressedFileStream,
                        System.IO.Compression.CompressionMode.Compress))
                    {
                        originalFileStream.CopyTo(compressionStream);
                    }
                }
                FileInfo info = new FileInfo(directoryPath + "\\" + fileToCompress.Name + ".tar.gz");
            }
        }
    }
}
3. Use this code inside the following try/catch exception handler:
string TarGzFilePath = @"c:\temp\abc.tar.gz";
FileStream streams = null;
GZipInputStream tarGz = null;
TarInputStream tar = null;
try
{
    streams = File.OpenRead(TarGzFilePath);
    string FileName = string.Empty;
    tarGz = new GZipInputStream(streams);
    tar = new TarInputStream(tarGz);
    // the exception will occur in the lines below, so wrap them in a try/catch
    TarEntry ze;
    try
    {
        ze = tar.GetNextEntry(); // the "magic number" exception occurs here
    }
    catch (Exception extra)
    {
        tar.Close();
        tarGz.Close();
        streams.Close();
        // close all of the above first, otherwise the next step fails with "the file is being used by another process"
        // rename your file (for better accuracy you can copy the file to another location)
        File.Move(@"c:\temp\abc.tar.gz", @"c:\temp\abc"); // rename the file
        DirectoryInfo directorySelected = new DirectoryInfo(Path.GetDirectoryName(@"c:\temp\abc"));
        Compress(directorySelected, directoryPath); // directorySelected = c:\temp\abc, directoryPath = c:\temp\abc.tar.gz -- the function from step 2
        streams = File.OpenRead(TarGzFilePath);
        tarGz = new GZipInputStream(streams);
        tar = new TarInputStream(tarGz);
        ze = tar.GetNextEntry();
    }
    // continue with your extraction code here
}
catch (Exception ex)
{
    tar.Close();
    tarGz.Close();
    streams.Close();
}

Upload large file via Webservice

My company runs an application that has to archive many kinds of files onto some distant servers. The application works well, but it can't handle files larger than 1 GB.
Here is the current function used to load the files to be uploaded to the distant server:
FileStream fs = File.OpenRead(fileToUploadPath);
byte[] fileArray = new byte[fs.Length];
fs.Read(fileArray, 0, (int)fs.Length);
The byte array (when loaded successfully) is then split into 100 MB byte arrays and sent to the local server (using some WSDL web services) with the following function:
localServerWebService.SendData(subFileArray, filename);
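For reference, a rough sketch of reading and sending the file chunk by chunk, so the full file is never in memory at once (this assumes, as the current 100 MB split implies, that SendData appends each chunk server-side):
const int ChunkSize = 100 * 1024 * 1024; // 100 MB, matching the current split size
using (FileStream fs = File.OpenRead(fileToUploadPath))
{
    byte[] chunk = new byte[ChunkSize];
    int read;
    while ((read = fs.Read(chunk, 0, chunk.Length)) > 0)
    {
        byte[] subFileArray = chunk;
        if (read < ChunkSize)
        {
            // last chunk: trim the array to the bytes actually read
            subFileArray = new byte[read];
            Array.Copy(chunk, subFileArray, read);
        }
        localServerWebService.SendData(subFileArray, filename);
    }
}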
I changed the function responsible for reading the file so that it uses BufferedStream, and I also wanted to improve the web service part so that it doesn't have to create a new stream on each call. I thought of something like this:
FileInfo source = new FileInfo(fileName);
using (FileStream reader = File.OpenRead(fileName))
{
    using (FileStream distantWriter = localServerWebService.CreateWriteFileStream(fileName))
    {
        using (BufferedStream buffReader = new BufferedStream(reader))
        {
            using (BufferedStream buffWriter = new BufferedStream(distantWriter))
            {
                byte[] buffer = new byte[BUFFER_SIZE];
                int bytesRead = 0;
                long bytesToRead = source.Length;
                while (bytesToRead > 0)
                {
                    int nbBytesRead = buffReader.Read(buffer, 0, buffer.Length);
                    buffWriter.Write(buffer, 0, nbBytesRead);
                    bytesRead += nbBytesRead;
                    bytesToRead -= nbBytesRead;
                }
            }
        }
    }
}
But this code doesn't compile, and it always gives me the error Cannot convert MyNameSpace.FileStream into System.IO.FileStream at the line using (FileStream distantWriter = localServerWebService.CreateWriteFileStream(fileName)). I can't cast MyNameSpace.FileStream into System.IO.FileStream either.
The web service method :
[WebMethod]
public FileStream CreateWriteFileStream(String fileName)
{
    String RepVaultUP =
        System.Configuration.ConfigurationSettings.AppSettings.Get("SAS_Upload");
    String desFile = Path.Combine(RepVaultUP, fileName);
    return File.Open(desFile, FileMode.Create, FileAccess.Write);
}
So can you guys please explain to me why this is not working?
P.S.: English is not my mother tongue, so I hope what I wrote is clearly understandable.

Amazon S3 Save Response Stream

I am trying to load a .gz file out of a bucket.
Connection and authentication work fine, and I even do get a file, but the problem is that the file is a lot bigger than it should be. Its original size in the bucket is 155 MB, but when it lands on my hard disk it comes to about 288 MB.
Here is the function code:
public bool SaveBucketToFile(string Filename)
{
    // Write the response into a file
    using (StreamReader StRead = new StreamReader(_ObjResponse.ResponseStream))
    {
        string TempFile = Path.GetTempFileName();
        StreamWriter StWrite = new StreamWriter(TempFile, false);
        StWrite.Write(StRead.ReadToEnd());
        StWrite.Close();
        StRead.Close();
        // Move to real destination
        if (File.Exists(Filename))
        {
            File.Delete(Filename);
        }
        File.Move(TempFile, Filename);
    }
    return true;
}
The download and the filling of _ObjResponse are done with the AmazonS3 client from their SDK. I am using a proxy, but the same code on a different machine without a proxy brings back the same result.
Any hints on what to do here? The object request is simple:
_ObjRequest = new GetObjectRequest
{
    BucketName = BucketName,
    Key = Key
};
Glad for any help...

For everyone who stumbles upon this: I needed to first save the stream via a BufferedStream into a MemoryStream. The code looks like this:
MemoryStream MemStream = new MemoryStream();
BufferedStream Stream2 = new BufferedStream(_ObjResponse.ResponseStream);
byte[] Buffer = new byte[0x2000];
int Count;
while ((Count = Stream2.Read(Buffer, 0, Buffer.Length)) > 0)
{
    MemStream.Write(Buffer, 0, Count);
}
// Get a temp file path
string TempFile = Path.GetTempFileName();
// Open a stream to the temp file
FileStream Newfile = new FileStream(TempFile, FileMode.Create);
// Rewind the memory stream back to position 0
MemStream.Position = 0;
// Save into the temp file
MemStream.CopyTo(Newfile);
Newfile.Close();
// Check the final destination and move the temp file there
if (File.Exists(Filename))
{
    File.Delete(Filename);
}
File.Move(TempFile, Filename);
I found this somewhere here:
http://www.codeproject.com/Articles/186132/Beginning-with-Amazon-S under the caption "Get a file from Amazon S3".
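A likely explanation for the size difference, as a side note: the original version read the binary .gz response through a StreamReader and wrote it back out with a StreamWriter, which pushes the bytes through a text encoding and corrupts/inflates them. Copying the response stream as raw bytes is the key part; a minimal sketch (reusing _ObjResponse and Filename from the question):
using (Stream responseStream = _ObjResponse.ResponseStream)
using (FileStream fileStream = new FileStream(Filename, FileMode.Create, FileAccess.Write))
{
    // Raw byte copy: no text decoding, so the bytes arrive on disk unchanged
    responseStream.CopyTo(fileStream);
}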

System.OutofMemoryException throw while doing GZipStream Compression

I am working in WinForms and getting errors while doing the following operation.
It shows me a System.OutOfMemoryException when I try to run the operation around 2-3 times continuously. It seems .NET is not able to free the resources used in the operation. The file I am using for the operation is quite big, more than 500 MB.
My sample code is below. Please help me figure out how to resolve the error.
try
{
    using (FileStream target = new FileStream(strCompressedFileName, FileMode.Create, FileAccess.Write))
    using (GZipStream alg = new GZipStream(target, CompressionMode.Compress))
    {
        byte[] data = File.ReadAllBytes(strFileToBeCompressed);
        alg.Write(data, 0, data.Length);
        alg.Flush();
        data = null;
    }
}
catch (Exception ex)
{
    MessageBox.Show(ex.ToString());
}
Replace ReadAllBytes with Stream.CopyTo:
using (FileStream target = new FileStream(strCompressedFileName, FileMode.Create, FileAccess.Write))
using (GZipStream alg = new GZipStream(target, CompressionMode.Compress))
{
    using (var fileToRead = File.Open(strFileToBeCompressed, FileMode.Open, FileAccess.Read))
    {
        fileToRead.CopyTo(alg);
    }
}
A very rough example could be:
// destFile - FileStream for the destination file
// sourceFile - FileStream for the source file
using (GZipStream gz = new GZipStream(destFile, CompressionMode.Compress))
{
    byte[] src = new byte[1024];
    int count = sourceFile.Read(src, 0, 1024);
    while (count != 0)
    {
        gz.Write(src, 0, count);
        count = sourceFile.Read(src, 0, 1024);
    }
}
// flush, close, dispose ..
So basically I changed your ReadAllBytes call to read only chunks of 1024 bytes at a time.
You can also try using this method to compress a file (MSDN link):
public static void Compress(FileInfo fileToCompress)
{
    using (FileStream originalFileStream = fileToCompress.OpenRead())
    {
        using (FileStream compressedFileStream = File.Create(fileToCompress.FullName + ".gz"))
        {
            using (GZipStream compressionStream = new GZipStream(compressedFileStream, CompressionMode.Compress))
            {
                originalFileStream.CopyTo(compressionStream);
            }
        }
    }
}
Usage:
string directoryPath = @"c:\users\public\reports";
DirectoryInfo directorySelected = new DirectoryInfo(directoryPath);
foreach (FileInfo fileToCompress in directorySelected.GetFiles())
{
    Compress(fileToCompress);
}
