SharpCompress multi-RAR extract progress - C#

I am building an app to extract from tar and RAR archives. I can report progress for the tar based on the number of RAR files it contains and as each one is extracted. Within the RARs there is one file spanning several volumes. I have used the code from the unit test examples:
var streams = testArchives.Select(s => Path.Combine(SCRATCH2_FILES_PATH, s)).Select(File.OpenRead).ToList();
using (var reader = RarReader.Open(streams))
{
    while (reader.MoveToNextEntry())
    {
        reader.WriteEntryToDirectory(SCRATCH_FILES_PATH, new ExtractionOptions()
        {
            ExtractFullPath = true,
            Overwrite = true
        });
    }
}
The problem is that no progress is reported until the current entry has finished extracting.

Don't use WriteEntryToDirectory to save the files, because it doesn't expose a progress callback. Instead, open a FileStream yourself, get the full size of the uncompressed file, and report progress as you write each slice.
Here is a simple example:
thread = new Thread(
    new ThreadStart(() =>
    {
        using (Archive = RarArchive.Open(streams, new ReaderOptions() { Password = password, LookForHeader = true }))
        {
            Archive.EntryExtractionBegin += EntryExtractionBeginEvent;
            Archive.CompressedBytesRead += CompressedBytesReadEvent;
            FilesTotalCount = Archive.Entries.Count();
            TotalSize = Archive.TotalSize;
            foreach (IArchiveEntry ArchiveEntry in Archive.Entries.Where(entry => !entry.IsDirectory))
            {
                Directory.CreateDirectory(Path.GetDirectoryName(path + "\\" + ArchiveEntry.Key));
                using (Stream archiveStream = ArchiveEntry.OpenEntryStream())
                using (FileStream fileStream = new FileStream(path + "\\" + ArchiveEntry.Key, FileMode.Create, FileAccess.ReadWrite, FileShare.ReadWrite))
                {
                    int byteSizes = 0;
                    byte[] buffer = new byte[bufferLength];
                    while (ThreadState == ThreadState.Running && (byteSizes = archiveStream.Read(buffer, 0, buffer.Length)) > 0)
                        fileStream.Write(buffer, 0, byteSizes);
                }
            }
        }
        IO.CloseStreams(streams);
    }
));
thread.Start();
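The loop above copies in slices but never raises a progress value itself. Below is a minimal sketch of how a per-entry percentage could be computed inside that read loop, assuming ArchiveEntry.Size holds the uncompressed size and ReportProgress is a hypothetical callback of your own:
using (Stream archiveStream = ArchiveEntry.OpenEntryStream())
using (FileStream fileStream = File.Create(Path.Combine(path, ArchiveEntry.Key)))
{
    long totalWritten = 0;
    long entrySize = ArchiveEntry.Size; // uncompressed size from the archive header
    byte[] buffer = new byte[81920];
    int read;
    while ((read = archiveStream.Read(buffer, 0, buffer.Length)) > 0)
    {
        fileStream.Write(buffer, 0, read);
        totalWritten += read;
        // Per-entry percentage; guard against a zero-length entry.
        double percent = entrySize > 0 ? totalWritten * 100.0 / entrySize : 100.0;
        ReportProgress(percent); // hypothetical: raise an event or marshal to the UI here
    }
}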


How to add an existing file (image/mp4/pdf) to a ZipArchive? - .NET Framework 4.6.2

I am trying to implement a "Download All" button that will zip up a selection of files from the server and return them as a zip file download. With the code below, the zip file is created and the expected files are inside with the expected filenames, but the contents of the zipped files appear to be corrupted.
public ActionResult DownloadAll(Guid id)
{
    var assets = db.InviteAssets.Include(i => i.AssetPages).Where(w => w.InviteID == id).ToList();
    var cd = new System.Net.Mime.ContentDisposition
    {
        // for example foo.bak
        FileName = "allAssets.zip",
        // always prompt the user for downloading, set to true if you want
        // the browser to try to show the file inline
        Inline = false,
    };
    Response.AppendHeader("Content-Disposition", cd.ToString());
    using (var memoryStream = new MemoryStream())
    {
        using (var archive = new ZipArchive(memoryStream, ZipArchiveMode.Create, true))
        {
            foreach (var asset in assets)
            {
                string path, extension, name;
                if (asset.AssetType != AssetType.PDF)
                {
                    path = asset.AssetPages.First(f => f.PageNumber == 1).FilePath;
                }
                else
                {
                    path = string.Format("/Content/Assets/asset_{0}.pdf", asset.ID);
                }
                extension = path.Substring(path.IndexOf('.'));
                name = "asset" + asset.Order + extension;
                var file = archive.CreateEntry(name);
                using (var streamWriter = new StreamWriter(file.Open()))
                {
                    using (var fileStream = System.IO.File.Open(Server.MapPath("~" + path), FileMode.Open))
                    {
                        int filelength = (int)fileStream.Length;
                        var filedata = new byte[fileStream.Length];
                        streamWriter.Write(fileStream.Read(filedata, 0, filelength));
                    }
                }
            }
        }
        return File(memoryStream.ToArray(), "application/json", "allAssets.zip");
    }
}
I'm thinking my issue is therefore with this section:
using (var streamWriter = new StreamWriter(file.Open()))
{
    using (var fileStream = System.IO.File.Open(Server.MapPath("~" + path), FileMode.Open))
    {
        int filelength = (int)fileStream.Length;
        var filedata = new byte[fileStream.Length];
        streamWriter.Write(fileStream.Read(filedata, 0, filelength));
    }
}
I keep reading examples that use a method archive.CreateEntryFromFile(filePath, fileName), but no such method is recognised. Has it been deprecated, or does it require a higher version of .NET Framework?
Thanks in advance.
The problem is here:
streamWriter.Write(fileStream.Read(filedata, 0, filelength));
You're reading the file contents into filedata, but at the same time you're writing the return value of Read into the archive, which is a single int. You need to read and write separately:
fileStream.Read(filedata, 0, filelength);
streamWriter.BaseStream.Write(filedata, 0, filelength);
(StreamWriter is meant for text; writing the raw bytes through BaseStream, or dropping the StreamWriter and using the entry stream directly, avoids text encoding mangling the data.)
Or you can use the CreateEntryFromFile extension method from the ZipFileExtensions class (System.IO.Compression namespace, System.IO.Compression.FileSystem assembly).
I discovered that the reason I couldn't see the CreateEntryFromFile method was that I had not added a reference to System.IO.Compression.FileSystem. Once I added that, CreateEntryFromFile worked fine.
So now I have: archive.CreateEntryFromFile(Server.MapPath("~" + path), name);
Instead of:
var file = archive.CreateEntry(name);
using (var streamWriter = new StreamWriter(file.Open()))
{
    using (var fileStream = System.IO.File.Open(Server.MapPath("~" + path), FileMode.Open))
    {
        int filelength = (int)fileStream.Length;
        var filedata = new byte[fileStream.Length];
        fileStream.Read(filedata, 0, filelength);
        streamWriter.Write(filedata);
    }
}
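For reference, if you do copy manually, you can skip the StreamWriter entirely and copy raw bytes stream-to-stream. A minimal sketch using the same paths and names as above:
var file = archive.CreateEntry(name);
// Open the entry stream directly; StreamWriter is for text and can corrupt binary data.
using (var entryStream = file.Open())
using (var fileStream = System.IO.File.OpenRead(Server.MapPath("~" + path)))
{
    fileStream.CopyTo(entryStream); // copies all bytes; available since .NET 4.0
}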

Upload file chunks to SPS 2013 - Method "StartUpload" does not exist at line

I am trying to upload a large file (1 GB) from code to SharePoint 2013 on-premises. I followed this tutorial, downloaded the "Microsoft.SharePointOnline.CSOM" package from NuGet, and tried this piece of code:
public Microsoft.SharePoint.Client.File UploadFileSlicePerSlice(ClientContext ctx, string libraryName, string fileName, int fileChunkSizeInMB = 3)
{
    // Each sliced upload requires a unique ID.
    Guid uploadId = Guid.NewGuid();
    // Get the name of the file.
    string uniqueFileName = Path.GetFileName(fileName);
    // Ensure that the target library exists, and create it if it is missing.
    if (!LibraryExists(ctx, ctx.Web, libraryName))
    {
        CreateLibrary(ctx, ctx.Web, libraryName);
    }
    // Get the folder to upload into.
    List docs = ctx.Web.Lists.GetByTitle(libraryName);
    ctx.Load(docs, l => l.RootFolder);
    // Get the information about the folder that will hold the file.
    ctx.Load(docs.RootFolder, f => f.ServerRelativeUrl);
    ctx.ExecuteQuery();
    // File object.
    Microsoft.SharePoint.Client.File uploadFile;
    // Calculate block size in bytes.
    int blockSize = fileChunkSizeInMB * 1024 * 1024;
    // Get the size of the file.
    long fileSize = new FileInfo(fileName).Length;
    if (fileSize <= blockSize)
    {
        // Use the regular approach.
        using (FileStream fs = new FileStream(fileName, FileMode.Open))
        {
            FileCreationInformation fileInfo = new FileCreationInformation();
            fileInfo.ContentStream = fs;
            fileInfo.Url = uniqueFileName;
            fileInfo.Overwrite = true;
            uploadFile = docs.RootFolder.Files.Add(fileInfo);
            ctx.Load(uploadFile);
            ctx.ExecuteQuery();
            // Return the file object for the uploaded file.
            return uploadFile;
        }
    }
    else
    {
        // Use the large-file upload approach.
        ClientResult<long> bytesUploaded = null;
        FileStream fs = null;
        try
        {
            fs = System.IO.File.Open(fileName, FileMode.Open, FileAccess.Read, FileShare.ReadWrite);
            using (BinaryReader br = new BinaryReader(fs))
            {
                byte[] buffer = new byte[blockSize];
                byte[] lastBuffer = null;
                long fileoffset = 0;
                long totalBytesRead = 0;
                int bytesRead;
                bool first = true;
                bool last = false;
                // Read data from the file system in blocks.
                while ((bytesRead = br.Read(buffer, 0, buffer.Length)) > 0)
                {
                    totalBytesRead = totalBytesRead + bytesRead;
                    // You've reached the end of the file.
                    if (totalBytesRead == fileSize)
                    {
                        last = true;
                        // Copy to a new buffer that has the correct size.
                        lastBuffer = new byte[bytesRead];
                        Array.Copy(buffer, 0, lastBuffer, 0, bytesRead);
                    }
                    if (first)
                    {
                        using (MemoryStream contentStream = new MemoryStream())
                        {
                            // Add an empty file.
                            FileCreationInformation fileInfo = new FileCreationInformation();
                            fileInfo.ContentStream = contentStream;
                            fileInfo.Url = uniqueFileName;
                            fileInfo.Overwrite = true;
                            uploadFile = docs.RootFolder.Files.Add(fileInfo);
                            // Start the upload by uploading the first slice.
                            using (MemoryStream s = new MemoryStream(buffer))
                            {
                                // Call the StartUpload method on the first slice.
                                bytesUploaded = uploadFile.StartUpload(uploadId, s);
                                ctx.ExecuteQuery(); // <------ here the exception is thrown
                                // fileoffset is the pointer where the next slice will be added.
                                fileoffset = bytesUploaded.Value;
                            }
                            // You can only start the upload once.
                            first = false;
                        }
                    }
                    else
                    {
                        // Get a reference to your file.
                        uploadFile = ctx.Web.GetFileByServerRelativeUrl(docs.RootFolder.ServerRelativeUrl + System.IO.Path.AltDirectorySeparatorChar + uniqueFileName);
                        if (last)
                        {
                            // This is the last slice of data.
                            using (MemoryStream s = new MemoryStream(lastBuffer))
                            {
                                // End the sliced upload by calling FinishUpload.
                                uploadFile = uploadFile.FinishUpload(uploadId, fileoffset, s);
                                ctx.ExecuteQuery();
                                // Return the file object for the uploaded file.
                                return uploadFile;
                            }
                        }
                        else
                        {
                            using (MemoryStream s = new MemoryStream(buffer))
                            {
                                // Continue the sliced upload.
                                bytesUploaded = uploadFile.ContinueUpload(uploadId, fileoffset, s);
                                ctx.ExecuteQuery();
                                // Update fileoffset for the next slice.
                                fileoffset = bytesUploaded.Value;
                            }
                        }
                    }
                }
            }
        }
        finally
        {
            if (fs != null)
            {
                fs.Dispose();
            }
        }
    }
    return null;
}
But I'm getting a runtime ServerException with the message: Method "StartUpload" does not exist, at the line "ctx.ExecuteQuery();" (<-- I marked this line in the code).
I also tried the SharePoint 2013 package, and the StartUpload method isn't supported in that package either.
UPDATE:
Adam's code worked for ~1 GB files. It turns out that in web.config, at C:\inetpub\wwwroot\wss\VirtualDirectories\{myport}\web.config, the value in <requestLimits maxAllowedContentLength="2000000000"/> is in bytes, not kilobytes as I thought at the beginning, so I changed it to 2000000000 and it worked.
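For context, that attribute lives under IIS request filtering; a minimal sketch of the relevant web.config section with the value above:
<system.webServer>
  <security>
    <requestFiltering>
      <!-- maxAllowedContentLength is in bytes -->
      <requestLimits maxAllowedContentLength="2000000000" />
    </requestFiltering>
  </security>
</system.webServer>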
Here is a method to upload a 1 GB file to SharePoint 2013 using CSOM that works (tested and developed over a couple of days of trying different approaches):
try
{
    Console.WriteLine("start " + DateTime.Now.ToLongDateString() + " " + DateTime.Now.ToLongTimeString());
    using (ClientContext context = new ClientContext("[URL]"))
    {
        context.Credentials = new NetworkCredential("[LOGIN]", "[PASSWORD]", "[DOMAIN]");
        context.RequestTimeout = -1;
        Web web = context.Web;
        if (context.HasPendingRequest)
            context.ExecuteQuery();
        byte[] fileBytes;
        using (var fs = new FileStream(@"D:\OneGB.rar", FileMode.Open, FileAccess.Read))
        {
            fileBytes = new byte[fs.Length];
            int bytesRead = fs.Read(fileBytes, 0, fileBytes.Length);
        }
        using (var fileStream = new System.IO.MemoryStream(fileBytes))
        {
            Microsoft.SharePoint.Client.File.SaveBinaryDirect(context, "/Shared Documents/" + "OneGB.rar", fileStream, true);
        }
    }
    Console.WriteLine("end " + DateTime.Now.ToLongDateString() + " " + DateTime.Now.ToLongTimeString());
}
catch (Exception ex)
{
    Console.WriteLine("error -> " + ex.Message);
}
finally
{
    Console.ReadLine();
}
Besides this I had to:
extend the max file upload size in Central Administration for this web application,
set 'Web Page Security Validation' for this web application to Never in Central Administration (the linked post shows a screenshot of how to set it),
and extend the timeout on IIS.
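One note on the upload snippet above: it first reads the whole 1 GB file into a byte array. SaveBinaryDirect accepts any Stream, so a variant that passes the FileStream directly avoids holding the file in memory; a minimal sketch with the same placeholder paths:
// Stream the file straight from disk instead of buffering it in memory first.
using (var fs = new FileStream(@"D:\OneGB.rar", FileMode.Open, FileAccess.Read))
{
    Microsoft.SharePoint.Client.File.SaveBinaryDirect(context, "/Shared Documents/OneGB.rar", fs, true);
}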
Install the SharePoint Online CSOM library using the command below.
Install-Package Microsoft.SharePointOnline.CSOM -Version 16.1.8924.1200
Then use the code below to upload the large file.
int blockSize = 8000000; // 8 MB
string fileName = "C:\\temp\\6GBTest.odt", uniqueFileName = String.Empty;
long fileSize;
Microsoft.SharePoint.Client.File uploadFile = null;
Guid uploadId = Guid.NewGuid();
using (ClientContext ctx = new ClientContext("siteUrl"))
{
    ctx.Credentials = new SharePointOnlineCredentials("user@tenant.onmicrosoft.com", GetSecurePassword());
    List docs = ctx.Web.Lists.GetByTitle("Documents");
    ctx.Load(docs.RootFolder, p => p.ServerRelativeUrl);
    // Use the large-file upload approach.
    ClientResult<long> bytesUploaded = null;
    FileStream fs = null;
    try
    {
        fs = System.IO.File.Open(fileName, FileMode.Open, FileAccess.Read, FileShare.ReadWrite);
        fileSize = fs.Length;
        uniqueFileName = System.IO.Path.GetFileName(fs.Name);
        using (BinaryReader br = new BinaryReader(fs))
        {
            byte[] buffer = new byte[blockSize];
            byte[] lastBuffer = null;
            long fileoffset = 0;
            long totalBytesRead = 0;
            int bytesRead;
            bool first = true;
            bool last = false;
            // Read data from the filesystem in blocks.
            while ((bytesRead = br.Read(buffer, 0, buffer.Length)) > 0)
            {
                totalBytesRead = totalBytesRead + bytesRead;
                // We've reached the end of the file.
                if (totalBytesRead == fileSize)
                {
                    last = true;
                    // Copy to a new buffer that has the correct size.
                    lastBuffer = new byte[bytesRead];
                    Array.Copy(buffer, 0, lastBuffer, 0, bytesRead);
                }
                if (first)
                {
                    using (MemoryStream contentStream = new MemoryStream())
                    {
                        // Add an empty file.
                        FileCreationInformation fileInfo = new FileCreationInformation();
                        fileInfo.ContentStream = contentStream;
                        fileInfo.Url = uniqueFileName;
                        fileInfo.Overwrite = true;
                        uploadFile = docs.RootFolder.Files.Add(fileInfo);
                        // Start the upload by uploading the first slice.
                        using (MemoryStream s = new MemoryStream(buffer))
                        {
                            // Call the StartUpload method on the first slice.
                            bytesUploaded = uploadFile.StartUpload(uploadId, s);
                            ctx.ExecuteQuery();
                            // fileoffset is the pointer where the next slice will be added.
                            fileoffset = bytesUploaded.Value;
                        }
                        // We can only start the upload once.
                        first = false;
                    }
                }
                else
                {
                    // Get a reference to our file.
                    uploadFile = ctx.Web.GetFileByServerRelativeUrl(docs.RootFolder.ServerRelativeUrl + System.IO.Path.AltDirectorySeparatorChar + uniqueFileName);
                    if (last)
                    {
                        // This is the last slice of data; end the sliced upload by calling FinishUpload.
                        using (MemoryStream s = new MemoryStream(lastBuffer))
                        {
                            uploadFile = uploadFile.FinishUpload(uploadId, fileoffset, s);
                            ctx.ExecuteQuery();
                            // Return the file object for the uploaded file.
                            return uploadFile;
                        }
                    }
                    else
                    {
                        using (MemoryStream s = new MemoryStream(buffer))
                        {
                            // Continue the sliced upload.
                            bytesUploaded = uploadFile.ContinueUpload(uploadId, fileoffset, s);
                            ctx.ExecuteQuery();
                            // Update fileoffset for the next slice.
                            fileoffset = bytesUploaded.Value;
                        }
                    }
                }
            }
        }
    }
    finally
    {
        if (fs != null)
        {
            fs.Dispose();
        }
    }
}
Or download the example code from GitHub: Large file upload with CSOM.
I'm looking for a way to upload a 1 GB file to SharePoint 2013.
You can change the upload limit with the PowerShell below:
$a = [Microsoft.SharePoint.Administration.SPWebService]::ContentService
$a.ClientRequestServiceSettings.MaxReceivedMessageSize = 209715200
$a.Update()
References:
https://thuansoldier.net/4328/
https://blogs.msdn.microsoft.com/sridhara/2010/03/12/uploading-files-using-client-object-model-in-sharepoint-2010/
https://social.msdn.microsoft.com/Forums/en-US/09a41ba4-feda-4cf3-aa29-704cd92b9320/csom-microsoftsharepointclientserverexception-method-8220startupload8221-does-not-exist?forum=sharepointdevelopment
Update:
The SharePoint CSOM request size is very limited; it cannot exceed 2 MB, and you cannot change this setting in an Office 365 environment. If you have to upload bigger files you have to use the REST API. Here is the MSDN reference: https://msdn.microsoft.com/en-us/library/office/dn292553.aspx
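A minimal sketch of that REST upload for on-premises, assuming NTLM credentials and the endpoint names from the MSDN reference above (the digest parsing here is deliberately crude; a real app would use a JSON parser):
var handler = new HttpClientHandler { Credentials = new NetworkCredential("[LOGIN]", "[PASSWORD]", "[DOMAIN]") };
using (var client = new HttpClient(handler))
{
    client.DefaultRequestHeaders.Add("Accept", "application/json;odata=verbose");
    // 1. Get a form digest from /_api/contextinfo (required for any POST).
    string json = client.PostAsync("http://server/sites/site/_api/contextinfo", null).Result.Content.ReadAsStringAsync().Result;
    int start = json.IndexOf("FormDigestValue\":\"") + "FormDigestValue\":\"".Length;
    string digest = json.Substring(start, json.IndexOf('"', start) - start);
    client.DefaultRequestHeaders.Add("X-RequestDigest", digest);
    // 2. Stream the raw bytes to the Files/add endpoint of the target folder.
    using (var fs = File.OpenRead(@"D:\OneGB.rar"))
    {
        string url = "http://server/sites/site/_api/web/GetFolderByServerRelativeUrl('/Shared%20Documents')/Files/add(url='OneGB.rar',overwrite=true)";
        var result = client.PostAsync(url, new StreamContent(fs)).Result;
        Console.WriteLine(result.StatusCode);
    }
}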
Also see:
https://gist.github.com/vgrem/10713514
File Upload to SharePoint 2013 using REST API
Ref: https://sharepoint.stackexchange.com/posts/149105/edit (see the 2nd answer).

C#: should I close the streams when I'm using "using"?

I have a service running on a server that zips files, and I've noticed that the memory it consumes increases each day: when I deployed it on the server it was consuming 3.6 MB; today, 3 months later, it is consuming 180 MB.
This is part of the code that I'm using:
for (i = 0; i < files.Count; i++)
{
    try
    {
        if (File.Exists(dir + zipToUpdate) && new FileInfo(dir + zipToUpdate).Length < 104857600)
        {
            using (FileStream zipToOpen = new FileStream(dir + zipToUpdate, FileMode.Open))
            {
                using (ZipArchive archive = new ZipArchive(zipToOpen, ZipArchiveMode.Update, false))
                {
                    if (File.GetCreationTime(dir + files.ElementAt(i)).AddHours(FileAge) < DateTime.Now)
                    {
                        ZipArchiveEntry fileEntry = archive.CreateEntry(files.ElementAt(i));
                        using (BinaryWriter writer = new BinaryWriter(fileEntry.Open()))
                        {
                            using (FileStream sr = new FileStream(dir + files.ElementAt(i), FileMode.Open, FileAccess.Read))
                            {
                                byte[] block = new byte[32768];
                                int bytesRead = 0;
                                while ((bytesRead = sr.Read(block, 0, block.Length)) > 0)
                                {
                                    writer.Write(block, 0, bytesRead);
                                    block = new byte[32768];
                                }
                            }
                        }
                        File.Delete(dir + files.ElementAt(i));
                    }
                }
            }
        }
        else
        {
            createZip(files.GetRange(i, files.Count - i), dir + "\\", getZipName(dir, zipToUpdate));
            return;
        }
    }
    catch (Exception ex)
    {
        rootlog.Error(string.Format("Erro Run - updateZip: {0}", ex.Message));
    }
}
The creation of the zip and the update are similar, so there is no point in pasting both. I call this recursively for the folders inside, and the service runs once an hour.
So my question is whether all these streams are what is making my memory usage increase month after month, or whether it could be something else.
The using statement takes care of closing and disposing the IDisposable object it wraps, even if an exception is thrown. These streams are not the source of the memory growth you're observing.
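For clarity, using is just syntactic sugar for a try/finally that calls Dispose, and for a FileStream, Dispose also closes the handle. A minimal sketch of the equivalence:
// These two forms are equivalent.
using (FileStream fs = new FileStream("file.bin", FileMode.Open))
{
    // ... work with fs ...
}
// is compiled to roughly:
FileStream fs2 = new FileStream("file.bin", FileMode.Open);
try
{
    // ... work with fs2 ...
}
finally
{
    if (fs2 != null)
        fs2.Dispose(); // closes the underlying file handle
}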

Creating zip file of multiple byte[] arrays / files

I am trying to create a zip file that contains zip files inside it. I am using ICSharpCode.SharpZipLib (I must use this due to project restrictions). This works fine if I have only one byte[] array, but it is not working for a list of byte[] arrays.
foreach (byte[] internalZipFile in zipFiles)
{
    // Source: internal zip file
    MemoryStream inputMemoryStream = new MemoryStream(internalZipFile);
    ZipEntry newZipEntry = new ZipEntry("AdManifest-" + i.ToString() + ".zip");
    newZipEntry.DateTime = DateTime.Now;
    zipStream.PutNextEntry(newZipEntry);
    StreamUtils.Copy(inputMemoryStream, zipStream, new byte[1024]);
    zipStream.CloseEntry();
    zipStream.IsStreamOwner = false; // stop Close from also closing the underlying stream
    zipStream.Close();
    outputMemoryStream.Position = 0;
    zipByteArray = outputMemoryStream.ToArray();
    i++;
}
using (FileStream fileStream = new FileStream(@"c:\manifest.zip", FileMode.Create))
{
    fileStream.Write(zipByteArray, 0, zipByteArray.Length);
}
Can someone please assist? What am I missing?
I figured this out. Here is the version that works for me:
byte[] zipByteArray = null;
int i = 0;
if (zipFiles != null && zipFiles.Count > 0)
{
    MemoryStream outputMemoryStream = new MemoryStream();
    ZipOutputStream zipStream = new ZipOutputStream(outputMemoryStream);
    zipStream.SetLevel(3);
    foreach (byte[] internalZipFile in zipFiles)
    {
        MemoryStream inputMemoryStream = new MemoryStream(internalZipFile);
        ZipEntry newZipEntry = new ZipEntry("AdManifest-" + i.ToString() + ".zip");
        newZipEntry.DateTime = DateTime.Now;
        newZipEntry.Size = internalZipFile.Length;
        zipStream.PutNextEntry(newZipEntry);
        StreamUtils.Copy(inputMemoryStream, zipStream, new byte[1024]);
        zipStream.CloseEntry();
        i++;
    }
    zipStream.IsStreamOwner = false; // stop Close from also closing the underlying stream
    zipStream.Close();
    outputMemoryStream.Position = 0;
    zipByteArray = outputMemoryStream.ToArray();
    using (FileStream fileStream = new FileStream(@"c:\manifest.zip", FileMode.Create))
    {
        fileStream.Write(zipByteArray, 0, zipByteArray.Length);
    }
}
I can't try it, but I think you need less code in the iteration body: close each entry inside the loop, and close the stream itself only once, after the loop.
foreach (byte[] internalZipFile in zipFiles)
{
    // Source: internal zip file
    MemoryStream inputMemoryStream = new MemoryStream(internalZipFile);
    ZipEntry newZipEntry = new ZipEntry("AdManifest-" + i.ToString() + ".zip");
    newZipEntry.DateTime = DateTime.Now;
    zipStream.PutNextEntry(newZipEntry);
    StreamUtils.Copy(inputMemoryStream, zipStream, new byte[1024]);
    zipStream.CloseEntry();
    i++;
}
zipStream.IsStreamOwner = false; // stop Close from also closing the underlying stream
zipStream.Close();
outputMemoryStream.Position = 0;
zipByteArray = outputMemoryStream.ToArray();
using (FileStream fileStream = new FileStream(@"c:\manifest.zip", FileMode.Create))
{
    fileStream.Write(zipByteArray, 0, zipByteArray.Length);
}
(Note: ZipOutputStream itself has no ToArray and isn't seekable, so the bytes still have to be read from the underlying outputMemoryStream.)

Silverlight becomes unresponsive when downloading a file

I'm trying to use the following snippet in order to download files via the SaveFileDialog in Silverlight:
public void SaveMediaLocal(string fileName)
{
    FileInfo fInfo = new FileInfo(fileName);
    if (fInfo.Exists)
    {
        if (fInfo.Length > 0)
        {
            string extension = fInfo.Extension;
            SaveFileDialog dialog = new SaveFileDialog()
            {
                DefaultExt = extension,
                Filter = String.Format("{1} files (*.{0})|*.{0}|All files (*.*)|*.*", extension, fInfo.Extension),
                FilterIndex = 1,
                DefaultFileName = fInfo.Name
            };
            if (dialog.ShowDialog() == true)
            {
                try
                {
                    bool cancelFlag = false;
                    byte[] buffer = new byte[1024 * 1024]; // 1 MB buffer
                    using (FileStream dest = (FileStream)dialog.OpenFile())
                    {
                        using (FileStream source = new FileStream(fInfo.FullName, FileMode.Open, FileAccess.Read))
                        {
                            long fileLength = source.Length;
                            long totalBytes = 0;
                            int currentBlockSize = 0;
                            while ((currentBlockSize = source.Read(buffer, 0, buffer.Length)) > 0)
                            {
                                totalBytes += currentBlockSize;
                                double percentage = (double)totalBytes * 100.0 / fileLength;
                                dest.Write(buffer, 0, currentBlockSize);
                            }
                        }
                    }
                }
                catch
                {
                }
            }
        }
        else
        {
            //no results
        }
    }
}
When I use this snippet, Silverlight freezes until the download completes.
When I use the snippet below instead, the UI stays responsive, but it doesn't work for bigger files:
using (Stream stream = dialog.OpenFile())
{
    Byte[] bytes = File.ReadAllBytes(fileName);
    stream.Write(bytes, 0, bytes.Length);
}
Is there something that I'm missing here?
Don't do the operation on the GUI thread; that is why it becomes unresponsive. Either create a new thread or an async operation and do the copy in the background.
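A minimal sketch of moving the copy loop onto a worker thread. It assumes the dialog stream was already obtained on the UI thread (Silverlight requires OpenFile to be called from a user-initiated event) and that UpdateProgressBar is a hypothetical UI callback of your own:
Stream dest = dialog.OpenFile(); // must be called on the UI thread
new Thread(() =>
{
    using (dest)
    using (FileStream source = new FileStream(fInfo.FullName, FileMode.Open, FileAccess.Read))
    {
        byte[] buffer = new byte[64 * 1024];
        long fileLength = source.Length, totalBytes = 0;
        int read;
        while ((read = source.Read(buffer, 0, buffer.Length)) > 0)
        {
            dest.Write(buffer, 0, read);
            totalBytes += read;
            double percentage = totalBytes * 100.0 / fileLength;
            // Marshal the progress update back to the GUI thread.
            Deployment.Current.Dispatcher.BeginInvoke(() => UpdateProgressBar(percentage));
        }
    }
}).Start();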
