Read large files (2 GB+) for Google Drive API upload - C#

I'm currently working on a small backup tool written in C# that is supposed to upload the files contained in a specified folder to Google Drive via its API. The program largely functions as it's supposed to; the only problem is that it cannot handle files larger than 2 GB.
The problem is caused by the upload function itself, which is attached below: it reads the file into a byte array and then creates a MemoryStream from it. As far as I'm aware (I'm still a beginner with C#), a byte array can only hold 2 GB of data before causing an overflow exception. To work around this I tried using FileStream.Read (second piece of code below) instead of System.IO.File.ReadAllBytes, but this again led to an overflow exception from the byte array. I know that at this point I'd have to split the file up, but due to the rather limited documentation of the Google Drive API for C# - at least from what I've seen - and my limited knowledge of C#, I have little to no clue how to tackle this problem.
I'm sorry for the long read; any help on this matter is highly appreciated.
Upload Function V1 (System.IO.File.ReadAllBytes):
private static Google.Apis.Drive.v3.Data.File UploadFile(Boolean useFolder, String mime, DriveService _service, string _uploadFile, string _parent, string _descrp = "")
{
    if (System.IO.File.Exists(_uploadFile))
    {
        Google.Apis.Drive.v3.Data.File body = new Google.Apis.Drive.v3.Data.File
        {
            Name = System.IO.Path.GetFileName(_uploadFile),
            Description = _descrp,
            MimeType = mime
        };
        if (useFolder)
        {
            body.Parents = new List<string> { _parent };
        }
        byte[] byteArray = System.IO.File.ReadAllBytes(_uploadFile);
        MemoryStream stream = new System.IO.MemoryStream(byteArray);
        try
        {
            FilesResource.CreateMediaUpload request = _service.Files.Create(body, stream, mime);
            request.SupportsTeamDrives = true;
            request.Upload();
            return request.ResponseBody;
        }
        catch (Exception e)
        {
            Console.WriteLine("Error Occurred: " + e);
            return null;
        }
    }
    else
    {
        Console.WriteLine("The file does not exist. 404");
        return null;
    }
}
Upload Function V2 (FileStream):
private static Google.Apis.Drive.v3.Data.File UploadFile(Boolean useFolder, String mime, DriveService _service, string _uploadFile, string _parent, string _descrp = "")
{
    if (System.IO.File.Exists(_uploadFile))
    {
        Google.Apis.Drive.v3.Data.File body = new Google.Apis.Drive.v3.Data.File
        {
            Name = System.IO.Path.GetFileName(_uploadFile),
            Description = _descrp,
            MimeType = mime
        };
        if (useFolder)
        {
            body.Parents = new List<string> { _parent };
        }
        //byte[] byteArray = System.IO.File.ReadAllBytes(_uploadFile);
        using (FileStream fileStream = new FileStream(_uploadFile, FileMode.Open, FileAccess.Read))
        {
            Console.WriteLine("ByteArrayStart");
            // Note: this allocation is where the overflow occurs for files larger than 2 GB.
            byte[] byteArray = new byte[fileStream.Length];
            int bytesToRead = (int)fileStream.Length;
            int bytesRead = 0;
            while (bytesToRead > 0)
            {
                int n = fileStream.Read(byteArray, bytesRead, bytesToRead);
                if (n == 0)
                {
                    break;
                }
                bytesRead += n;
                Console.WriteLine("Bytes Read: " + bytesRead);
                bytesToRead -= n;
                Console.WriteLine("Bytes to Read: " + bytesToRead);
            }
            MemoryStream stream = new System.IO.MemoryStream(byteArray);
            try
            {
                FilesResource.CreateMediaUpload request = _service.Files.Create(body, stream, mime);
                request.SupportsTeamDrives = true;
                request.Upload();
                return request.ResponseBody;
            }
            catch (Exception e)
            {
                Console.WriteLine("Error Occurred: " + e);
                return null;
            }
        }
    }
    else
    {
        Console.WriteLine("The file does not exist. 404");
        return null;
    }
}

MemoryStream's constructors only work with byte arrays that are limited to Int32.MaxValue bytes. Why not just use your FileStream object directly?
var fileMetadata = new Google.Apis.Drive.v3.Data.File()
{
    Name = "flag.jpg"
};
FilesResource.CreateMediaUpload request;
using (var stream = new System.IO.FileStream(@"C:\temp\flag.jpg", System.IO.FileMode.Open))
{
    request = service.Files.Create(fileMetadata, stream, "image/jpeg");
    request.Fields = "id";
    request.Upload();
}
var file = request.ResponseBody;
Really, for a file that big you should be using a resumable upload, but I'm going to have to dig around for some sample code for that.
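For what it's worth, in the v3 .NET client CreateMediaUpload is already built on the library's resumable-upload support, so passing the FileStream straight in (as above) avoids buffering the file in memory at all. Here is a minimal sketch with an explicit chunk size and progress reporting, reusing body, mime, _service and _uploadFile from the question (ResumableUpload and IUploadProgress come from Google.Apis.Upload):
using (var stream = new FileStream(_uploadFile, FileMode.Open, FileAccess.Read))
{
    FilesResource.CreateMediaUpload request = _service.Files.Create(body, stream, mime);
    request.SupportsTeamDrives = true;
    // Stream the file in 1 MB chunks instead of loading it all at once.
    request.ChunkSize = Google.Apis.Upload.ResumableUpload.MinimumChunkSize * 4;
    request.ProgressChanged += progress =>
        Console.WriteLine(progress.Status + ": " + progress.BytesSent + " bytes sent");
    Google.Apis.Upload.IUploadProgress result = request.Upload();
    if (result.Status != Google.Apis.Upload.UploadStatus.Completed)
    {
        Console.WriteLine("Upload failed: " + result.Exception);
    }
    return request.ResponseBody;
}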

Related

Asynchronous Chunked File Upload in C#

I'm trying to asynchronously upload file chunks from a .NET client to a PHP web server using HttpClient in C#.
I can chunk the file just fine and upload those chunks to the remote server, but I'm not sure if this is really running asynchronously. Ideally, I would upload the chunks in parallel to maximize upload speed. My code is as follows:
//Call to chunk and upload file in another async method. I'm only showing the call here:
FileStream fileStream = new FileStream(fileNameIn, FileMode.Open, FileAccess.Read);
await ChunkFileAsync(fileStream, uploadFile.Name, url);

// To chunk the file
public static async Task<bool> ChunkFileAsync(FileStream fileStream, string fileName, string url)
{
    int chunkSize = 102400; // Upload 100 KB at a time
    int totalChunks = (int)Math.Ceiling((double)fileStream.Length / chunkSize);
    // Loop through the whole stream and send it chunk by chunk asynchronously;
    bool retVal = true;
    List<Task> tasks = new List<Task>();
    try
    {
        for (int i = 0; i < totalChunks; i++)
        {
            int startIndex = i * chunkSize;
            int endIndex = (int)(startIndex + chunkSize > fileStream.Length ? fileStream.Length : startIndex + chunkSize);
            int length = endIndex - startIndex;
            byte[] bytes = new byte[length];
            // Note: Read can return fewer bytes than requested; a robust version would loop here.
            fileStream.Read(bytes, 0, bytes.Length);
            Task task = SendChunkRequest(fileName, url, i, bytes);
            tasks.Add(task);
            retVal = true;
        }
        await Task.WhenAll(tasks);
    }
    catch (WebException e)
    {
        Console.WriteLine("ERROR - There was an error chunking the file before sending the request " + e.Message);
        retVal = false;
    }
    return retVal;
}
//To upload chunks to remote server
public static async Task<bool> SendChunkRequest(string fileName, string url, int loopCounter, Byte[] bytes)
{
    bool response = false;
    try
    {
        ByteArrayContent data = new ByteArrayContent(bytes);
        data.Headers.ContentType = System.Net.Http.Headers.MediaTypeHeaderValue.Parse("multipart/form-data");
        HttpClient requestToServer = new HttpClient();
        MultipartFormDataContent form = new MultipartFormDataContent();
        form.Add(data, "file", fileName + loopCounter);
        await requestToServer.PostAsync(url, form);
        requestToServer.Dispose();
        response = true;
    }
    catch (Exception e)
    {
        Console.WriteLine("There was an exception: " + e);
    }
    return response;
}
If I upload a 100 MB file, I see all ten chunks uploaded to the server one at a time. Can I make this code more efficient? Any help is greatly appreciated.
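One way this is commonly restructured (a sketch, not code from the thread): share a single HttpClient rather than creating and disposing one per chunk, and cap the number of in-flight requests with a SemaphoreSlim so the sends genuinely overlap. The url and form-field names follow the question's code:
private static readonly HttpClient client = new HttpClient();
private static readonly SemaphoreSlim gate = new SemaphoreSlim(4); // at most 4 uploads in flight

public static async Task SendChunkRequestAsync(string fileName, string url, int loopCounter, byte[] bytes)
{
    await gate.WaitAsync();
    try
    {
        var form = new MultipartFormDataContent();
        form.Add(new ByteArrayContent(bytes), "file", fileName + loopCounter);
        HttpResponseMessage result = await client.PostAsync(url, form);
        result.EnsureSuccessStatusCode(); // surface failed chunks instead of ignoring them
    }
    finally
    {
        gate.Release();
    }
}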

Upload file chunks to SPS 2013 - Method "StartUpload" does not exist at line

I am trying to upload a large file (1 GB) from code to SharePoint 2013 on-premises. I followed this tutorial, downloaded the "Microsoft.SharePointOnline.CSOM" package from NuGet, and tried this piece of code:
public Microsoft.SharePoint.Client.File UploadFileSlicePerSlice(ClientContext ctx, string libraryName, string fileName, int fileChunkSizeInMB = 3)
{
    // Each sliced upload requires a unique ID.
    Guid uploadId = Guid.NewGuid();
    // Get the name of the file.
    string uniqueFileName = Path.GetFileName(fileName);
    // Ensure that the target library exists, and create it if it is missing.
    if (!LibraryExists(ctx, ctx.Web, libraryName))
    {
        CreateLibrary(ctx, ctx.Web, libraryName);
    }
    // Get the folder to upload into.
    List docs = ctx.Web.Lists.GetByTitle(libraryName);
    ctx.Load(docs, l => l.RootFolder);
    // Get the information about the folder that will hold the file.
    ctx.Load(docs.RootFolder, f => f.ServerRelativeUrl);
    ctx.ExecuteQuery();
    // File object.
    Microsoft.SharePoint.Client.File uploadFile;
    // Calculate block size in bytes.
    int blockSize = fileChunkSizeInMB * 1024 * 1024;
    // Get the size of the file.
    long fileSize = new FileInfo(fileName).Length;
    if (fileSize <= blockSize)
    {
        // Use the regular approach.
        using (FileStream fs = new FileStream(fileName, FileMode.Open))
        {
            FileCreationInformation fileInfo = new FileCreationInformation();
            fileInfo.ContentStream = fs;
            fileInfo.Url = uniqueFileName;
            fileInfo.Overwrite = true;
            uploadFile = docs.RootFolder.Files.Add(fileInfo);
            ctx.Load(uploadFile);
            ctx.ExecuteQuery();
            // Return the file object for the uploaded file.
            return uploadFile;
        }
    }
    else
    {
        // Use the large-file upload approach.
        ClientResult<long> bytesUploaded = null;
        FileStream fs = null;
        try
        {
            fs = System.IO.File.Open(fileName, FileMode.Open, FileAccess.Read, FileShare.ReadWrite);
            using (BinaryReader br = new BinaryReader(fs))
            {
                byte[] buffer = new byte[blockSize];
                Byte[] lastBuffer = null;
                long fileoffset = 0;
                long totalBytesRead = 0;
                int bytesRead;
                bool first = true;
                bool last = false;
                // Read data from the file system in blocks.
                while ((bytesRead = br.Read(buffer, 0, buffer.Length)) > 0)
                {
                    totalBytesRead = totalBytesRead + bytesRead;
                    // You've reached the end of the file.
                    if (totalBytesRead == fileSize)
                    {
                        last = true;
                        // Copy to a new buffer that has the correct size.
                        lastBuffer = new byte[bytesRead];
                        Array.Copy(buffer, 0, lastBuffer, 0, bytesRead);
                    }
                    if (first)
                    {
                        using (MemoryStream contentStream = new MemoryStream())
                        {
                            // Add an empty file.
                            FileCreationInformation fileInfo = new FileCreationInformation();
                            fileInfo.ContentStream = contentStream;
                            fileInfo.Url = uniqueFileName;
                            fileInfo.Overwrite = true;
                            uploadFile = docs.RootFolder.Files.Add(fileInfo);
                            // Start the upload by uploading the first slice.
                            using (MemoryStream s = new MemoryStream(buffer))
                            {
                                // Call the start upload method on the first slice.
                                bytesUploaded = uploadFile.StartUpload(uploadId, s);
                                ctx.ExecuteQuery(); //<------here exception
                                // fileoffset is the pointer where the next slice will be added.
                                fileoffset = bytesUploaded.Value;
                            }
                            // You can only start the upload once.
                            first = false;
                        }
                    }
                    else
                    {
                        // Get a reference to your file.
                        uploadFile = ctx.Web.GetFileByServerRelativeUrl(docs.RootFolder.ServerRelativeUrl + System.IO.Path.AltDirectorySeparatorChar + uniqueFileName);
                        if (last)
                        {
                            // This is the last slice of data.
                            using (MemoryStream s = new MemoryStream(lastBuffer))
                            {
                                // End the sliced upload by calling FinishUpload.
                                uploadFile = uploadFile.FinishUpload(uploadId, fileoffset, s);
                                ctx.ExecuteQuery();
                                // Return the file object for the uploaded file.
                                return uploadFile;
                            }
                        }
                        else
                        {
                            using (MemoryStream s = new MemoryStream(buffer))
                            {
                                // Continue the sliced upload.
                                bytesUploaded = uploadFile.ContinueUpload(uploadId, fileoffset, s);
                                ctx.ExecuteQuery();
                                // Update fileoffset for the next slice.
                                fileoffset = bytesUploaded.Value;
                            }
                        }
                    }
                } // while ((bytesRead = br.Read(buffer, 0, buffer.Length)) > 0)
            }
        }
        finally
        {
            if (fs != null)
            {
                fs.Dispose();
            }
        }
    }
    return null;
}
But I'm getting a runtime ServerException with the message: Method "StartUpload" does not exist, at the line "ctx.ExecuteQuery();" (<-- I marked this line in the code).
I also tried with the SharePoint 2013 package, and the "StartUpload" method isn't supported in that package.
UPDATE:
Adam's code worked for ~1 GB files. It turns out that in web.config, at the path C:\inetpub\wwwroot\wss\VirtualDirectories\{myport}\web.config, the value in <requestLimits maxAllowedContentLength="2000000000"/> is in bytes, not kilobytes as I thought at the beginning, so I changed it to 2000000000 and it worked.
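For reference, this setting lives under IIS request filtering; a sketch of the relevant web.config fragment:
<system.webServer>
  <security>
    <requestFiltering>
      <!-- maxAllowedContentLength is in bytes (~2 GB here), not kilobytes -->
      <requestLimits maxAllowedContentLength="2000000000" />
    </requestFiltering>
  </security>
</system.webServer>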
Here is a method to upload a 1 GB file to SP 2013 using CSOM that works (tested and developed over a couple of days of trying different approaches :) )
try
{
    Console.WriteLine("start " + DateTime.Now.ToLongDateString() + " " + DateTime.Now.ToLongTimeString());
    using (ClientContext context = new ClientContext("[URL]"))
    {
        context.Credentials = new NetworkCredential("[LOGIN]", "[PASSWORD]", "[DOMAIN]");
        context.RequestTimeout = -1;
        Web web = context.Web;
        if (context.HasPendingRequest)
            context.ExecuteQuery();
        byte[] fileBytes;
        using (var fs = new FileStream(@"D:\OneGB.rar", FileMode.Open, FileAccess.Read))
        {
            fileBytes = new byte[fs.Length];
            int bytesRead = fs.Read(fileBytes, 0, fileBytes.Length);
        }
        using (var fileStream = new System.IO.MemoryStream(fileBytes))
        {
            Microsoft.SharePoint.Client.File.SaveBinaryDirect(context, "/Shared Documents/" + "OneGB.rar", fileStream, true);
        }
    }
    Console.WriteLine("end " + DateTime.Now.ToLongDateString() + " " + DateTime.Now.ToLongTimeString());
}
catch (Exception ex)
{
    Console.WriteLine("error -> " + ex.Message);
}
finally
{
    Console.ReadLine();
}
Besides this I had to:
- extend the max file upload size in Central Administration for this web application,
- set 'Web Page Security Validation' for this web application to Never in Central Administration (in this link there is a screenshot of how to set it),
- extend the timeout in IIS.
And the final result is: (sorry for the language, but I usually work in PL)
The whole history is described in this post.
Install the SharePoint Online CSOM library using the command below.
Install-Package Microsoft.SharePointOnline.CSOM -Version 16.1.8924.1200
Then use the code below to upload the large file.
int blockSize = 8000000; // 8 MB
string fileName = "C:\\temp\\6GBTest.odt", uniqueFileName = String.Empty;
long fileSize;
Microsoft.SharePoint.Client.File uploadFile = null;
Guid uploadId = Guid.NewGuid();
using (ClientContext ctx = new ClientContext("siteUrl"))
{
    ctx.Credentials = new SharePointOnlineCredentials("user@tenant.onmicrosoft.com", GetSecurePassword());
    List docs = ctx.Web.Lists.GetByTitle("Documents");
    ctx.Load(docs.RootFolder, p => p.ServerRelativeUrl);
    // Use the large-file upload approach.
    ClientResult<long> bytesUploaded = null;
    FileStream fs = null;
    try
    {
        fs = System.IO.File.Open(fileName, FileMode.Open, FileAccess.Read, FileShare.ReadWrite);
        fileSize = fs.Length;
        uniqueFileName = System.IO.Path.GetFileName(fs.Name);
        using (BinaryReader br = new BinaryReader(fs))
        {
            byte[] buffer = new byte[blockSize];
            byte[] lastBuffer = null;
            long fileoffset = 0;
            long totalBytesRead = 0;
            int bytesRead;
            bool first = true;
            bool last = false;
            // Read data from the filesystem in blocks.
            while ((bytesRead = br.Read(buffer, 0, buffer.Length)) > 0)
            {
                totalBytesRead = totalBytesRead + bytesRead;
                // We've reached the end of the file.
                if (totalBytesRead == fileSize)
                {
                    last = true;
                    // Copy to a new buffer that has the correct size.
                    lastBuffer = new byte[bytesRead];
                    Array.Copy(buffer, 0, lastBuffer, 0, bytesRead);
                }
                if (first)
                {
                    using (MemoryStream contentStream = new MemoryStream())
                    {
                        // Add an empty file.
                        FileCreationInformation fileInfo = new FileCreationInformation();
                        fileInfo.ContentStream = contentStream;
                        fileInfo.Url = uniqueFileName;
                        fileInfo.Overwrite = true;
                        uploadFile = docs.RootFolder.Files.Add(fileInfo);
                        // Start the upload by uploading the first slice.
                        using (MemoryStream s = new MemoryStream(buffer))
                        {
                            // Call the start upload method on the first slice.
                            bytesUploaded = uploadFile.StartUpload(uploadId, s);
                            ctx.ExecuteQuery();
                            // fileoffset is the pointer where the next slice will be added.
                            fileoffset = bytesUploaded.Value;
                        }
                        // We can only start the upload once.
                        first = false;
                    }
                }
                else
                {
                    // Get a reference to our file.
                    uploadFile = ctx.Web.GetFileByServerRelativeUrl(docs.RootFolder.ServerRelativeUrl + System.IO.Path.AltDirectorySeparatorChar + uniqueFileName);
                    if (last)
                    {
                        // This is the last slice of data.
                        using (MemoryStream s = new MemoryStream(lastBuffer))
                        {
                            // End the sliced upload by calling FinishUpload.
                            uploadFile = uploadFile.FinishUpload(uploadId, fileoffset, s);
                            ctx.ExecuteQuery();
                            // Return the file object for the uploaded file.
                            return uploadFile;
                        }
                    }
                    else
                    {
                        using (MemoryStream s = new MemoryStream(buffer))
                        {
                            // Continue the sliced upload.
                            bytesUploaded = uploadFile.ContinueUpload(uploadId, fileoffset, s);
                            ctx.ExecuteQuery();
                            // Update fileoffset for the next slice.
                            fileoffset = bytesUploaded.Value;
                        }
                    }
                }
            }
        }
    }
    finally
    {
        if (fs != null)
        {
            fs.Dispose();
        }
    }
}
Or download the example code from GitHub.
Large file upload with CSOM
I'm looking for a way to upload a 1 GB file to SharePoint 2013.
You can change the upload limit with the PowerShell below:
$a = [Microsoft.SharePoint.Administration.SPWebService]::ContentService
$a.ClientRequestServiceSettings.MaxReceivedMessageSize = 209715200
$a.Update()
References:
https://thuansoldier.net/4328/
https://blogs.msdn.microsoft.com/sridhara/2010/03/12/uploading-files-using-client-object-model-in-sharepoint-2010/
https://social.msdn.microsoft.com/Forums/en-US/09a41ba4-feda-4cf3-aa29-704cd92b9320/csom-microsoftsharepointclientserverexception-method-8220startupload8221-does-not-exist?forum=sharepointdevelopment
Update:
SharePoint CSOM request size is very limited; it cannot exceed a 2 MB limit, and you cannot change this setting in an Office 365 environment. If you have to upload bigger files you have to use the REST API. Here is the MSDN reference: https://msdn.microsoft.com/en-us/library/office/dn292553.aspx
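A rough sketch of what that could look like from C# with HttpClient, assuming on-premises NTLM credentials and the /Files/add endpoint; a real POST also needs an X-RequestDigest header obtained from /_api/contextinfo, which is only noted here:
// Inside an async method. The /Files/add endpoint streams the request body,
// so the 2 MB CSOM message limit does not apply.
var handler = new HttpClientHandler
{
    Credentials = new NetworkCredential("LOGIN", "PASSWORD", "DOMAIN")
};
using (var client = new HttpClient(handler))
using (var fs = System.IO.File.OpenRead(@"D:\OneGB.rar"))
{
    string url = "https://server/sites/site/_api/web/" +
                 "GetFolderByServerRelativeUrl('/Shared Documents')" +
                 "/Files/add(url='OneGB.rar',overwrite=true)";
    client.DefaultRequestHeaders.Add("Accept", "application/json;odata=verbose");
    // TODO: add the X-RequestDigest header fetched from /_api/contextinfo.
    HttpResponseMessage response = await client.PostAsync(url, new StreamContent(fs));
    response.EnsureSuccessStatusCode();
}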
Also see:
https://gist.github.com/vgrem/10713514
File Upload to SharePoint 2013 using REST API
Ref: https://sharepoint.stackexchange.com/posts/149105/edit (see the 2nd answer).

YouTube Direct Upload - OutOfMemory Exception

Whenever I try to upload a large video via Direct Upload using the YouTube API, I get an OutOfMemory exception. Is there anything I can do to get rid of this? The YouTube API does not say anything about a video size limit for direct upload.
I gave up on Direct Upload. Now I'm trying the resumable upload approach. My code is below.
YouTubeRequest request;
YouTubeRequestSettings settings = new YouTubeRequestSettings("YouTube Upload", developerKey, "Username", "Password");
request = new YouTubeRequest(settings);
Video newVideo = new Video();
ResumableUploader m_ResumableUploader = null;
Authenticator YouTubeAuthenticator;
m_ResumableUploader = new ResumableUploader(256); // chunk size: 256 KB
m_ResumableUploader.AsyncOperationCompleted += new AsyncOperationCompletedEventHandler(m_ResumableUploader_AsyncOperationCompleted);
m_ResumableUploader.AsyncOperationProgress += new AsyncOperationProgressEventHandler(m_ResumableUploader_AsyncOperationProgress);
YouTubeAuthenticator = new ClientLoginAuthenticator("YouTubeUploader", ServiceNames.YouTube, "kjohnson@resoluteinnovations.com", "password");
//AtomLink link = new AtomLink("http://uploads.gdata.youtube.com/resumable/feeds/api/users/uploads");
//link.Rel = ResumableUploader.CreateMediaRelation;
//newVideo.YouTubeEntry.Links.Add(link);
System.IO.FileStream stream = new System.IO.FileStream(filePath, System.IO.FileMode.Open, System.IO.FileAccess.Read);
byte[] chunk = new byte[256000];
int count = 1;
while (true)
{
    int index = 0;
    while (index < chunk.Length)
    {
        int bytesRead = stream.Read(chunk, index, chunk.Length - index);
        if (bytesRead == 0)
        {
            break;
        }
        index += bytesRead;
    }
    if (index != 0) // Our previous chunk may have been the last one
    {
        // Wrap only the bytes actually read, so a short final chunk is not padded with garbage.
        newVideo.MediaSource = new MediaFileSource(new MemoryStream(chunk, 0, index), filePath, "video/quicktime");
        if (count == 1)
        {
            m_ResumableUploader.InsertAsync(YouTubeAuthenticator, newVideo.YouTubeEntry, new MemoryStream(chunk, 0, index));
            count++;
        }
        else
        {
            m_ResumableUploader.ResumeAsync(YouTubeAuthenticator, new Uri("http://uploads.gdata.youtube.com/resumable/feeds/api/users/uploads"), "POST", new MemoryStream(chunk, 0, index), "video/quicktime", new object());
        }
    }
    if (index != chunk.Length) // We didn't read a full chunk: we're done
    {
        break;
    }
}
Can anyone tell me what is wrong? My 2 GB video is not uploading.
The reason I was getting a 403 Forbidden error was that I was not passing in:
- a username and password
- a developer key
The request variable in the code above is not being used/sent in the upload, so I was doing an unauthorized upload.
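Illustrated (a hedged sketch; the key value is a placeholder): the developer key has to reach the object that actually performs the upload, not just the YouTubeRequest:
string developerKey = "YOUR_DEVELOPER_KEY"; // placeholder
YouTubeRequestSettings settings = new YouTubeRequestSettings("YouTube Upload", developerKey, "Username", "Password");
YouTubeRequest request = new YouTubeRequest(settings);
// The ResumableUploader authenticates through the Authenticator, so set the key there too.
ClientLoginAuthenticator authenticator = new ClientLoginAuthenticator("YouTube Upload", ServiceNames.YouTube, "Username", "Password");
authenticator.DeveloperKey = developerKey;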
Chances are that you are not disposing your objects. Ensure all disposable objects are wrapped in a using statement.
For example, this code will upload a large zip file to a server:
try
{
    using (Stream ftpStream = FTPRequest.GetRequestStream())
    {
        using (FileStream file = File.OpenRead(ImagesZipFile))
        {
            // set up variables we'll use to read the file
            int length = 1024;
            byte[] buffer = new byte[length];
            int bytesRead = 0;
            // write the file to the request stream
            do
            {
                bytesRead = file.Read(buffer, 0, length);
                ftpStream.Write(buffer, 0, bytesRead);
            }
            while (bytesRead != 0);
        }
    }
}
catch (Exception)
{
    // rethrow the exception ("throw;" preserves the stack trace)
    throw;
}

Can't download complete image file from SkyDrive using REST API

I'm working on a quick wrapper for the SkyDrive API in C#, but I'm running into issues with downloading a file. The first part of the file comes through fine, but then differences start to appear and shortly thereafter everything becomes null. I'm fairly sure it's just me not reading the stream correctly.
This is the code I'm using to download the file:
public const string ApiVersion = "v5.0";
public const string BaseUrl = "https://apis.live.net/" + ApiVersion + "/";

public SkyDriveFile DownloadFile(SkyDriveFile file)
{
    string uri = BaseUrl + file.ID + "/content";
    byte[] contents = GetResponse(uri);
    file.Contents = contents;
    return file;
}

public byte[] GetResponse(string url)
{
    checkToken();
    Uri requestUri = new Uri(url + "?access_token=" + HttpUtility.UrlEncode(token.AccessToken));
    HttpWebRequest request = (HttpWebRequest)WebRequest.Create(requestUri);
    request.Method = WebRequestMethods.Http.Get;
    WebResponse response = request.GetResponse();
    Stream responseStream = response.GetResponseStream();
    byte[] contents = new byte[response.ContentLength];
    responseStream.Read(contents, 0, (int)response.ContentLength);
    return contents;
}
This is the image file I'm trying to download
And this is the image I am getting
These two images lead me to believe that I'm not waiting for the response to finish coming through, because the content-length is the same as the size of the image I'm expecting, but I'm not sure how to make my code wait for the entire response to come through or even really if that's the approach I need to take.
Here's my test code in case it's helpful
[TestMethod]
public void CanUploadAndDownloadFile()
{
    var api = GetApi();
    SkyDriveFolder folder = api.CreateFolder(null, "TestFolder", "Test Folder");
    SkyDriveFile file = api.UploadFile(folder, TestImageFile, "TestImage.png");
    file = api.DownloadFile(file);
    api.DeleteFolder(folder);
    byte[] contents = new byte[new FileInfo(TestImageFile).Length];
    using (FileStream fstream = new FileStream(TestImageFile, FileMode.Open))
    {
        fstream.Read(contents, 0, contents.Length);
    }
    using (FileStream fstream = new FileStream(TestImageFile + "2", FileMode.CreateNew))
    {
        fstream.Write(file.Contents, 0, file.Contents.Length);
    }
    Assert.AreEqual(contents.Length, file.Contents.Length);
    bool sameData = true;
    for (int i = 0; i < contents.Length && sameData; i++)
    {
        sameData = contents[i] == file.Contents[i];
    }
    Assert.IsTrue(sameData);
}
It fails at Assert.IsTrue(sameData);
This is because you don't check the return value of responseStream.Read(contents, 0, (int)response.ContentLength). Read doesn't ensure that it will read response.ContentLength bytes; instead it returns the number of bytes actually read. You can use a loop or Stream.CopyTo there.
Something like this:
WebResponse response = request.GetResponse();
MemoryStream m = new MemoryStream();
response.GetResponseStream().CopyTo(m);
byte[] contents = m.ToArray();
As LB already said, you need to continue to call Read() until you have read the entire stream.
Although Stream.CopyTo will copy the entire stream, it does not ensure that you read the number of bytes expected. The following method solves this and raises an IOException if it does not read the length specified...
public static void Copy(Stream input, Stream output, long length)
{
    byte[] bytes = new byte[65536];
    long bytesRead = 0;
    int len = 0;
    while (0 != (len = input.Read(bytes, 0, Math.Min(bytes.Length, (int)Math.Min(int.MaxValue, length - bytesRead)))))
    {
        output.Write(bytes, 0, len);
        bytesRead = bytesRead + len;
    }
    output.Flush();
    if (bytesRead != length)
        throw new IOException();
}

Unzipping a file error

I am using the SharpZipLib open source .NET library from www.icsharpcode.net.
My goal is to unzip an XML file and read it into a DataSet. However, I get the following error reading the file into a DataSet: "Data at the root level is invalid. Line 1, position 1."
I believe what is happening is that the unzipping code is not releasing the file, for the following reasons:
1.) If I unzip the file and exit the application, then when I restart the app I CAN read the unzipped file into a DataSet.
2.) If I read in the XML file right after writing it out (no zipping), it works fine.
3.) If I write the DataSet to XML, zip it up, unzip it, and then attempt to read it back in, I get the exception.
The code below is pretty straightforward. UnZipFile returns the name of the file just unzipped, and right below this call is the call to read it into a DataSet. The variable fileToRead is the full path to the newly unzipped XML file.
string fileToRead = UnZipFile(filepath, DOViewerUploadStoreArea);
ds.ReadXml(fileToRead);
private string UnZipFile(string file, string dirToUnzipTo)
{
    string unzippedfile = "";
    try
    {
        ZipInputStream s = new ZipInputStream(File.OpenRead(file));
        ZipEntry myEntry;
        string tmpEntry = String.Empty;
        while ((myEntry = s.GetNextEntry()) != null)
        {
            string directoryName = dirToUnzipTo;
            string fileName = Path.GetFileName(myEntry.Name);
            string fileWDir = directoryName + fileName;
            unzippedfile = fileWDir;
            FileStream streamWriter = File.Create(fileWDir);
            int size = 4096;
            byte[] data = new byte[4096];
            while (true)
            {
                size = s.Read(data, 0, data.Length);
                if (size > 0) { streamWriter.Write(data, 0, size); }
                else { break; }
            }
            streamWriter.Close();
        }
        s.Close();
    }
    catch (Exception ex)
    {
        LogStatus.WriteErrorLog(ex, "ERROR", "DOViewer.UnZipFile");
    }
    return (unzippedfile);
}
Well, what does the final file look like? (compared to the original). You don't show the zipping code, which might be part of the puzzle, especially as you are partially swallowing the exception.
I would also try ensuring everything IDisposable is Dispose()d, ideally via using; also - in case the problem is with path construction, use Path.Combine. And note that if myEntry.Name contains sub-directories, you will need to create them manually.
Here's what I have - it works for unzipping ICSharpCode.SharpZipLib.dll:
private string UnZipFile(string file, string dirToUnzipTo)
{
    string unzippedfile = "";
    try
    {
        using (Stream inStream = File.OpenRead(file))
        using (ZipInputStream s = new ZipInputStream(inStream))
        {
            ZipEntry myEntry;
            byte[] data = new byte[4096];
            while ((myEntry = s.GetNextEntry()) != null)
            {
                string fileWDir = Path.Combine(dirToUnzipTo, myEntry.Name);
                string dir = Path.GetDirectoryName(fileWDir);
                // note: only supports a single level of sub-directories...
                if (!Directory.Exists(dir)) Directory.CreateDirectory(dir);
                unzippedfile = fileWDir; // note: returns the last file if there are multiple
                using (FileStream outStream = File.Create(fileWDir))
                {
                    int size;
                    while ((size = s.Read(data, 0, data.Length)) > 0)
                    {
                        outStream.Write(data, 0, size);
                    }
                    outStream.Close();
                }
            }
            s.Close();
        }
    }
    catch (Exception ex)
    {
        Console.WriteLine(ex);
    }
    return (unzippedfile);
}
It could also be that the problem is either in the code that writes the zip, or the code that reads the generated file.
I compared the original with the final using TextPad and they are identical.
Also, I rewrote the code to take advantage of using. Here is the code.
My issue seems to be centered around file locking or something. If I unzip the file, quit the application, then start it up again, it will read fine.
private string UnZipFile(string file, string dirToUnzipTo)
{
    string unzippedfile = "";
    try
    {
        using (ZipInputStream s = new ZipInputStream(File.OpenRead(file)))
        {
            ZipEntry theEntry;
            while ((theEntry = s.GetNextEntry()) != null)
            {
                string directoryName = dirToUnzipTo;
                string fileName = Path.GetFileName(theEntry.Name);
                string fileWDir = directoryName + fileName;
                unzippedfile = fileWDir;
                if (fileName != String.Empty)
                {
                    using (FileStream streamWriter = File.Create(fileWDir))
                    {
                        int size = 2048;
                        byte[] data = new byte[2048];
                        while (true)
                        {
                            size = s.Read(data, 0, data.Length);
                            if (size > 0)
                            {
                                streamWriter.Write(data, 0, size);
                            }
                            else
                            {
                                break;
                            }
                        }
                    }
                }
            }
        }
    }
    catch (Exception ex)
    {
        LogStatus.WriteErrorLog(ex, "ERROR", "DOViewer.UnZipFile");
    }
    return (unzippedfile);
}
This is a lot simpler to do with DotNetZip.
using (ZipFile zip = ZipFile.Read(ExistingZipFile))
{
    zip.ExtractAll(TargetDirectory);
}
If you want to decide which files to extract...
using (ZipFile zip = ZipFile.Read(ExistingZipFile))
{
    foreach (ZipEntry e in zip)
    {
        if (wantThisFile(e.FileName)) e.Extract(TargetDirectory);
    }
}
If you would like to overwrite existing files during extraction:
using (ZipFile zip = ZipFile.Read(ExistingZipFile))
{
    zip.ExtractAll(TargetDirectory, ExtractExistingFileAction.OverwriteSilently);
}
Or, to extract password-protected entries:
using (ZipFile zip = ZipFile.Read(ExistingZipFile))
{
    zip.Password = "Shhhh, Very Secret!";
    zip.ExtractAll(TargetDirectory, ExtractExistingFileAction.OverwriteSilently);
}
