SharePoint to ASP.NET MVC file download - C#

I am somewhat ashamed to ask this question, but somehow I am missing something.
Scenario
There is a SharePoint instance.
There is a document list in SharePoint with three files.
I have an ASP.NET MVC portal which connects to the SharePoint instance.
A view shows the list of files (3 in my case).
When the user clicks on an item, the file should be downloaded.
Problem
The file is downloaded, but when you try to open it, Word says the downloaded file is corrupt.
I have googled it and tried every variation of the code. The only variation that works is to save the file on the server and then download it to the client, which, as you know, is not feasible.
This is my code. As mentioned above, the SharePoint login, authentication, etc. all work correctly.
fileRef is the SharePoint path of the file.
len is retrieved from SharePoint.
//int len = int.Parse(oListItemDoc.FieldValues["File_x0020_Size"].ToString());
string filePath = fileRef;
ClientContext clientContext = new ClientContext(GetSharePointUrl());
clientContext = SharepointAuthorisation(clientContext);
if (!string.IsNullOrEmpty(filePath))
{
    var cd = new System.Net.Mime.ContentDisposition
    {
        FileName = Path.GetFileName(fileRef),
        // set to false to always prompt the user to download;
        // true lets the browser try to show the file inline
        Inline = true,
    };
    Response.AppendHeader("Content-Disposition", cd.ToString());
    byte[] fileArr = DownloadFile(title, clientContext, filePath, len, extension, "");
    //FileInformation fileInfo = Microsoft.SharePoint.Client.File.OpenBinaryDirect(clientContext, fileRef.ToString());
    //byte[] arr = new byte[len];
    //fileInfo.Stream.Read(arr, 0, arr.Length - 1);
    //return arr;
    Response.AppendHeader("Content-Disposition", cd.ToString());
    //return new FileStreamResult(fileInfo.Stream, "application/octet-stream");// vnd.openxmlformats-officedocument.wordprocessingml.document");
    return File(fileArr, "application/docx", Path.GetFileName(fileRef));
}
else
{
    return null;
}
public byte[] DownloadFile(string title, ClientContext clientContext, string fileRef, int len, string itemExtension, string folderName) // renamed from getdownload to DownloadFile
{
    if (itemExtension == ".pdf")
    {
        //string completePath = Path.Combine(Server.MapPath("~"), folderName);
        //string PdfFile = completePath + "/" + "PDF";
        //if (!Directory.Exists(PdfFile))
        //{
        //    Directory.CreateDirectory(PdfFile);
        //}
        FileInformation fileInfo = Microsoft.SharePoint.Client.File.OpenBinaryDirect(clientContext, fileRef.ToString());
        byte[] arr = new byte[len];
        fileInfo.Stream.Read(arr, 0, arr.Length);
        return arr;
    }
    else
    {
        // note: both branches are currently identical
        FileInformation fileInfo = Microsoft.SharePoint.Client.File.OpenBinaryDirect(clientContext, fileRef.ToString());
        byte[] arr = new byte[len];
        fileInfo.Stream.Read(arr, 0, arr.Length);
        return arr;
    }
}
What am I missing?

This probably occurs because the file size is determined incorrectly. Try removing any dependency on the file size from the DownloadFile method, as demonstrated below:
public static byte[] DownloadFile(ClientContext ctx, string fileUrl)
{
    var fileInfo = Microsoft.SharePoint.Client.File.OpenBinaryDirect(ctx, fileUrl);
    using (var ms = new MemoryStream())
    {
        fileInfo.Stream.CopyTo(ms);
        return ms.ToArray();
    }
}
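The underlying issue is worth making concrete: Stream.Read may return fewer bytes than requested, so a single Read into a len-sized buffer can leave most of the array zeroed, and the list-item size may not match the stream length anyway. A self-contained sketch (ChunkedStream and ReadDemo are hypothetical names, simulating a network stream that delivers data in small pieces):

```csharp
using System;
using System.IO;
using System.Linq;

// Simulates a network stream that returns at most 10 bytes per Read call,
// the way real network streams may return fewer bytes than requested.
// (Hypothetical demo class, not part of the question's code.)
class ChunkedStream : MemoryStream
{
    public ChunkedStream(byte[] data) : base(data) { }
    public override int Read(byte[] buffer, int offset, int count)
        => base.Read(buffer, offset, Math.Min(count, 10));
}

static class ReadDemo
{
    // The question's pattern: one Read call into a pre-sized buffer.
    // Returns how many bytes actually arrived.
    public static int OneShotRead(Stream s, byte[] buffer)
        => s.Read(buffer, 0, buffer.Length);

    // The answer's pattern: CopyTo loops until end of stream.
    public static byte[] ReadFully(Stream s)
    {
        using (var ms = new MemoryStream())
        {
            s.CopyTo(ms);
            return ms.ToArray();
        }
    }
}
```

With a 100-byte ChunkedStream, OneShotRead reports only 10 bytes while ReadFully returns all 100; the remaining 90 bytes of the question's arr stay zeroed, which is exactly how a .docx ends up "corrupt".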

Related

How to add existing file (image/mp4/pdf) to a ziparchive? - .Net Framework 4.6.2

I am trying to implement a "Download All" button that will zip up a selection of files from the server and return them as a zip file download. With the code below, the zip file is created and the expected files are inside with the expected filenames, but the contents of the zipped files appear to be corrupted.
public ActionResult DownloadAll(Guid id)
{
    var assets = db.InviteAssets.Include(i => i.AssetPages).Where(w => w.InviteID == id).ToList();
    var cd = new System.Net.Mime.ContentDisposition
    {
        // for example foo.bak
        FileName = "allAssets.zip",
        // always prompt the user for downloading, set to true if you want
        // the browser to try to show the file inline
        Inline = false,
    };
    Response.AppendHeader("Content-Disposition", cd.ToString());
    using (var memoryStream = new MemoryStream())
    {
        using (var archive = new ZipArchive(memoryStream, ZipArchiveMode.Create, true))
        {
            foreach (var asset in assets)
            {
                string path, extension, name;
                if (asset.AssetType != AssetType.PDF)
                {
                    path = asset.AssetPages.First(f => f.PageNumber == 1).FilePath;
                }
                else
                {
                    path = string.Format("/Content/Assets/asset_{0}.pdf", asset.ID);
                }
                extension = path.Substring(path.IndexOf('.'));
                name = "asset" + asset.Order + extension;
                var file = archive.CreateEntry(name);
                using (var streamWriter = new StreamWriter(file.Open()))
                {
                    using (var fileStream = System.IO.File.Open(Server.MapPath("~" + path), FileMode.Open))
                    {
                        int filelength = (int)fileStream.Length;
                        var filedata = new byte[fileStream.Length];
                        streamWriter.Write(fileStream.Read(filedata, 0, filelength));
                    }
                }
            }
        }
        return File(memoryStream.ToArray(), "application/json", "allAssets.zip");
    }
}
I'm thinking my issue is therefore with this section:
using (var streamWriter = new StreamWriter(file.Open()))
{
    using (var fileStream = System.IO.File.Open(Server.MapPath("~" + path), FileMode.Open))
    {
        int filelength = (int)fileStream.Length;
        var filedata = new byte[fileStream.Length];
        streamWriter.Write(fileStream.Read(filedata, 0, filelength));
    }
}
I keep reading examples that use a method archive.CreateEntryFromFile(filePath, fileName), but no such method is recognised. Has this been deprecated, or does it require a higher version of the .NET Framework?
Thanks in advance.
The problem is here:
streamWriter.Write(fileStream.Read(filedata, 0, filelength));
You're reading the file contents into filedata, but at the same time you're writing the return value of Read into the archive, i.e. a single int. You need to read and write separately, and since StreamWriter is meant for text, write the raw bytes through its underlying stream:
fileStream.Read(filedata, 0, filelength);
streamWriter.BaseStream.Write(filedata, 0, filelength);
Or you can use the CreateEntryFromFile extension method from the ZipFileExtensions class (in the System.IO.Compression.FileSystem assembly).
I discovered that the reason I couldn't see the CreateEntryFromFile method was because I had not included a reference to System.IO.Compression.FileSystem. Once I added that, I could use CreateEntryFromFile which worked fine.
So now I have: archive.CreateEntryFromFile(Server.MapPath("~" + path), name);
Instead of:
var file = archive.CreateEntry(name);
using (var streamWriter = new StreamWriter(file.Open()))
{
    using (var fileStream = System.IO.File.Open(Server.MapPath("~" + path), FileMode.Open))
    {
        int filelength = (int)fileStream.Length;
        var filedata = new byte[fileStream.Length];
        fileStream.Read(filedata, 0, filelength);
        streamWriter.Write(filedata);
    }
}
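As a general pattern, raw bytes belong on the entry's own stream rather than on a text-oriented StreamWriter. A minimal self-contained sketch (ZipHelper and ZipBytes are illustrative names, not from the original post):

```csharp
using System.IO;
using System.IO.Compression;

static class ZipHelper
{
    // Writes raw bytes into a zip entry via the entry's own stream,
    // avoiding the text-oriented StreamWriter entirely, and returns the
    // finished archive as a byte array.
    public static byte[] ZipBytes(string entryName, byte[] contents)
    {
        using (var memoryStream = new MemoryStream())
        {
            // leaveOpen: true so memoryStream survives the archive's Dispose,
            // which is what flushes the zip's central directory.
            using (var archive = new ZipArchive(memoryStream, ZipArchiveMode.Create, true))
            {
                var entry = archive.CreateEntry(entryName);
                using (var entryStream = entry.Open())
                {
                    entryStream.Write(contents, 0, contents.Length);
                }
            }
            return memoryStream.ToArray();
        }
    }
}
```

Reopening the resulting bytes with ZipArchiveMode.Read should yield the original contents byte for byte.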

Looking for workaround due to FileStream.Create UnauthorizedAccessException

I am using FileStream.Create to upload a .csv file onto a server and then read it into a SQL database. Once it is read in, I just delete the file from the folder it was written to. The goal is simply to get the file into the database. This runs fine locally, but I cannot get write access on the new server, so I get an UnauthorizedAccessException. I don't think it is necessary to upload the file to the server just to read it into the SQL table, but I am having trouble adjusting the code.
[HttpPost]
public ActionResult UploadValidationTable(HttpPostedFileBase csvFile)
{
    var inputFileDescription = new CsvFileDescription
    {
        SeparatorChar = ',',
        FirstLineHasColumnNames = true
    };
    var cc = new CsvContext();
    var filePath = uploadFile(csvFile.InputStream);
    var model = cc.Read<Credit>(filePath, inputFileDescription);
    try
    {
        var entity = new Entities();
        foreach (var item in model)
        {
            var tc = new TemporaryCsvUpload
            {
                Id = item.Id,
                Amount = item.Amount,
                Date = item.Date,
                Number = item.Number,
                ReasonId = item.ReasonId,
                Notes = item.Notes
            };
            entity.TemporaryCsvUploads.Add(tc);
        }
        entity.SaveChanges();
        System.IO.File.Delete(filePath);
Here is the uploadFile method:
private string uploadFile(Stream serverFileStream)
{
    const string directory = "~/Content/CSVUploads";
    var directoryExists = Directory.Exists(Server.MapPath(directory));
    if (!directoryExists)
    {
        Directory.CreateDirectory(Server.MapPath(directory));
    }
    var targetFolder = Server.MapPath(directory);
    var filename = Path.Combine(targetFolder, Guid.NewGuid() + ".csv");
    try
    {
        const int length = 256;
        var buffer = new byte[length];
        // write the required bytes
        using (var fs = new FileStream(filename, FileMode.Create))
        {
            int bytesRead;
            do
            {
                bytesRead = serverFileStream.Read(buffer, 0, length);
                fs.Write(buffer, 0, bytesRead);
            } while (bytesRead == length);
        }
        serverFileStream.Dispose();
        return filename;
    }
    catch (Exception)
    {
        return string.Empty;
    }
}
To sum it up, I am uploading a .csv file to a temporary location, reading it into an object, reading it into a database, then deleting the .csv file out of the temporary location. I am using Linq2Csv to create the object. Can I do this without uploading the file to the server (because I can't get write access)?
According to http://www.codeproject.com/Articles/25133/LINQ-to-CSV-library, you can read from a StreamReader:
Read<T>(StreamReader stream)
Read<T>(StreamReader stream, CsvFileDescription fileDescription)
You can probably wrap the uploaded stream in a StreamReader (or build the content with a StringBuilder) instead of writing a file at all (see "Write StringBuilder to Stream" and "How to take a StringBuilder and convert it to a StreamReader?"), and then pass that reader to your CsvContext.
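To illustrate the no-temp-file approach: the posted stream can be wrapped in a StreamReader and parsed entirely in memory, so no write permission is needed on the server. The sketch below uses a hand-rolled row splitter as a stand-in; with LINQ to CSV you would instead pass the same StreamReader to cc.Read<Credit>(reader, inputFileDescription). CsvInMemory and ReadRows are illustrative names:

```csharp
using System.Collections.Generic;
using System.IO;

static class CsvInMemory
{
    // Parses a CSV stream without touching disk. The naive Split(',') does
    // not handle quoted fields; a real parser (e.g. LINQ to CSV) does, but
    // the point here is only that the input never becomes a file.
    public static List<string[]> ReadRows(Stream input, bool firstLineHasColumnNames)
    {
        var rows = new List<string[]>();
        using (var reader = new StreamReader(input))
        {
            string line;
            bool first = true;
            while ((line = reader.ReadLine()) != null)
            {
                if (first && firstLineHasColumnNames) { first = false; continue; }
                first = false;
                rows.Add(line.Split(','));
            }
        }
        return rows;
    }
}
```

In the controller this would be called with csvFile.InputStream, replacing the uploadFile/File.Delete round trip entirely.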

Epplus Use Template on Web Project

I have been working with EPPlus on .NET desktop projects (C#), using templates like this:
var package = new ExcelPackage(new FileInfo("C:\\Templates\\FormatoReporteSamsung.xlsx"))
But now I am working with a .NET web project (C#), and I don't know how to refer to a template that exists as a web resource with a URI like this:
http://myownweb:29200/Content/excelTemplates/Formato.xlsx
In the end I pass the Excel template as a stream using this code:
using (var package = new ExcelPackage(new MemoryStream(GetBytesTemplate(FullyQualifiedApplicationPath + "Content/excelTemplates/Format.xlsx"))))
{
    // Write data to Excel
    // Read the file as a byte array to return in the response
    Response.Clear();
    Response.ContentType = "application/xlsx";
    Response.AddHeader("content-disposition", "attachment; filename=" + "myFileName" + ".xlsx");
    Response.BinaryWrite(package.GetAsByteArray());
    Response.End();
}
To read the Excel file as bytes I use the approach from:
Error "This stream does not support seek operations" in C#
private byte[] GetBytesTemplate(string url)
{
    HttpWebRequest myReq = (HttpWebRequest)WebRequest.Create(url);
    WebResponse myResp = myReq.GetResponse();
    byte[] b = null;
    using (Stream stream = myResp.GetResponseStream())
    using (MemoryStream ms = new MemoryStream())
    {
        int count = 0;
        do
        {
            byte[] buf = new byte[1024];
            count = stream.Read(buf, 0, 1024);
            ms.Write(buf, 0, count);
        } while (stream.CanRead && count > 0);
        b = ms.ToArray();
    }
    return b;
}
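As an aside, the manual do/while loop above can be replaced with Stream.CopyTo (available since .NET 4), which performs the same chunked copy on any readable stream, seekable or not. A small sketch (StreamUtil is an illustrative name):

```csharp
using System.IO;

static class StreamUtil
{
    // Buffers any readable stream (seekable or not) into a byte array.
    // Equivalent to the manual read loop above; CopyTo handles the
    // chunking internally.
    public static byte[] ToByteArray(Stream stream)
    {
        using (var ms = new MemoryStream())
        {
            stream.CopyTo(ms);
            return ms.ToArray();
        }
    }
}
```

Inside GetBytesTemplate, byte[] b = StreamUtil.ToByteArray(stream); would then replace the loop.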
And to get the base URL of the website I use the approach from
http://devio.wordpress.com/2009/10/19/get-absolut-url-of-asp-net-application/
public string FullyQualifiedApplicationPath
{
    get
    {
        // Return variable declaration
        string appPath = null;
        // Getting the current context of the HTTP request
        HttpContext context = HttpContext.Current;
        // Checking the current context content
        if (context != null)
        {
            // Formatting the fully qualified website url/name
            appPath = string.Format("{0}://{1}{2}{3}",
                context.Request.Url.Scheme,
                context.Request.Url.Host,
                context.Request.Url.Port == 80
                    ? string.Empty : ":" + context.Request.Url.Port,
                context.Request.ApplicationPath);
        }
        if (!appPath.EndsWith("/"))
            appPath += "/";
        return appPath;
    }
}

How to zip Excel file before it is downloading using Open XML?

I am using the code below to create an Excel file and show the user a prompt asking whether to open, save, or cancel.
I can download the file successfully, but I need to zip it before the prompt appears, so that the user is offered the zip file with the open/save/cancel options instead.
How can I do that without any third-party library, using only Microsoft's own GZip/compression classes?
The code below implements the export-to-Excel functionality:
public ActionResult ExportToExcel()
{
    byte[] file;
    string targetFilename = string.Format("{0}-{1}.xlsx", "Generated", "excel");
    DataTable dt = common.CreateExcelFile.ListToDataTable(GetSearchDraftPRResults());
    common.CreateExcelFile excelFileForExport = new CreateExcelFile();
    file = excelFileForExport.CreateExcelDocumentAsStream(dt, targetFilename);
    Response.Buffer = true;
    return File(file, "application/vnd.ms-excel", targetFilename);
}
Would anyone please help with how to zip the file before it is shown to the user? Many thanks in advance.
Modified Code:
public ActionResult ExportToExcel()
{
    byte[] file;
    string targetFilename = string.Format("{0}-{1}.xlsx", "Generated", "excel");
    DataTable dt = common.CreateExcelFile.ListToDataTable(GetSearchDraftPRResults());
    common.CreateExcelFile excelFileForExport = new CreateExcelFile();
    file = excelFileForExport.CreateExcelDocumentAsStream(dt, targetFilename);
    Response.Buffer = true;
    byte[] zipFile = Compress(file);
    return File(file, "application/vnd.ms-excel", targetFilename); // note: this still returns the uncompressed file, not zipFile
}
public byte[] Compress(FileInfo fileToCompress)
{
    using (FileStream originalFileStream = fileToCompress.OpenRead())
    {
        if ((System.IO.File.GetAttributes(fileToCompress.FullName) & FileAttributes.Hidden) != FileAttributes.Hidden & fileToCompress.Extension != ".gz")
        {
            using (FileStream compressedFileStream = System.IO.File.Create(fileToCompress.FullName + ".gz"))
            {
                using (GZipStream compressionStream = new GZipStream(compressedFileStream, CompressionMode.Compress))
                {
                    originalFileStream.CopyTo(compressionStream);
                }
            }
        }
        MemoryStream mem = new MemoryStream();
        CopyStream(originalFileStream, mem);
        return mem.ToArray();
    }
}

public static void CopyStream(Stream input, Stream output)
{
    byte[] b = new byte[32768];
    int r;
    while ((r = input.Read(b, 0, b.Length)) > 0)
        output.Write(b, 0, r);
}
Check out the SharpZipLib library. It works very well and is free to use even in commercial applications.
You can use JZlib from JCraft. It is very easy to use; the compression method declaration can look like this (the body depends on what you are doing, but you can find a working example in the JZlib samples):
public byte[] compress(byte[] buf, int start, int[] len) {
    ...
}
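Since the question asks for a solution without third-party libraries: .NET 4.5 already ships System.IO.Compression. A minimal in-memory sketch with GZipStream (GZipUtil is an illustrative name; note that a .gz file is not a .zip, so to offer a real .zip archive you would use ZipArchive and return it with content type application/zip):

```csharp
using System.IO;
using System.IO.Compression;

static class GZipUtil
{
    // Compresses a byte array entirely in memory with GZipStream,
    // unlike the FileInfo-based Compress method above, which needs
    // write access to create a .gz file on disk.
    public static byte[] Compress(byte[] data)
    {
        using (var output = new MemoryStream())
        {
            // Dispose the GZipStream before reading output, so the
            // gzip footer is flushed.
            using (var gzip = new GZipStream(output, CompressionMode.Compress))
            {
                gzip.Write(data, 0, data.Length);
            }
            return output.ToArray();
        }
    }

    public static byte[] Decompress(byte[] gzData)
    {
        using (var input = new GZipStream(new MemoryStream(gzData), CompressionMode.Decompress))
        using (var output = new MemoryStream())
        {
            input.CopyTo(output);
            return output.ToArray();
        }
    }
}
```

In the modified action above, the controller could then return something like File(GZipUtil.Compress(file), "application/gzip", targetFilename + ".gz").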

Creating a dynamic zip of a bunch of URLs on the fly

I am trying to create a zip file of any size on the fly. The source of the zip archive is a bunch of URLs, and it could be potentially large (say, 500 4 MB JPGs in the list). I want to do everything inside the request, have the download start right away, and have the zip created and streamed as it is built. It should not have to reside in memory or on disk on the server.
The closest I have come is this:
Note: urls is a collection of KeyValuePair<string, string> items mapping file names (as they should exist in the created zip) to URLs
Response.ClearContent();
Response.ClearHeaders();
Response.ContentType = "application/zip";
Response.AddHeader("Content-Disposition", "attachment; filename=DyanmicZipFile.zip");
using (var memoryStream = new MemoryStream())
{
    using (var archive = new ZipArchive(memoryStream, ZipArchiveMode.Create, true))
    {
        foreach (KeyValuePair<string, string> fileNamePair in urls)
        {
            var zipEntry = archive.CreateEntry(fileNamePair.Key);
            using (var entryStream = zipEntry.Open())
            using (WebClient wc = new WebClient())
                wc.OpenRead(GetUrlForEntryName(fileNamePair.Key)).CopyTo(entryStream);
            // this doesn't work either
            //using (var streamWriter = new StreamWriter(entryStream))
            //using (WebClient wc = new WebClient())
            //    streamWriter.Write(wc.OpenRead(GetUrlForEntryName(fileNamePair.Key)));
        }
    }
    memoryStream.WriteTo(Response.OutputStream);
}
HttpContext.Current.ApplicationInstance.CompleteRequest();
This code gives me a zip file, but each JPG inside is just a text file that says "System.Net.ConnectStream". I have other attempts that do build a zip with the proper files inside, but the delay at the start makes it clear the server is building the entire zip in memory and then blasting it down at the end; it stops responding entirely when the file count gets near 50. The commented-out part gives the same result, and I have tried Ionic.Zip as well.
This is .NET 4.5 on IIS 8. I am building with VS2013 and trying to run this on AWS Elastic Beanstalk.
So to answer my own question - here is the solution that works for me:
private void ProcessWithSharpZipLib()
{
    byte[] buffer = new byte[4096];
    ICSharpCode.SharpZipLib.Zip.ZipOutputStream zipOutputStream = new ICSharpCode.SharpZipLib.Zip.ZipOutputStream(Response.OutputStream);
    zipOutputStream.SetLevel(0); // 0-9, 9 being the highest level of compression
    zipOutputStream.UseZip64 = ICSharpCode.SharpZipLib.Zip.UseZip64.Off;
    foreach (KeyValuePair<string, string> fileNamePair in urls)
    {
        using (WebClient wc = new WebClient())
        {
            using (Stream wcStream = wc.OpenRead(GetUrlForEntryName(fileNamePair.Key)))
            {
                ICSharpCode.SharpZipLib.Zip.ZipEntry entry = new ICSharpCode.SharpZipLib.Zip.ZipEntry(ICSharpCode.SharpZipLib.Zip.ZipEntry.CleanName(fileNamePair.Key));
                zipOutputStream.PutNextEntry(entry);
                int count = wcStream.Read(buffer, 0, buffer.Length);
                while (count > 0)
                {
                    zipOutputStream.Write(buffer, 0, count);
                    count = wcStream.Read(buffer, 0, buffer.Length);
                    if (!Response.IsClientConnected)
                    {
                        break;
                    }
                    Response.Flush();
                }
            }
        }
    }
    zipOutputStream.Close();
    Response.Flush();
    Response.End();
}
You're trying to create a zip file and have it stream while it's being created. This turns out to be very difficult.
You need to understand the Zip file format. In particular, notice that a local file entry has header fields that can't be updated (CRC, compressed and uncompressed file sizes) until the entire file has been compressed. So at minimum you'll have to buffer at least one entire file before sending it to the response stream.
So at best you could do something like:
open archive
for each file
    create entry
    write file to entry
    read entry raw data and send to the response output stream
The problem you'll run into is that there's no documented way (and no undocumented way that I'm aware of) to read the raw data. The only read method ends up decompressing the data and throwing away the headers.
There might be some other zip library available that can do what you need. I wouldn't suggest trying to do it with ZipArchive.
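To make the header constraint concrete, here is a small sketch (ZipHeaderDemo and its methods are illustrative names) that builds a single-entry zip in memory so the local file header can be inspected: the signature, CRC-32, and size fields all precede the file data, which is why a streaming writer must either buffer each entry or emit a trailing data descriptor.

```csharp
using System.IO;
using System.IO.Compression;

static class ZipHeaderDemo
{
    // Builds a one-entry zip entirely in memory and returns its raw bytes,
    // so the local file header the answer describes can be inspected.
    public static byte[] BuildZip(byte[] contents)
    {
        using (var ms = new MemoryStream())
        {
            using (var archive = new ZipArchive(ms, ZipArchiveMode.Create, true))
            {
                var entry = archive.CreateEntry("a.bin");
                using (var s = entry.Open())
                    s.Write(contents, 0, contents.Length);
            }
            return ms.ToArray();
        }
    }

    // Per the ZIP application note, the archive starts with the local file
    // header signature "PK\x03\x04"; CRC-32 sits at offset 14 and the
    // compressed/uncompressed sizes at offsets 18/22, all before the data.
    public static bool HasLocalFileHeaderSignature(byte[] zip)
        => zip.Length > 30 && zip[0] == 0x50 && zip[1] == 0x4B
           && zip[2] == 0x03 && zip[3] == 0x04;
}
```

With a seekable target (the MemoryStream here), ZipArchive can seek back and patch those fields after writing the data; Response.OutputStream is not seekable, which is the crux of the problem.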
There must be a way in the zip component you are using to delay the addition of entry content until after zip.Save() is called. I am using Ionic's DotNetZip with that delayed technique; the code to download Flickr albums looks like this:
protected void Page_Load(object sender, EventArgs e)
{
    if (!IsLoggedIn())
        Response.Redirect("/login.aspx");
    else
    {
        // this is dco album id, find out what photosetId it maps to
        string albumId = Request.Params["id"];
        Album album = findAlbum(new Guid(albumId));
        Flickr flickr = FlickrInstance();
        PhotosetPhotoCollection photos = flickr.PhotosetsGetPhotos(album.PhotosetId, PhotoSearchExtras.OriginalUrl | PhotoSearchExtras.Large2048Url | PhotoSearchExtras.Large1600Url);
        Response.Clear();
        Response.BufferOutput = false;
        // ascii only
        //string archiveName = album.Title + ".zip";
        string archiveName = "photos.zip";
        Response.ContentType = "application/zip";
        Response.AddHeader("content-disposition", "attachment; filename=" + archiveName);
        int picCount = 0;
        string picNamePref = album.PhotosetId.Substring(album.PhotosetId.Length - 6);
        using (ZipFile zip = new ZipFile())
        {
            zip.CompressionMethod = CompressionMethod.None;
            zip.CompressionLevel = Ionic.Zlib.CompressionLevel.None;
            zip.ParallelDeflateThreshold = -1;
            _map = new Dictionary<string, string>();
            foreach (Photo p in photos)
            {
                string pictureUrl = p.Large2048Url;
                if (string.IsNullOrEmpty(pictureUrl))
                    pictureUrl = p.Large1600Url;
                if (string.IsNullOrEmpty(pictureUrl))
                    pictureUrl = p.LargeUrl;
                string pictureName = picNamePref + "_" + (++picCount).ToString("000") + ".jpg";
                _map.Add(pictureName, pictureUrl);
                zip.AddEntry(pictureName, processPicture);
            }
            zip.Save(Response.OutputStream);
        }
        Response.Close();
    }
}

private volatile Dictionary<string, string> _map;

protected void processPicture(string pictureName, Stream output)
{
    HttpWebRequest request = (HttpWebRequest)HttpWebRequest.Create(_map[pictureName]);
    using (HttpWebResponse response = (HttpWebResponse)request.GetResponse())
    {
        using (Stream input = response.GetResponseStream())
        {
            byte[] buf = new byte[8092];
            int len;
            while ((len = input.Read(buf, 0, buf.Length)) > 0)
                output.Write(buf, 0, len);
        }
        output.Flush();
    }
}
This way the code in Page_Load reaches zip.Save() immediately, the download starts (the client is presented with the "Save As" box), and only then are the images pulled from Flickr.
This code works fine locally, but when I host it on Windows Azure as a cloud service it corrupts my zip file, and opening it reports an invalid file. The method is the same ProcessWithSharpZipLib shown above; it only corrupts the zip after deployment to the server, and only when the zip is large.
