I am having trouble uploading large files to my SharePoint 2013/Office 365 site. I am using Visual Studio 2010 and .NET 4.0.
I have tried code from these questions:
SP2010 Client Object Model 3 MB limit - updating maxReceivedMessageSize doesnt get applied
maximum file upload size in sharepoint
Upload large files 100mb+ to Sharepoint 2010 via c# Web Service
How to download/upload files from/to SharePoint 2013 using CSOM?
But nothing is working. So I need a little help. Here is code that I have tried:
1: (I have also tried to use SharePointOnlineCredentials instead of NetworkCredential for this one)
#region 403 forbidden
byte[] content = System.IO.File.ReadAllBytes(fileInfo.FullName);
System.Net.WebClient webclient = new System.Net.WebClient();
System.Uri uri = new Uri(sharePointSite + directory + fileInfo.Name);
webclient.Credentials = new NetworkCredential(user, password.ToString(), sharePointSite + "Documents");
webclient.UploadData(uri, "PUT", content);
#endregion
2:
#region 500 Internal Server Error
using (var fs = new FileStream(fileInfo.FullName, FileMode.Open))
{
Microsoft.SharePoint.Client.File.SaveBinaryDirect(
context,
web.ServerRelativeUrl + "/" + directory,
fs,
true);
}
#endregion
I have gotten smaller file uploads to work with:
#region File upload for smaller files
Folder folder = context.Web.GetFolderByServerRelativeUrl(web.ServerRelativeUrl + directory);
web.Context.Load(folder);
context.ExecuteQuery();
FileCreationInformation fci = new FileCreationInformation();
fci.Content = System.IO.File.ReadAllBytes(fileInfo.FullName);
fciURL = sharePointSite + directory;
fciURL += (fciURL[fciURL.Length - 1] == '/') ? fileInfo.Name : "/" + fileInfo.Name;
fci.Url = fciURL;
fci.Overwrite = true;
Microsoft.SharePoint.Client.FileCollection documentfiles = folder.Files;
context.Load(documentfiles);
context.ExecuteQuery();
Microsoft.SharePoint.Client.File file = documentfiles.Add(fci);
context.Load(file);
context.ExecuteQuery();
#endregion
My Using Statement:
using (Microsoft.SharePoint.Client.ClientContext context = new Microsoft.SharePoint.Client.ClientContext(sharePointSite))
{
//string fciURL = "";
exception = "";
context.Credentials = new Microsoft.SharePoint.Client.SharePointOnlineCredentials(user, password);
Web web = context.Web;
web.Context.Credentials = context.Credentials;
if (!web.IsPropertyAvailable("ServerRelativeUrl"))
{
web.Context.Load(web, w => w.ServerRelativeUrl);
web.Context.ExecuteQuery();
}
//upload large file
}
The solution I went with:
MemoryStream destStream;
using (System.IO.FileStream fInfo = new FileStream(fileInfo.FullName, FileMode.Open))
{
byte[] buffer = new byte[16 * 1024];
byte[] byteArr;
using (MemoryStream ms = new MemoryStream())
{
int read;
while ((read = fInfo.Read(buffer, 0, buffer.Length)) > 0)
{
ms.Write(buffer, 0, read);
}
byteArr = ms.ToArray();
}
destStream = new MemoryStream(byteArr);
Microsoft.SharePoint.Client.File.SaveBinaryDirect(
context,
serverRelativeURL + directory + fileInfo.Name,
destStream,
true);
context.ExecuteQuery();
results = "File Uploaded";
return true;
}
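As an aside, SaveBinaryDirect will also take the FileStream itself, so the file does not have to be buffered into a byte array first. This is only a sketch, assuming the same context, serverRelativeURL, directory and fileInfo variables as above (it is essentially the corrected snippet shown in the answer below):
// Sketch: stream the file straight from disk instead of copying it into a MemoryStream first.
using (var fs = new FileStream(fileInfo.FullName, FileMode.Open, FileAccess.Read))
{
    Microsoft.SharePoint.Client.File.SaveBinaryDirect(
        context,
        serverRelativeURL + directory + fileInfo.Name,
        fs,
        true);
}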
The problem with your code snippet number 2 is that you missed the file name (note that fs.Name would return the full local path, so use fileInfo.Name instead):
using (var fs = new FileStream(fileInfo.FullName, FileMode.Open))
{
Microsoft.SharePoint.Client.File.SaveBinaryDirect(
context,
serverRelativeURL + directory + fileInfo.Name,
^^^^^^^^^^^^^
fs,
true);
}
My research on the subject showed that using FrontPage Remote Procedure Calls (RPC) was the most advantageous way to reliably upload large files.
This is because FrontPage RPC supports file fragmentation, which helps avoid OutOfMemory exceptions caused by Windows needing to allocate the entire file in contiguous memory.
It also supports sending metadata, which is useful in pretty much any file upload application. One major advantage of this is that you can actually specify the correct content type without a user having to log in and change it later (with all the other methods I tried, it would just be set to the default type).
See my answer on the SharePoint Stack Exchange for further detail on implementing FrontPage RPC.
I really need some rubber ducking...
I have a file that is at least 2.3 GiB.
I am currently downloading this file to a temp directory.
But when the download is interrupted (connection error, or Windows crash), I want the user to resume the download where it stopped, not download the whole file all over again.
The code works in the sense that it continues downloading the file, but I can see that the download stream starts from the beginning again. That means the file ends up being (2.3 GiB + the bytes that were downloaded previously), which of course corrupts my file.
I used the following snippet to resume downloading, so I hoped the stream would resume where it stopped:
localStream.Seek(positionInFile, SeekOrigin.Begin);
Any ideas on what I am missing here?
Here is my code.
BlobContainerClient containerClient = new BlobContainerClient(connectionString, container);
var blobClient = containerClient.GetBlobClient(downloadFile);
fullOutputPath = createOutputFilePath(updateFileUri.OriginalString, outputFolder);
downloadFileInfo = new FileInfo(fullOutputPath);
var response = blobClient.Download(cts.Token);
contentLength = response.Value.ContentLength;
if (contentLength.HasValue && contentLength.Value > 0)
{
if (_fileSystemService.FileExists(fullOutputPath))
{
from = downloadFileInfo.Length;
to = contentLength;
if (from == to)
{
//file is already downloaded
//skip it
progress.Report(1);
return;
}
fileMode = FileMode.Open;
positionInFile = downloadFileInfo.Length;
}
using FileStream localStream = _fileSystemService.CreateFile(fullOutputPath, fileMode, FileAccess.Write);
localStream.Seek(positionInFile, SeekOrigin.Begin);
bytesDownloaded = positionInFile;
double dprog = ((double)bytesDownloaded / (double)(contentLength.Value + positionInFile));
do
{
bytesRead = await response.Value.Content.ReadAsync(buffer, 0, buffer.Length, cts.Token);
await localStream.WriteAsync(buffer, 0, bytesRead, cts.Token);
await localStream.FlushAsync();
bytesDownloaded += bytesRead;
dprog = ((double)bytesDownloaded / (double)(contentLength.Value + positionInFile));
progress.Report(dprog);
} while (bytesRead > 0);
}
I did some tests for you. In my case, I used a .txt file to demo your requirement. You can see the .txt file here.
As you can see, at line 151, I made an end mark:
I also created a local file that ends with this end mark to emulate that download is interrupted and we will continue to download from storage:
This is my code for fast demo below:
static void Main(string[] args)
{
string containerName = "container name";
string blobName = ".txt file name";
string storageConnStr = "storage account conn str";
string localFilePath = @"local file path";
var localFileStream = new FileStream(localFilePath, FileMode.Append);
var localFileLength = new FileInfo(localFilePath).Length;
localFileStream.Seek(localFileLength, SeekOrigin.Begin);
var blobServiceClient = new BlobServiceClient(storageConnStr);
var blobClient = blobServiceClient.GetBlobContainerClient(containerName).GetBlobClient(blobName);
var stream = blobClient.Download(new Azure.HttpRange(localFileLength)).Value.Content;
var contentString = new StreamReader(stream).ReadToEnd();
Console.WriteLine(contentString);
localFileStream.Write(Encoding.ASCII.GetBytes(contentString));
localFileStream.Flush();
}
Result:
We only downloaded the content behind the end mark:
Content has been downloaded to local .txt file:
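For a large binary file like the 2.3 GiB one in the question, you would not read the remainder into a string. The same idea with a plain stream copy, reusing the variable names from the question (progress reporting omitted), might look roughly like this:
// Sketch: request only the missing byte range and append it to the partially downloaded file.
// positionInFile is the current length of the local file, as in the question's code.
using (FileStream localStream = new FileStream(fullOutputPath, FileMode.Append, FileAccess.Write))
{
    var rangedResponse = blobClient.Download(new Azure.HttpRange(positionInFile), cancellationToken: cts.Token);
    rangedResponse.Value.Content.CopyTo(localStream);
}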
Please let me know if you have any more questions.
Although I can create the rdlc report in debug mode, I encounter the error "Access to the path 'C:\xxx.xlsx' is denied." After looking on the web for a workaround, I see that lots of the solutions suggest giving the IIS user permission on the C drive. However, it does not seem wise to give permission to the entire drive just for rendering a report. So, how can I change this render location, i.e. to C:\inetpub\MyApplication? On the other hand, I think no settings are needed on the reporting side, i.e. ReportViewer.ProcessingMode = ProcessingMode.Local; or changing the rdlc properties "Build Action" and "Copy to Output Directory"?
Note: I do not want the reports to be rendered on the client's machine, as some of the clients have no rights to write to any location under C:\, and I think generating reports in the IIS location is much better. Isn't it?
So, what is the best solution in this situation?
Update: How can I modify this method so that it just reads the stream as Excel without writing it?
public static void StreamToProcess(Stream readStream)
{
var writeStream = new FileStream(String.Format("{0}\\{1}.{2}", Environment.GetFolderPath(Environment.SpecialFolder.InternetCache), "MyFile", "xlsx"), FileMode.Create, FileAccess.Write);
const int length = 16384;
var buffer = new Byte[length];
var bytesRead = readStream.Read(buffer, 0, length);
while (bytesRead > 0)
{
writeStream.Write(buffer, 0, bytesRead);
bytesRead = readStream.Read(buffer, 0, length);
}
readStream.Close();
writeStream.Close();
Process.Start(Environment.GetFolderPath(Environment.SpecialFolder.InternetCache) + "\\" + "file" + "." + "xlsx");
}
Here is how we render Excel files from an rdlc without saving them to a server folder. Just call the action and the file will download to the user's browser.
public FileStreamResult ExcelReport(int type)
{
var body = _db.MyObjects.Where(x => x.Type == type);
ReportDataSource rdsBody = new ReportDataSource("MyReport", body);
ReportViewer viewer = new ReportViewer
{
ProcessingMode = ProcessingMode.Local
};
viewer.LocalReport.ReportPath = Server.MapPath(@"~\bin\MyReport.rdlc");
viewer.LocalReport.DataSources.Clear();
viewer.LocalReport.DataSources.Add(rdsBody);
viewer.LocalReport.EnableHyperlinks = true;
string filename = string.Format("MyReport_{0}.xls", type);
byte[] bytes = viewer.LocalReport.Render("Excel");
var stream = new MemoryStream(bytes);
return File(stream, "application/ms-excel", filename);
}
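Regarding the update in the question: if you want to keep the shape of the StreamToProcess method but avoid writing anything to disk, a minimal sketch is to buffer the rendered output in memory and hand the bytes back to the caller (how you then deliver them depends on your hosting scenario):
// Sketch: buffer the rendered report in memory instead of writing it to a file.
public static byte[] StreamToBytes(Stream readStream)
{
    using (var memoryStream = new MemoryStream())
    {
        readStream.CopyTo(memoryStream); // Stream.CopyTo is available from .NET 4.0 onwards
        return memoryStream.ToArray();
    }
}
In MVC the result can then be returned much like the action above, e.g. return File(StreamToBytes(readStream), "application/vnd.openxmlformats-officedocument.spreadsheetml.sheet", "MyFile.xlsx");.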
I am trying to create a zip file of any size on the fly. The source of the zip archive is a bunch of URLs, and it could potentially be large (500 4 MB JPGs in the list). I want to be able to do everything inside the request, have the download start right away, and have the zip created and streamed as it is built. It should not have to reside in memory or on disk on the server.
The closest I have come is this:
Note: urls is a KeyValuePair collection of URLs to the file names as they should exist in the created zip.
Response.ClearContent();
Response.ClearHeaders();
Response.ContentType = "application/zip";
Response.AddHeader("Content-Disposition", "attachment; filename=DyanmicZipFile.zip");
using (var memoryStream = new MemoryStream())
{
using (var archive = new ZipArchive(memoryStream, ZipArchiveMode.Create, true))
{
foreach (KeyValuePair<string, string> fileNamePair in urls)
{
var zipEntry = archive.CreateEntry(fileNamePair.Key);
using (var entryStream = zipEntry.Open())
using (WebClient wc = new WebClient())
wc.OpenRead(GetUrlForEntryName(fileNamePair.Key)).CopyTo(entryStream);
//this doesn't work either
//using (var streamWriter = new StreamWriter(entryStream))
// using (WebClient wc = new WebClient())
// streamWriter.Write(wc.OpenRead(GetUrlForEntryName(fileNamePair.Key)));
}
}
memoryStream.WriteTo(Response.OutputStream);
}
HttpContext.Current.ApplicationInstance.CompleteRequest();
This code gives me a zip file, but each JPG file inside the zip is just a text file that says "System.Net.ConnectStream". I have other attempts at this that do build a zip file with the proper files inside, but you can tell by the delay at the beginning that the server is completely building the zip in memory and then blasting it down at the end. It doesn't respond at all when the file count gets near 50. The part in comments gives me the same result. I have tried Ionic.Zip as well.
This is .NET 4.5 on IIS8. I am building with VS2013 and trying to run this on AWS Elastic Beanstalk.
So to answer my own question - here is the solution that works for me:
private void ProcessWithSharpZipLib()
{
byte[] buffer = new byte[4096];
ICSharpCode.SharpZipLib.Zip.ZipOutputStream zipOutputStream = new ICSharpCode.SharpZipLib.Zip.ZipOutputStream(Response.OutputStream);
zipOutputStream.SetLevel(0); //0-9, 9 being the highest level of compression
zipOutputStream.UseZip64 = ICSharpCode.SharpZipLib.Zip.UseZip64.Off;
foreach (KeyValuePair<string, string> fileNamePair in urls)
{
using (WebClient wc = new WebClient())
{
using (Stream wcStream = wc.OpenRead(GetUrlForEntryName(fileNamePair.Key)))
{
ICSharpCode.SharpZipLib.Zip.ZipEntry entry = new ICSharpCode.SharpZipLib.Zip.ZipEntry(ICSharpCode.SharpZipLib.Zip.ZipEntry.CleanName(fileNamePair.Key));
zipOutputStream.PutNextEntry(entry);
int count = wcStream.Read(buffer, 0, buffer.Length);
while (count > 0)
{
zipOutputStream.Write(buffer, 0, count);
count = wcStream.Read(buffer, 0, buffer.Length);
if (!Response.IsClientConnected)
{
break;
}
Response.Flush();
}
}
}
}
zipOutputStream.Close();
Response.Flush();
Response.End();
}
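For completeness, the headers from the question still need to be set before the method writes to Response.OutputStream, and setting Response.BufferOutput = false (as the Flickr example further down does) helps the download start while the archive is still being built. A possible call site, as a sketch with an illustrative file name:
// Sketch of a call site: same headers as in the question, plus unbuffered output.
Response.ClearContent();
Response.ClearHeaders();
Response.BufferOutput = false;
Response.ContentType = "application/zip";
Response.AddHeader("Content-Disposition", "attachment; filename=Photos.zip");
ProcessWithSharpZipLib(); // the method above writes directly to Response.OutputStream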
You're trying to create a zip file and have it stream while it's being created. This turns out to be very difficult.
You need to understand the Zip file format. In particular, notice that a local file entry has header fields that can't be updated (CRC, compressed and uncompressed file sizes) until the entire file has been compressed. So at minimum you'll have to buffer at least one entire file before sending it to the response stream.
So at best you could do something like:
open archive
for each file
create entry
write file to entry
read entry raw data and send to the response output stream
The problem you'll run into is that there's no documented way (and no undocumented way that I'm aware of) to read the raw data. The only read method ends up decompressing the data and throwing away the headers.
There might be some other zip library available that can do what you need. I wouldn't suggest trying to do it with ZipArchive.
There must be a way in the zip component you are using that allows for delayed addition of entries to the archive, i.e. adding them after zip.Save() is called. I am using IonicZip with this delayed technique. The code to download Flickr albums looks like this:
protected void Page_Load(object sender, EventArgs e)
{
if (!IsLoggedIn())
Response.Redirect("/login.aspx");
else
{
// this is dco album id, find out what photosetId it maps to
string albumId = Request.Params["id"];
Album album = findAlbum(new Guid(albumId));
Flickr flickr = FlickrInstance();
PhotosetPhotoCollection photos = flickr.PhotosetsGetPhotos(album.PhotosetId, PhotoSearchExtras.OriginalUrl | PhotoSearchExtras.Large2048Url | PhotoSearchExtras.Large1600Url);
Response.Clear();
Response.BufferOutput = false;
// ascii only
//string archiveName = album.Title + ".zip";
string archiveName = "photos.zip";
Response.ContentType = "application/zip";
Response.AddHeader("content-disposition", "attachment; filename=" + archiveName);
int picCount = 0;
string picNamePref = album.PhotosetId.Substring(album.PhotosetId.Length - 6);
using (ZipFile zip = new ZipFile())
{
zip.CompressionMethod = CompressionMethod.None;
zip.CompressionLevel = Ionic.Zlib.CompressionLevel.None;
zip.ParallelDeflateThreshold = -1;
_map = new Dictionary<string, string>();
foreach (Photo p in photos)
{
string pictureUrl = p.Large2048Url;
if (string.IsNullOrEmpty(pictureUrl))
pictureUrl = p.Large1600Url;
if (string.IsNullOrEmpty(pictureUrl))
pictureUrl = p.LargeUrl;
string pictureName = picNamePref + "_" + (++picCount).ToString("000") + ".jpg";
_map.Add(pictureName, pictureUrl);
zip.AddEntry(pictureName, processPicture);
}
zip.Save(Response.OutputStream);
}
Response.Close();
}
}
private volatile Dictionary<string, string> _map;
protected void processPicture(string pictureName, Stream output)
{
HttpWebRequest request = (HttpWebRequest)HttpWebRequest.Create(_map[pictureName]);
using (HttpWebResponse response = (HttpWebResponse)request.GetResponse())
{
using (Stream input = response.GetResponseStream())
{
byte[] buf = new byte[8092];
int len;
while ( (len = input.Read(buf, 0, buf.Length)) > 0)
output.Write(buf, 0, len);
}
output.Flush();
}
}
This way the code in Page_Load gets to zip.Save() immediately, the download starts (the client is presented with the "Save As" box), and only then are the images pulled from Flickr.
This code works fine, but when I host it on Windows Azure as a cloud service it corrupts my zip file, throwing an "invalid file" message:
private void ProcessWithSharpZipLib(){
byte[] buffer = new byte[4096];
ICSharpCode.SharpZipLib.Zip.ZipOutputStream zipOutputStream = new ICSharpCode.SharpZipLib.Zip.ZipOutputStream(Response.OutputStream);
zipOutputStream.SetLevel(0); //0-9, 9 being the highest level of compression
zipOutputStream.UseZip64 = ICSharpCode.SharpZipLib.Zip.UseZip64.Off;
foreach (KeyValuePair<string, string> fileNamePair in urls)
{
using (WebClient wc = new WebClient())
{
using (Stream wcStream = wc.OpenRead(GetUrlForEntryName(fileNamePair.Key)))
{
ICSharpCode.SharpZipLib.Zip.ZipEntry entry = new ICSharpCode.SharpZipLib.Zip.ZipEntry(ICSharpCode.SharpZipLib.Zip.ZipEntry.CleanName(fileNamePair.Key));
zipOutputStream.PutNextEntry(entry);
int count = wcStream.Read(buffer, 0, buffer.Length);
while (count > 0)
{
zipOutputStream.Write(buffer, 0, count);
count = wcStream.Read(buffer, 0, buffer.Length);
if (!Response.IsClientConnected)
{
break;
}
Response.Flush();
}
}
}
}
zipOutputStream.Close();
Response.Flush();
Response.End();
}
This code works fine on my local machine but not after it is deployed to the server. It corrupts my zip file if it is large.
I need to move files from one FTP to another (currently using FtpWebRequest), both requiring authentication and having different settings (timeout, ASCII, active, etc.). Is downloading the files from one to a local server and then uploading them to the other significantly slower than just copying the files (if that even exists, how would you do it, RenameTo?)? It feels like it should be faster, but I'm not sure; I have no understanding of file copying or downloading.
They are all .txt or .csv files, mostly around 3-10 MB each, so it is quite a bit of data.
You can copy a file from FTP-Server A to FTP-Server B using FXP. Both servers and the client have to support that feature.
Sometimes we need to download or upload files from an FTP server. Here is a good example of FTP operations in C#.
You can use this; it will help you write a C# program that fulfills your requirements.
File Download from FTP Server
public void DownloadFile(string HostURL, string UserName, string Password, string SourceDirectory, string FileName, string LocalDirectory)
{
if (!File.Exists(LocalDirectory + FileName))
{
try
{
FtpWebRequest requestFileDownload = (FtpWebRequest)WebRequest.Create(HostURL + "/" + SourceDirectory + "/" + FileName);
requestFileDownload.Credentials = new NetworkCredential(UserName, Password);
requestFileDownload.Method = WebRequestMethods.Ftp.DownloadFile;
FtpWebResponse responseFileDownload = (FtpWebResponse)requestFileDownload.GetResponse();
Stream responseStream = responseFileDownload.GetResponseStream();
FileStream writeStream = new FileStream(LocalDirectory + FileName, FileMode.Create);
int Length = 2048;
Byte[] buffer = new Byte[Length];
int bytesRead = responseStream.Read(buffer, 0, Length);
while (bytesRead > 0)
{
writeStream.Write(buffer, 0, bytesRead);
bytesRead = responseStream.Read(buffer, 0, Length);
}
responseStream.Close();
writeStream.Close();
requestFileDownload = null;
responseFileDownload = null;
}
catch (Exception ex)
{
throw ex;
}
}
}
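Since the question was about moving files between two FTP servers, the upload half can follow the same pattern. Here is a rough sketch (parameter names mirror DownloadFile above and are illustrative; the same System.Net and System.IO namespaces are assumed):
public void UploadFile(string HostURL, string UserName, string Password, string TargetDirectory, string FileName, string LocalDirectory)
{
    FtpWebRequest requestFileUpload = (FtpWebRequest)WebRequest.Create(HostURL + "/" + TargetDirectory + "/" + FileName);
    requestFileUpload.Credentials = new NetworkCredential(UserName, Password);
    requestFileUpload.Method = WebRequestMethods.Ftp.UploadFile;
    // Copy the local file into the FTP request stream.
    using (FileStream readStream = File.OpenRead(LocalDirectory + FileName))
    using (Stream requestStream = requestFileUpload.GetRequestStream())
    {
        readStream.CopyTo(requestStream);
    }
    using (FtpWebResponse responseFileUpload = (FtpWebResponse)requestFileUpload.GetResponse())
    {
        // responseFileUpload.StatusDescription contains the server's reply, e.g. "226 Transfer complete."
    }
}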
Some Good Examples
Hope it will help you.
I have multiple files on an FTP server. I do not know the names of these files, except that they are all .xml files.
How do I programmatically download these files using .Net's FtpWebRequest?
Thanks.
Most likely you'll have to issue a directory listing command that lists all the files, then go through each one and download it.
Here is some info on getting a directory listing.
http://msdn.microsoft.com/en-us/library/ms229716.aspx
Take a look at the ListDirectory function. It's the equivalent of the NLST command in FTP.
You'll probably want to use an existing library like this one rather than write your own.
FtpWebRequest __request = (FtpWebRequest)FtpWebRequest.Create(__requestLocation);
__request.Method = WebRequestMethods.Ftp.ListDirectory;
var __response = (FtpWebResponse)__request.GetResponse();
using (StreamReader __directoryList = new StreamReader(__response.GetResponseStream())) {
string ___line = __directoryList.ReadLine();
while (___line != null) {
if (!String.IsNullOrEmpty(___line)) { __output.Add(___line); }
___line = __directoryList.ReadLine();
}
}
Getting the target file...
FtpWebRequest __request = null;
FtpWebResponse __response = null;
byte[] __fileBuffer = null;
byte[] __outputBuffer = null;
__request = (FtpWebRequest)FtpWebRequest.Create(__requestLocation);
__request.Method = WebRequestMethods.Ftp.DownloadFile;
__response = (FtpWebResponse)__request.GetResponse();
using (MemoryStream __outputStream = new MemoryStream()) {
using (Stream __responseStream = __response.GetResponseStream()) {
using (BufferedStream ___outputBuffer = new BufferedStream(__responseStream)) {
__fileBuffer = new byte[BLOCKSIZE];
int ___readCount = __responseStream.Read(__fileBuffer, 0, BLOCKSIZE);
while (___readCount > 0) {
__outputStream.Write(__fileBuffer, 0, ___readCount);
___readCount = __responseStream.Read(__fileBuffer, 0, BLOCKSIZE);
}
__outputStream.Position = 0;
__outputBuffer = new byte[__outputStream.Length];
//Truncate Buffer to only the specified bytes. Store into output buffer
Array.Copy(__outputStream.GetBuffer(), __outputBuffer, __outputStream.Length);
}
}
}
try { __response.Close(); } catch { }
__request = null;
__response = null;
return __outputBuffer;
Ripped out of some other code I have, so it probably won't compile and run directly (variables like __requestLocation, __output and BLOCKSIZE are defined elsewhere).
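Putting the two fragments together for the original question (unknown file names, all .xml), a rough end-to-end sketch with FtpWebRequest could look like the following; the server URL, credentials and local folder are placeholders:
// Sketch: list the directory, keep only the .xml names, then download each one.
string ftpRoot = "ftp://ftp.example.com/somefolder/";
NetworkCredential credentials = new NetworkCredential("user", "password");
string localDir = @"C:\downloads\";

FtpWebRequest listRequest = (FtpWebRequest)WebRequest.Create(ftpRoot);
listRequest.Method = WebRequestMethods.Ftp.ListDirectory;
listRequest.Credentials = credentials;

List<string> xmlNames = new List<string>();
using (FtpWebResponse listResponse = (FtpWebResponse)listRequest.GetResponse())
using (StreamReader reader = new StreamReader(listResponse.GetResponseStream()))
{
    string line;
    while ((line = reader.ReadLine()) != null)
    {
        if (line.EndsWith(".xml", StringComparison.OrdinalIgnoreCase))
        {
            xmlNames.Add(line);
        }
    }
}

foreach (string name in xmlNames)
{
    FtpWebRequest downloadRequest = (FtpWebRequest)WebRequest.Create(ftpRoot + Path.GetFileName(name));
    downloadRequest.Method = WebRequestMethods.Ftp.DownloadFile;
    downloadRequest.Credentials = credentials;
    using (FtpWebResponse downloadResponse = (FtpWebResponse)downloadRequest.GetResponse())
    using (Stream ftpStream = downloadResponse.GetResponseStream())
    using (FileStream fileStream = File.Create(Path.Combine(localDir, Path.GetFileName(name))))
    {
        ftpStream.CopyTo(fileStream);
    }
}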
I don't know if FtpWebRequest is a strict requirement. If you can use a third-party component, the following code would accomplish your task:
// create client, connect and log in
Ftp client = new Ftp();
client.Connect("ftp.example.org");
client.Login("username", "password");
// download all files in the current directory which matches the "*.xml" mask
// at the server to the 'c:\data' directory
client.GetFiles("*.xml", @"c:\data", FtpBatchTransferOptions.Default);
client.Disconnect();
The code uses Rebex FTP which can be downloaded here.
Disclaimer: I'm involved in the development of this product.