While trying to upload files to SharePoint Online remotely via SharePointClient upload, I am hitting a 2 MB file size limit. From my searches it seems that people have overcome this limit using PowerShell, but is there a way to overcome it using the native SharePointClient package in .NET C#? Here is my existing code sample:
using (var ctx = new Microsoft.SharePoint.Client.ClientContext(httpUrl))
{
    ctx.Credentials = new Microsoft.SharePoint.Client.SharePointOnlineCredentials(username, passWord);
    try
    {
        string uploadFilename = string.Format("{0}.{1}", string.IsNullOrWhiteSpace(filename) ? submissionId : filename, formatExtension);
        logger.Info(string.Format("SharePoint uploading: {0}", uploadFilename));
        new SharePointClient().Upload(ctx, sharePointDirectoryPath, uploadFilename, formatData);
    }
    catch (Exception ex)
    {
        logger.Error(string.Format("SharePoint upload failed: {0}", ex));
        throw;
    }
}
I have read from the following site that you can use the ContentStream property; I am just not sure how that maps to SharePointClient (if at all):
https://msdn.microsoft.com/en-us/pnp_articles/upload-large-files-sample-app-for-sharepoint
UPDATE:
Per the suggested solution I now have:
public void UploadDocumentContentStream(ClientContext ctx, string libraryName, string filePath)
{
    Web web = ctx.Web;
    using (FileStream fs = new FileStream(filePath, FileMode.Open))
    {
        FileCreationInformation flciNewFile = new FileCreationInformation();
        // This is the key difference for the first case - using the ContentStream property
        flciNewFile.ContentStream = fs;
        flciNewFile.Url = System.IO.Path.GetFileName(filePath);
        flciNewFile.Overwrite = true;
        List docs = web.Lists.GetByTitle(libraryName);
        Microsoft.SharePoint.Client.File uploadFile = docs.RootFolder.Files.Add(flciNewFile);
        ctx.Load(uploadFile);
        ctx.ExecuteQuery();
    }
}
Still not quite working, but I will update again when it is successful. The current error is:
Could not find file 'F:\approot\12-09-2017.zip'.
FINALLY
I am using files from Amazon S3, so the solution was to take my byte data and stream it to the call:
public void UploadDocumentContentStream(ClientContext ctx, string libraryName, string filename, byte[] data)
{
    Web web = ctx.Web;
    FileCreationInformation flciNewFile = new FileCreationInformation();
    flciNewFile.ContentStream = new MemoryStream(data);
    flciNewFile.Url = filename;
    flciNewFile.Overwrite = true;
    List docs = web.Lists.GetByTitle(libraryName);
    Microsoft.SharePoint.Client.File uploadFile = docs.RootFolder.Files.Add(flciNewFile);
    ctx.Load(uploadFile);
    ctx.ExecuteQuery();
}
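For context, a sketch of how the byte data might be pulled from S3 before calling the method above; the bucket and key names are placeholders, and this assumes the AWS SDK for .NET (AWSSDK.S3):

using System.IO;
using System.Threading.Tasks;
using Amazon.S3;
using Amazon.S3.Model;

// Hypothetical helper: download an S3 object into a byte array.
public static async Task<byte[]> GetS3ObjectBytesAsync(IAmazonS3 s3, string bucket, string key)
{
    using (GetObjectResponse response = await s3.GetObjectAsync(bucket, key))
    using (var ms = new MemoryStream())
    {
        // Buffer the S3 response stream so it can be handed to UploadDocumentContentStream.
        await response.ResponseStream.CopyToAsync(ms);
        return ms.ToArray();
    }
}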
You can use FileCreationInformation to create a new file and provide the contents via a FileStream. You can then add the file to the destination library. This should help you get around the 2 MB limit you are encountering with the upload method you are using. Example below:
FileCreationInformation newFile = new FileCreationInformation
{
    Url = fileName,
    Overwrite = false,
    ContentStream = new FileStream(fileSourcePath, FileMode.Open)
};
var createdFile = list.RootFolder.Files.Add(newFile);
ctx.Load(createdFile);
ctx.ExecuteQuery();
In the example, the destination library is list; you will need to get a reference to it first, as in the sketch below.
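For completeness, a minimal sketch of getting that reference with CSOM ("Documents" is a placeholder library title):

// Resolve the destination library by title ("Documents" is a placeholder).
List list = ctx.Web.Lists.GetByTitle("Documents");
ctx.Load(list, l => l.RootFolder);
ctx.ExecuteQuery();
// list.RootFolder.Files.Add(newFile) can then be called as in the example above.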
My app requires copying a file using SFTP from a location directly to Azure storage.
Our app uses C# with .NET 4.6, and our WinSCP version is 5.21.1.
My old code works using the Session.GetFileToDirectory() method, but the problem is that it needs to store the file in a temp folder inside our hosting.
using (Session session = new Session())
{
    session.Open(sessionOptions);
    TransferOptions transferOptions = new TransferOptions();
    transferOptions.TransferMode = TransferMode.Binary;
    var transfer = session.GetFileToDirectory(FilePath, fullPath);
    using (Stream stream = File.OpenRead(transfer.Destination))
    {
        UploadToAzure(stream, Filename, Foldername);
    }
}
As we plan to use Azure storage exclusively, I changed my code to this:
using (Session session = new Session())
{
    session.Open(sessionOptions);
    TransferOptions transferOptions = new TransferOptions();
    transferOptions.TransferMode = TransferMode.Binary;
    using (Stream stream = session.GetFile(FilePath, transferOptions))
    {
        UploadToAzure(stream, Filename, Foldername);
    }
}
Here is the library method that uploads the file to Azure using a Stream.
This code works fine with my old code, which still saves to the temp folder before sending to Azure.
public static string UploadToAzure(Stream attachment, string Filename, string Foldername)
{
    System.Net.ServicePointManager.SecurityProtocol = System.Net.SecurityProtocolType.Tls12;
    var connectionString = $"{ConfigurationManager.AppSettings["AzureFileShareConnectionString"]}";
    string shareName = $"{ConfigurationManager.AppSettings["AzureFileShareFolderName"]}";
    string dirName = $"files\\{Foldername}";
    string fileName = Filename;
    try
    {
        ShareClient share = new ShareClient(connectionString, shareName);
        share.CreateIfNotExists();
        ShareDirectoryClient directory = share.GetDirectoryClient(dirName);
        directory.CreateIfNotExists();
        // Get a reference to a file and upload it
        ShareFileClient file = directory.GetFileClient(fileName);
        file.Create(attachment.Length);
        file.UploadRange(new HttpRange(0, attachment.Length), attachment);
    }
    catch (Exception e)
    {
        return $"Uploaded {Filename} failed : {e.ToString()}";
    }
    return $"{Filename} Uploaded";
}
But my new code does not work; it fails with the error message:
'((WinSCP.PipeStream)stream).Length' threw an exception of type 'System.NotSupportedException'.
The Stream returned by WinSCP Session.GetFile does not implement the Stream.Length property, because WinSCP cannot guarantee that the size of the file is fixed: the remote file might be changing while you are downloading it. Not to mention ASCII transfer mode, in which the file is converted while being transferred, with an unpredictable impact on the final size.
You use the size (Stream.Length) in two places:
When creating the file:
file.Create(attachment.Length);
The parameter of ShareFileClient.Create is maxSize, so it does not have to be the exact size. You can possibly just put an arbitrarily large number here.
Or if you prefer (and know that the file is not changing), read the current size of the remote file using Session.GetFileInfo and RemoteFileInfo.Length:
file.Create(session.GetFileInfo(FilePath).Length);
When uploading the contents:
file.UploadRange(new HttpRange(0, attachment.Length), attachment);
The above can be replaced with simple ShareFileClient.Upload:
file.Upload(attachment);
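Putting the two suggestions together, a minimal sketch of the adjusted transfer, assuming session, FilePath, transferOptions, and the ShareFileClient file are set up as in the code above:

using (Stream stream = session.GetFile(FilePath, transferOptions))
{
    // Size the Azure file from the remote file's metadata instead of Stream.Length,
    // which WinSCP's PipeStream does not implement.
    file.Create(session.GetFileInfo(FilePath).Length);

    // ShareFileClient.Upload reads the stream to its end, so no length is needed here.
    file.Upload(stream);
}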
We are using parquet.net to write parquet files. I've set up a simple schema containing 3 columns, and 2 rows:
// Set up the file structure
var UserKey = new Parquet.Data.DataColumn(
    new DataField<Int32>("UserKey"),
    new Int32[] { 1234, 12345 }
);
var AADID = new Parquet.Data.DataColumn(
    new DataField<string>("AADID"),
    new string[] { Guid.NewGuid().ToString(), Guid.NewGuid().ToString() }
);
var UserLocale = new Parquet.Data.DataColumn(
    new DataField<string>("UserLocale"),
    new string[] { "en-US", "en-US" }
);
var schema = new Schema(UserKey.Field, AADID.Field, UserLocale.Field);
When using a FileStream to write to a local file, a file is created, and when the code finishes I can see two rows in the file (which is 1 KB afterwards):
using (Stream fileStream = System.IO.File.OpenWrite("C:\\Temp\\Users.parquet")) {
    using (var parquetWriter = new ParquetWriter(schema, fileStream)) {
        // Create a new row group in the file
        using (ParquetRowGroupWriter groupWriter = parquetWriter.CreateRowGroup()) {
            groupWriter.WriteColumn(UserKey);
            groupWriter.WriteColumn(AADID);
            groupWriter.WriteColumn(UserLocale);
        }
    }
}
Yet when I attempt to use the same approach to write to our blob storage, it only generates an empty file and the data is missing:
// Open reference to Blob Container
CloudAppendBlob blob = OpenBlobFile(blobEndPoint, fileName);
using (MemoryStream stream = new MemoryStream()) {
    blob.CreateOrReplaceAsync();
    using (var parquetWriter = new ParquetWriter(schema, stream)) {
        // Create a new row group in the file
        using (ParquetRowGroupWriter groupWriter = parquetWriter.CreateRowGroup()) {
            groupWriter.WriteColumn(UserKey);
            groupWriter.WriteColumn(AADID);
            groupWriter.WriteColumn(UserLocale);
        }
        // Set stream position to 0
        stream.Position = 0;
        blob.AppendBlockAsync(stream);
        return true;
    }
...
public static CloudAppendBlob OpenBlobFile(string blobEndPoint, string fileName) {
    CloudBlobContainer container = new CloudBlobContainer(new System.Uri(blobEndPoint));
    CloudAppendBlob blob = container.GetAppendBlobReference(fileName);
    return blob;
}
Reading the documentation, I would think my call to blob.AppendBlockAsync should do the trick, yet I end up with an empty file. Would anyone have suggestions as to why this is, and how I can resolve it so I actually end up with data in the file?
Thanks in advance.
The explanation for the file ending up empty is the line:
blob.AppendBlockAsync(stream);
Note how the function called has the Async suffix: it returns a Task that the caller is expected to await. I turned the function the code was in into an async one, and had Visual Studio suggest the following change to the line:
_ = await blob.AppendBlockAsync(stream);
The _ is a C# discard: it tells the compiler to throw away the returned value (a long, the offset at which the block was appended) without declaring a variable for it. With the await in place, the code now works as intended.
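For reference, a sketch of the whole corrected method under those changes: both blob calls awaited, and the ParquetWriter disposed before the stream is rewound so the parquet footer is flushed. The schema and column variables are assumed to be in scope as in the question.

public static async Task<bool> WriteParquetToBlobAsync(string blobEndPoint, string fileName)
{
    CloudAppendBlob blob = OpenBlobFile(blobEndPoint, fileName);
    using (MemoryStream stream = new MemoryStream())
    {
        // Await the creation so the blob exists before we append to it.
        await blob.CreateOrReplaceAsync();
        using (var parquetWriter = new ParquetWriter(schema, stream))
        using (ParquetRowGroupWriter groupWriter = parquetWriter.CreateRowGroup())
        {
            groupWriter.WriteColumn(UserKey);
            groupWriter.WriteColumn(AADID);
            groupWriter.WriteColumn(UserLocale);
        }
        // Rewind and append only after the writer has been disposed.
        stream.Position = 0;
        _ = await blob.AppendBlockAsync(stream);
    }
    return true;
}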
I have a collection of 400 items that I can download from the repository using the VSTS REST client-side API.
However, I don't understand why I have to make a second call to get the content of each item. What is the reason behind this?
Is there a way for my code to be more efficient when downloading the content?
Here is the code:
// ... code here to set up the connection etc. ...
// get the items
var gitItems = gitClient.GetItemsAsync(repo.Id, scopePath: "/DemoWebApp/Source/DemoTests/",
    download: false, includeContentMetadata: true, includeLinks: true, recursionLevel: VersionControlRecursionType.Full).Result.Where(x => x.IsFolder == false);
foreach (var gitItem in gitItems)
{
    var gitItemUrl = gitItem.Url.Split('?')[0];
    var fileName = Path.GetFileName(gitItemUrl);
    var fileInfo = new FileInfo(gitItem.Path);
    var directoryInfo = fileInfo.Directory;
    var subDirectory = directoryInfo.Name;
    // get the item's content, which is a Stream
    var itemContent = gitClient.GetItemContentAsync(repo.Id.ToString(), gitItem.Path).Result;
    // working directory where to store the files
    var workingDirectory = string.Format("C:\\Download\\{0}", subDirectory);
    // only create the directory if it doesn't exist
    if (!Directory.Exists(workingDirectory))
    {
        Directory.CreateDirectory(workingDirectory);
    }
    // actually process the files and generate the output
    using (FileStream fs = new FileStream(string.Format(workingDirectory + "\\{0}", fileName), FileMode.Create, FileAccess.Write))
    {
        itemContent.CopyTo(fs);
        itemContent.Close();
    }
}
The GetItemsAsync() method cannot get the content of the items, and it does not have an "includeContent" parameter either.
If you want to download the items, use the GetItemZipAsync() method instead. It will download all the items under the path you specify as a zip file on your machine.
// ghc is the GitHttpClient instance
Stream res = ghc.GetItemZipAsync("RepoId", "Path").Result;
using (FileStream fs = new FileStream(@"D:\a\1.zip", FileMode.Create, FileAccess.Write))
{
    res.CopyTo(fs);
    res.Close();
}
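If the individual files are needed on disk rather than the archive itself, the downloaded zip can then be expanded; a small sketch, with the paths as placeholders:

using System.IO.Compression;

// Expand the downloaded archive into a folder (paths are placeholders).
ZipFile.ExtractToDirectory(@"D:\a\1.zip", @"D:\a\DemoTests");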
According to the docs (https://learn.microsoft.com/en-us/rest/api/vsts/git/items/get?view=vsts-rest-4.1), the underlying REST endpoint does expose a URI parameter for this:
includeContent (boolean): Set to true to include item content when requesting JSON. Default is false.
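So one way to make the download more efficient is to skip the client library for the content fetch and call the REST endpoint directly with includeContent=true. A sketch, assuming a personal access token and placeholder account, repository, and path values:

using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;
using System.Threading.Tasks;

// Placeholder account, repository id, and path; the PAT is an assumption.
public static async Task<string> GetItemWithContentAsync(string pat)
{
    using (var client = new HttpClient())
    {
        client.DefaultRequestHeaders.Accept.Add(new MediaTypeWithQualityHeaderValue("application/json"));
        client.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue(
            "Basic", Convert.ToBase64String(Encoding.ASCII.GetBytes($":{pat}")));

        var url = "https://myaccount.visualstudio.com/_apis/git/repositories/RepoId/items" +
                  "?path=/DemoWebApp/Source/DemoTests/SomeTest.cs&includeContent=true&api-version=4.1";

        // With includeContent=true and a JSON Accept header, the response JSON carries a "content" field.
        return await client.GetStringAsync(url);
    }
}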
I hope someone can help me; I am new to C#. I am working on web APIs that send files from the server based on an input. When I enter an ID, I get all files related to that ID (InDesign files, JPEGs, PNGs, Illustrator files, ...).
I use Postman to get the result, which comes back as a list of links to the files.
I have to copy each link and paste it into the address bar to download the file.
What I want is to have only one (zip) file that contains all the files.
Is that even doable? Thanks
I found a solution and it returns a zip file with the correct files; the issue now is that these files have a size of 0 KB. Help please?
Here's my code:
MemoryStream archiveStream = new MemoryStream();
using (ZipArchive archiveFile = new ZipArchive(archiveStream, ZipArchiveMode.Create, true))
{
    foreach (var item in assetsNotDuplicated)
    {
        // create file streams
        // add the stream to zip file
        var entry = archiveFile.CreateEntry(item.Name);
        using (StreamWriter sw = new StreamWriter(entry.Open()))
        {
            sw.Write(item.Url);
        }
    }
}
HttpResponseMessage responseMsg = new HttpResponseMessage(HttpStatusCode.OK);
responseMsg.Content = new ByteArrayContent(archiveStream.ToArray());
//archiveStream.Dispose();
responseMsg.Content.Headers.ContentDisposition = new ContentDispositionHeaderValue("attachment") { FileName = "allfiles.zip" };
responseMsg.Content.Headers.ContentType = new MediaTypeHeaderValue("application/zip");
return responseMsg;
}
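One thing worth noting about the code above: sw.Write(item.Url) writes the URL text into each entry rather than the file's bytes, which would leave the entries nearly empty. A sketch of streaming each file's actual content into its entry instead, assuming item.Url is directly downloadable and that this runs inside an async action method:

using System.IO;
using System.IO.Compression;
using System.Net.Http;

// A sketch: stream each asset's bytes into its zip entry (assumes item.Url is downloadable).
var httpClient = new HttpClient();
var archiveStream = new MemoryStream();
using (var archiveFile = new ZipArchive(archiveStream, ZipArchiveMode.Create, true))
{
    foreach (var item in assetsNotDuplicated)
    {
        var entry = archiveFile.CreateEntry(item.Name);
        using (Stream entryStream = entry.Open())
        using (Stream download = await httpClient.GetStreamAsync(item.Url))
        {
            // Copy the downloaded bytes, not the URL text, into the entry.
            await download.CopyToAsync(entryStream);
        }
    }
}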
Basically, the app displays images, and I want the user to be able to select an image for download and store it locally.
I have the URL, but I don't know how to use that URL in conjunction with the file picker.
You can use the following method to download the file from a given Uri to a file selected with a file picker:
private async Task<StorageFile> SaveUriToFile(string uri)
{
    var picker = new FileSavePicker();
    // set appropriate file types
    picker.FileTypeChoices.Add(".jpg Image", new List<string> { ".jpg" });
    picker.DefaultFileExtension = ".jpg";
    var file = await picker.PickSaveFileAsync();
    if (file == null)
    {
        // the user cancelled the picker
        return null;
    }
    using (var fileStream = await file.OpenStreamForWriteAsync())
    {
        var client = new HttpClient();
        var httpStream = await client.GetStreamAsync(uri);
        await httpStream.CopyToAsync(fileStream);
    }
    return file;
}
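Usage is then a single awaited call (the URL here is a placeholder):

StorageFile file = await SaveUriToFile("https://example.com/picture.jpg");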
I think you can always read the file as a stream and save it bit by bit on the local machine. I should say that I've done this many times in Java, but I never needed to check this in C# :)
SaveFileDialog myFilePicker = new SaveFileDialog();
// put options here like filters or whatever
if (myFilePicker.ShowDialog() == DialogResult.OK)
{
    WebClient webClient = new WebClient();
    webClient.DownloadFile("http://example.com/picture.jpg", myFilePicker.FileName);
}
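And a sketch of the "read the file as a stream and save it bit by bit" approach described above, using HttpClient rather than the older WebClient; this assumes an async event handler, and the URL is a placeholder:

using System.IO;
using System.Net.Http;
using System.Windows.Forms;

SaveFileDialog myFilePicker = new SaveFileDialog();
// put options here like filters or whatever
if (myFilePicker.ShowDialog() == DialogResult.OK)
{
    using (var client = new HttpClient())
    using (Stream download = await client.GetStreamAsync("http://example.com/picture.jpg"))
    using (FileStream output = File.Create(myFilePicker.FileName))
    {
        // Copy the response to disk in chunks instead of buffering the whole image in memory.
        await download.CopyToAsync(output);
    }
}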