Could you help me with how to add a file to a SharePoint document library? I found some .NET articles, but I didn't get a complete picture of how to accomplish this.
I uploaded a file without metadata by using this code:
if (fuDocument.PostedFile != null)
{
    if (fuDocument.PostedFile.ContentLength > 0)
    {
        Stream fileStream = fuDocument.PostedFile.InputStream;
        byte[] byt = new byte[Convert.ToInt32(fuDocument.PostedFile.ContentLength)];
        fileStream.Read(byt, 0, Convert.ToInt32(fuDocument.PostedFile.ContentLength));
        fileStream.Close();

        using (SPSite site = new SPSite(SPContext.Current.Site.Url))
        {
            using (SPWeb webcollection = site.OpenWeb())
            {
                SPFolder myfolder = webcollection.Folders["My Library"];
                webcollection.AllowUnsafeUpdates = true;
                myfolder.Files.Add(System.IO.Path.GetFileName(fuDocument.PostedFile.FileName), byt);
            }
        }
    }
}
This code is working fine as is, but I need to upload a file with metadata. Please help me by editing this code if it is possible. I created 3 columns in my document library.
SPFolder.Files.Add returns an SPFile object.
SPFile.Item returns an SPListItem object.
You can then use SPListItem["FieldName"] to access each field (see the bottom of the SPListItem link).
So, adding this into your code (not tested, but you should get the idea):
SPFile file = myfolder.Files.Add(System.IO.Path.GetFileName(fuDocument.PostedFile.FileName), byt);
SPListItem item = file.Item;
item["My Field"] = "Some value for your field";
item.Update();
There is also an overload where you can send in a hashtable with the metadata you want to add. For example:
Hashtable metaData = new Hashtable();
metaData.Add("ContentTypeId", "some CT ID");
metaData.Add("Your Custom Field", "Your custom value");

SPFile file = library.RootFolder.Files.Add(
    "filename.fileextension",
    bytearray,
    metaData,
    false);
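Applied to your code, a rough (untested) sketch might look like the following; the column names are placeholders for the 3 columns you created, and myfolder, byt and fuDocument come from your snippet above. As far as I recall, the final boolean is the overwrite flag.

// Placeholder internal names; replace them with the internal names of your 3 columns.
Hashtable metaData = new Hashtable();
metaData.Add("Column1", "value 1");
metaData.Add("Column2", "value 2");
metaData.Add("Column3", "value 3");

SPFile file = myfolder.Files.Add(
    System.IO.Path.GetFileName(fuDocument.PostedFile.FileName),
    byt,
    metaData,
    false); // overwrite flag: false keeps an existing file with the same name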
I'll explain the problem right away, but first of all...is this achievable?
I have a Document Type in Umbraco where I store data from a Form. I can store everything except the file.
...
content.SetValue("notes", item.Notes);
content.SetValue("curriculum", item.Curriculum); /*this is the file*/
...
I'm adding items like this, where SetValue comes from the namespace Umbraco.Core.Models and has the signature void SetValue(string propertyTypeAlias, object value).
And the returned error is the following:
"String or binary data would be truncated.
The statement has been terminated."
Did I misunderstand something? Shouldn't I be sending the base64? I'm adding the image to a media file, where it creates a sub-folder with a sequential number. If I point to an existing folder it appends the file just fine, but if I point to a new media sub-folder it returns an error. Any ideas on how I should approach this?
Thanks in advance
Edit 1: After Cryothic's answer I've updated my code with the following:
byte[] tempByte = Convert.FromBase64String(item.Curriculum);
var mediaFile = _mediaService.CreateMedia(item.cvExtension, -1, Constants.Conventions.MediaTypes.File);
Stream fileStream = new MemoryStream(tempByte);
var fileName = Path.GetFileNameWithoutExtension(item.cvExtension);
mediaFile.SetValue("umbracoFile", fileName, fileStream);
_mediaService.Save(mediaFile);
and the error happens at mediaFile.SetValue(...).
If I upload a file from Umbraco it goes to "http://localhost:3295/media/1679/test.txt", and the next one would go to "http://localhost:3295/media/1680/test.txt". Where in my request do I tell it to add to the /media folder and increment? Do I only point to the media folder and Umbraco handles the incrementing part?
If I change the SetValue call to mediaFile.SetValue("curriculum", fileName, fileStream); the request succeeds, but the file is not added to the content itself and it ends up at "http://localhost:3295/umbraco/media" instead of "http://localhost:3295/media".
If I try the following - content.SetValue("curriculum", item.cvExtension); - the file is added to the content, but with the path "http://localhost:3295/umbraco/test.txt".
I don't fully understand how Umbraco inserts files into the media folder (outside Umbraco) or how you attach the media service path to the content service.
Do you need to save base64?
I have done something like that, but using the MediaService.
My project had the option to upload multiple images over multiple wizard steps, and I needed to save them all at once. So I looped through the uploaded files (HttpFileCollection) per step. acceptedFiletypes is a list of strings with the MIME types I allow.
for (int i = 0; i < files.Count; i++) {
    byte[] fileData = null;
    UploadedFile uf = null;
    try {
        if (acceptedFiletypes.Contains(files[i].ContentType)) {
            using (var binaryReader = new BinaryReader(files[i].InputStream)) {
                fileData = binaryReader.ReadBytes(files[i].ContentLength);
            }
            if (fileData.Length > 0) {
                uf = new UploadedFile {
                    FileName = files[i].FileName,
                    FileType = fileType,
                    FileData = fileData
                };
            }
        }
    }
    catch { }
    if (uf != null) {
        projectData.UploadedFiles.Add(uf);
    }
}
After the last step, I would loop through my projectData.UploadedFiles and do the following:
var service = Umbraco.Core.ApplicationContext.Current.Services.MediaService;
var mediaTypeAlias = "Image";
var mediaItem = service.CreateMedia(fileName, parentFolderID, mediaTypeAlias);
Stream fileStream = new MemoryStream(file.FileData);
mediaItem.SetValue("umbracoFile", fileName, fileStream);
service.Save(mediaItem);
I also had a check to see if the uploaded filename ended with ".pdf". In that case I'd change the mediaTypeAlias to "File", as sketched below.
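A minimal sketch of that check, assuming the file variable from the loop above is the UploadedFile and mediaTypeAlias is the variable declared before the CreateMedia call:

// Switch the media type alias to "File" for PDF uploads.
if (file.FileName != null &&
    file.FileName.EndsWith(".pdf", StringComparison.OrdinalIgnoreCase)) {
    mediaTypeAlias = "File";
}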
I hope this helps.
I'm new to this topic, but I searched a lot, I mean really a lot, before asking here. My problem is that when I create a ListItem in my app and set the fields that are required, everything looks all right. I can see the ListItem in my list with the proper fields if I go to SharePoint Online. But when I click on the ListItem, the fields are empty and at the top of the page there is an error which says "Object reference not set to an instance of an object." I don't know if the issue is caused by my code or by the SharePoint settings. I followed most of the guides here to get as far as I am now. My code is below.
SP.List targetList = ctx.Web.Lists.GetByTitle("Documents");
ListItemCreationInformation itemCreateInfo = new ListItemCreationInformation();
ListItem newListItem = targetList.AddItem(itemCreateInfo);
newListItem["Title"] = "Test_title";
newListItem["EAN_x0020_k_x00f3_d"] = "AnotherCodeWhichIsRequired";
newListItem.Update();

string strFilePath = @"PathToMyPDF";
byte[] bytes = System.IO.File.ReadAllBytes(strFilePath);

using (System.IO.MemoryStream mStream = new System.IO.MemoryStream(bytes))
{
    SP.AttachmentCreationInformation aci = new SP.AttachmentCreationInformation();
    aci.ContentStream = mStream;
    aci.FileName = System.IO.Path.GetFileNameWithoutExtension(strFilePath);
    newListItem.AttachmentFiles.Add(aci);
    newListItem.Update();
    ctx.ExecuteQuery();
}
It looks like you are using a document library. The code that you have used is for a SharePoint list.
To upload a file to a document library and update its properties, modify your code as below:
string strFilePath = @"PathToMyPDF";
FileCreationInformation newFile = new FileCreationInformation();
newFile.Content = System.IO.File.ReadAllBytes(strFilePath);
newFile.Overwrite = true;

SP.List targetList = ctx.Web.Lists.GetByTitle("Documents");
Microsoft.SharePoint.Client.File uploadFile = targetList.RootFolder.Files.Add(newFile);
ctx.Load(uploadFile);
ctx.ExecuteQuery();

uploadFile.ListItemAllFields["Title"] = "Test_title";
uploadFile.ListItemAllFields["EAN_x0020_k_x00f3_d"] = "AnotherCodeWhichIsRequired";
uploadFile.ListItemAllFields.Update();
ctx.ExecuteQuery();
While trying to upload files to SharePoint Online remotely via SharePointClient upload, I am encountering a file size limit of 2 MB. From my searches it seems that people have overcome this limit using PowerShell, but is there a way to overcome it using the native SharePointClient package in .NET C#? Here is my existing code sample:
using (var ctx = new Microsoft.SharePoint.Client.ClientContext(httpUrl))
{
    ctx.Credentials = new Microsoft.SharePoint.Client.SharePointOnlineCredentials(username, passWord);
    try
    {
        string uploadFilename = string.Format(@"{0}.{1}", string.IsNullOrWhiteSpace(filename) ? submissionId : filename, formatExtension);
        logger.Info(string.Format("SharePoint uploading: {0}", uploadFilename));
        new SharePointClient().Upload(ctx, sharePointDirectoryPath, uploadFilename, formatData);
    }
    catch
    {
        // A try block needs a catch or finally; re-throw so failures surface
        throw;
    }
}
I have read on the following site that you can use the ContentStream property, I'm just not sure how that maps to SharePointClient (if at all):
https://msdn.microsoft.com/en-us/pnp_articles/upload-large-files-sample-app-for-sharepoint
UPDATE:
Per the suggested solution I now have:
public void UploadDocumentContentStream(ClientContext ctx, string libraryName, string filePath)
{
    Web web = ctx.Web;
    using (FileStream fs = new FileStream(filePath, FileMode.Open))
    {
        FileCreationInformation flciNewFile = new FileCreationInformation();

        // This is the key difference for the first case - using ContentStream property
        flciNewFile.ContentStream = fs;
        flciNewFile.Url = System.IO.Path.GetFileName(filePath);
        flciNewFile.Overwrite = true;

        List docs = web.Lists.GetByTitle(libraryName);
        Microsoft.SharePoint.Client.File uploadFile = docs.RootFolder.Files.Add(flciNewFile);
        ctx.Load(uploadFile);
        ctx.ExecuteQuery();
    }
}
Still not quite working, but I will update again when it is successful. The current error is:
Could not find file 'F:approot12-09-2017.zip'.
FINALLY
I am using files from Amazon S3, so the solution was to take my byte data and stream it to the call:
public void UploadDocumentContentStream(ClientContext ctx, string libraryName, string filename, byte[] data)
{
    Web web = ctx.Web;

    FileCreationInformation flciNewFile = new FileCreationInformation();
    flciNewFile.ContentStream = new MemoryStream(data);
    flciNewFile.Url = filename;
    flciNewFile.Overwrite = true;

    List docs = web.Lists.GetByTitle(libraryName);
    Microsoft.SharePoint.Client.File uploadFile = docs.RootFolder.Files.Add(flciNewFile);
    ctx.Load(uploadFile);
    ctx.ExecuteQuery();
}
You can use FileCreationInformation to create a new file and provide the contents via a FileStream. You can then add the file to the destination library. This should help you get around the 2 MB limit you are encountering with the upload method you are using. Example below:
FileCreationInformation newFile = new FileCreationInformation
{
    Url = fileName,
    Overwrite = false,
    ContentStream = new FileStream(fileSourcePath, FileMode.Open)
};

var createdFile = list.RootFolder.Files.Add(newFile);
ctx.Load(createdFile);
ctx.ExecuteQuery();
In the example the destination library is list; you will need to get a reference to it first. I can show you how to do this if required.
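For reference, a minimal sketch of getting that reference by library title (the site URL and library title below are placeholders, and the credentials are set up the same way as in your snippet above):

// Placeholders: replace the URL and library title with your own values.
var ctx = new ClientContext("https://yourtenant.sharepoint.com/sites/yoursite");
ctx.Credentials = new SharePointOnlineCredentials(username, passWord);
List list = ctx.Web.Lists.GetByTitle("Documents");
// list.RootFolder.Files.Add(newFile) can then be used as shown above.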
I am working on a project that requires all SQL connection and query information to be stored in XML files. To make my project configurable, I am trying to create a means to let the user configure his SQL connection string information (data source, catalog, username and password) via a series of text boxes. This input will then be saved to the appropriate node within the XML document.
I can get the current information from the XML file, and display that information within text boxes for the user's review and correction, but I'm encountering an error when it comes time to save the changes.
Here is the code I'm using to update and save the XML document:
protected void submitBtn_Click(object sender, EventArgs e)
{
    SPFile file = methods.web.GetFile("MyXMLFile.xml");
    myDoc = new XmlDocument();
    byte[] bites = file.OpenBinary();
    Stream strm1 = new MemoryStream(bites);
    myDoc.Load(strm1);

    XmlNode node;
    node = myDoc.DocumentElement;

    foreach (XmlNode node1 in node.ChildNodes)
    {
        foreach (XmlNode node2 in node1.ChildNodes)
        {
            if (node2.Name == "name1")
            {
                if (node2.InnerText != box1.Text)
                {
                }
            }
            if (node2.Name == "name2")
            {
                if (node2.InnerText != box2.Text)
                {
                }
            }
            if (node2.Name == "name3")
            {
                if (node2.InnerText != box3.Text)
                {
                    node2.InnerText = box3.Text;
                }
            }
            if (node2.Name == "name4")
            {
                if (node2.InnerText != box4.Text)
                {
                }
            }
        }
    }

    myDoc.Save(strm1);
}
Most of the conditionals are empty at this point because I'm still testing.
The code works great until the last line, as I said. At that point, I get the error "Memory Stream is not expandable." I understand that using a memory stream to update a stored file is incorrect, but I can't figure out the right way to do this.
I've tried to implement the solution given in the similar question at Memory stream is not expandable, but that situation is different from mine, so the implementation makes no sense to me. Any clarification would be greatly appreciated.
Using the MemoryStream constructor that takes a byte array as an argument creates a non-resizable instance of a MemoryStream. Since you are making changes to the file (and therefore the underlying bytes), you need a resizable MemoryStream. This can be accomplished by using the parameterless constructor of the MemoryStream class and writing the byte array into the MemoryStream.
Try this:
SPFile file = methods.web.GetFile("MyXMLFile.xml");
myDoc = new XmlDocument();
byte[] bites = file.OpenBinary();

using (MemoryStream strm1 = new MemoryStream()) {
    strm1.Write(bites, 0, (int)bites.Length);
    strm1.Position = 0;
    myDoc.Load(strm1);

    // all of your edits to the file here

    strm1.Position = 0;

    // save the file back to disk
    using (var fs = new FileStream("FILEPATH", FileMode.Create, FileAccess.ReadWrite)) {
        myDoc.Save(fs);
    }
}
To get the FILEPATH for a SharePoint file, it'd be something along these lines (I don't have a SharePoint development environment set up right now):
SPFile file = methods.web.GetFile("MyXMLFile.xml");
var filepath = file.ParentFolder.ServerRelativeUrl + "\\" + file.Name;
Or it might be easier to just use the SaveBinary method of the SPFile class like this:
// same code from above
// all of your edits to the file here
strm1.Position = 0;
// don't use a FileStream, just SaveBinary
file.SaveBinary(strm1);
I didn't test this code, but I've used it in SharePoint solutions to modify XML (mainly OpenXML) documents in SharePoint lists. Read this blog post for more information.
You could look into using the XDocument class instead of the XmlDocument class.
http://msdn.microsoft.com/en-us/library/system.xml.linq.xdocument.aspx
I prefer it because of its simplicity, and it eliminates having to use a MemoryStream.
Edit: You can append to the file like this:
XDocument doc = XDocument.Load(filePath);
doc.Root.Add(
    new XElement("An Element Name",
        new XAttribute("An Attribute", "Some Value"),
        new XElement("Nested Element", "Inner Text"))
);
doc.Save(filePath);
Or you can search for an element and update it like this:
doc.Root.Elements("The element").First(m =>
    m.Attribute("An Attribute").Value == "Some value to match").SetElementValue(
    "The element to change", "Value to set element to");
doc.Save(filePath);
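In the SharePoint case you would still need to get the XML in and out of the SPFile first; here is a rough, untested sketch that reuses the GetFile, OpenBinary and SaveBinary calls from the answer above:

// Load the stored XML into an XDocument, edit it, then write it back to the SPFile.
SPFile file = methods.web.GetFile("MyXMLFile.xml");
XDocument doc;
using (var readStream = new MemoryStream(file.OpenBinary()))
{
    doc = XDocument.Load(readStream);
}

// ... make your edits through doc.Root as shown above ...

using (var writeStream = new MemoryStream())
{
    doc.Save(writeStream);
    writeStream.Position = 0;
    file.SaveBinary(writeStream);
}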
I am using the method
File.SaveBinaryDirect
in Microsoft.SharePoint.Client to insert new documents into a SharePoint library. I am just wondering what the most effective way is of getting the GUIDs of those new records.
Well, you've just saved the file to a particular URL - get the File by that URL, and then use the ListItemAllFields property to get the ListItem that would contain those IDs.
From here:
var FileSrvRelUrl = "/sub/doclib/Folder/File.doc";

using (var fileStream = new MemoryStream(new byte[100]))
{
    Microsoft.SharePoint.Client.File.SaveBinaryDirect(clientContext, FileSrvRelUrl, fileStream, false);
}

var web = clientContext.Web;
var f = web.GetFileByServerRelativeUrl(FileSrvRelUrl);
var item = f.ListItemAllFields;

item["SomeField"] = "Value";
item.Update();

clientContext.Load(item, i => i.Id);
clientContext.ExecuteQuery();
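Note that Id above is the integer list item ID. If you specifically need the GUID, one option (assuming a CSOM version that exposes it, such as SharePoint 2013 or SharePoint Online) is to load the file's UniqueId property instead:

// UniqueId is the GUID identifying the document in the library.
clientContext.Load(f, x => x.UniqueId);
clientContext.ExecuteQuery();
Guid docGuid = f.UniqueId;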