Get sub-files with folders' IDs - Client Object Model - C#

I want to get the sub-files of some folders. I have all the folders' SharePoint IDs in a list. My code works, but its performance is very bad because there are a lot of context.ExecuteQuery calls. I would like to do it with a CAML query instead, if possible.
using (var context = new ClientContext("http://xxx/haberlesme/"))
{
    var web = context.Web;
    var list = context.Web.Lists.GetById(new Guid(target));
    int[] siraliIdArray;
    //siraliIdArray = loadSharePointIDList(); think of it like this
    for (var j = 0; j < siraliIdArray.Length; j++)
    {
        var folderName = listItemCollection[j]["Title"].ToString(); // folder name
        var currentFolder = web.GetFolderByServerRelativeUrl("/haberlesme/Notice/" + folderName);
        var currentFiles = currentFolder.Files;
        context.Load(currentFiles);
        //I don't want to execute this for n folders n times; I want one execution for n folders.
        context.ExecuteQuery();
        var ek = new LDTOTebligEk();
        //I don't want to add them one by one
        foreach (var file1 in currentFiles)
        {
            ek.DokumanPath = urlPrefix + folderName + "/" + file1.Name;
            ek.DokumanAd = file1.Name;
            ekler.Add(ek);
        }
    }
}
For example, I have 100 folders, but I want to get the sub-files of 10 folders in one execution.

Since the CSOM API supports request batching:
The CSOM programming model is built around request batching. When you work with the CSOM, you can perform a series of data operations on the ClientContext object. These operations are submitted to the server in a single request when you call the ClientContext.BeginExecuteQuery method.
you could refactor your code as demonstrated below:
var folders = new Dictionary<string, Microsoft.SharePoint.Client.Folder>();
var folderNames = new[] { "Orders", "Requests" };
foreach (var folderName in folderNames)
{
    var folderKey = string.Format("/Shared Documents/{0}", folderName);
    folders[folderKey] = context.Web.GetFolderByServerRelativeUrl(folderKey);
    context.Load(folders[folderKey], f => f.Files);
}
context.ExecuteQuery(); // execute the request only once

// print all files
var allFiles = folders.SelectMany(folder => folder.Value.Files);
foreach (var file in allFiles)
{
    Console.WriteLine(file.Name);
}
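Applied to your case, a rough sketch could look like this (assuming folderNames holds the Title values you already read from your list items, and that urlPrefix, ekler, and LDTOTebligEk are as in your code, with settable properties):
var folders = new Dictionary<string, Folder>();
foreach (var folderName in folderNames)
{
    folders[folderName] = context.Web.GetFolderByServerRelativeUrl("/haberlesme/Notice/" + folderName);
    context.Load(folders[folderName], f => f.Files);
}
context.ExecuteQuery(); // one round trip for all folders

foreach (var entry in folders)
{
    foreach (var file in entry.Value.Files)
    {
        // create a new DTO per file; reusing a single instance would make
        // every entry in ekler point at the same object
        ekler.Add(new LDTOTebligEk
        {
            DokumanPath = urlPrefix + entry.Key + "/" + file.Name,
            DokumanAd = file.Name
        });
    }
}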

Use a CAML query:
Microsoft.SharePoint.Client.List list = clientContext.Web.Lists.GetByTitle("Main Folder");
Microsoft.SharePoint.Client.CamlQuery caml = new Microsoft.SharePoint.Client.CamlQuery();
caml.ViewXml = @"<View><Query><Where><Eq><FieldRef Name='FileLeafRef'/><Value Type='Folder'>SubFolderName</Value></Eq></Where></Query></View>";
// Add this line if the main folder is not at the site root:
caml.FolderServerRelativeUrl = "/site/MainFolderPath";
Microsoft.SharePoint.Client.ListItemCollection items = list.GetItems(caml);
clientContext.Load(items);
clientContext.ExecuteQuery();
// Get your folder using items[0]
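Note that FileLeafRef is the internal name of the Name column, so this query returns the list item for the folder named SubFolderName. You can also queue several such GetItems calls and issue a single ExecuteQuery, just like the batching example above.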

How to send multiple images to cloudinary?

My code:
string[] files =
    System.IO.Directory.GetFiles(@"C:\Users\Matheus Miranda\Pictures", "*.jpg");
foreach (var file in files)
{
    var uploadParams = new ImageUploadParams()
    {
        File = new FileDescription(file),
        PublicId = "my_folder/images",
        EagerAsync = true
    };
    var uploadResult = cloudinary.Upload(uploadParams);
}
It is not working: each upload overwrites the previous file. I'm trying to save multiple images to Cloudinary without success; only one image is saved. I'm using the Cloudinary library. Any solution?
When I tested it out, it worked as expected; however, I would adjust a couple of things. First, you do not need the EagerAsync parameter, as no eager transformation is being applied to the assets (an eager transformation lets you create a modified version of the original asynchronously, after the asset has been uploaded). Second, if you wish to see the upload response, you can use the JsonObj property and display it in the console. I have modified your sample here:
string[] files =
    System.IO.Directory.GetFiles(@"C:\Users\Matheus Miranda\Pictures", "*.jpg");
foreach (var file in files)
{
    var uploadParams = new ImageUploadParams()
    {
        File = new FileDescription(file),
        UseFilename = true
    };
    var uploadResult = cloudinary.Upload(uploadParams);
    Console.WriteLine(uploadResult.JsonObj);
}
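With UseFilename = true, Cloudinary derives the public ID from the original file name (adding a random suffix by default, unless UniqueFilename is set to false), so every upload gets its own identifier instead of overwriting a shared one.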
I found the solution! The original code uploaded every image with the same PublicId ("my_folder/images"), so each upload overwrote the previous one. Giving each file a unique PublicId fixes it:
string[] files =
    System.IO.Directory.GetFiles(@"C:\Users\Matheus Miranda\Pictures\teste", "*.jpg");
for (int i = 0; i < files.Length; i++)
{
    var uploadParams = new ImageUploadParams()
    {
        File = new FileDescription(files[i]),
        PublicId = $"my_folder/images/{System.IO.Path.GetFileName(files[i])}"
    };
    var uploadResult = cloudinary.Upload(uploadParams);
}
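Appending the file name to the public ID also keeps every image under the my_folder/images path, since Cloudinary treats slashes in a public ID as folder separators.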

C# Sharepoint.Client - Return all files and folders from a given subfolder

I am trying to return all files and folders in a SharePoint library, starting from a given subfolder.
If I set the FolderServerRelativeUrl on the CamlQuery to the folder I wish to start from, I can get all the list items for that folder; however, when I add a ViewXml to the CamlQuery to recursively return items from any additional subfolders as well, I get the following exception:
Microsoft.SharePoint.Client.ServerException: 'The attempted operation is prohibited because it exceeds the list view threshold enforced by the administrator.'
Code
public static IEnumerable<string> GetSharepointFiles2(string sharePointsite, string libraryName, string username, string password, string subFolders)
{
    Uri filename = new Uri(sharePointsite);
    string server = filename.AbsoluteUri.Replace(filename.AbsolutePath, "");
    List<string> fullfilePaths = new List<string>();
    using (ClientContext cxt = new ClientContext(filename))
    {
        cxt.Credentials = GetCreds(username, password);
        Web web = cxt.Web;
        cxt.Load(web, wb => wb.ServerRelativeUrl);
        cxt.ExecuteQuery();

        List list = web.Lists.GetByTitle(libraryName);
        cxt.Load(list);
        cxt.ExecuteQuery();

        Folder folder = web.GetFolderByServerRelativeUrl(web.ServerRelativeUrl + subFolders);
        cxt.Load(folder);
        cxt.ExecuteQuery();

        CamlQuery camlQuery = new CamlQuery();
        camlQuery.ViewXml = @"<View Scope='RecursiveAll'>
                                  <Query>
                                  </Query>
                              </View>";
        camlQuery.FolderServerRelativeUrl = folder.ServerRelativeUrl;
        ListItemCollection listItems = list.GetItems(camlQuery);
        cxt.Load(listItems);
        cxt.ExecuteQuery();

        foreach (ListItem listItem in listItems)
        {
            if (listItem.FileSystemObjectType == FileSystemObjectType.File)
            {
                fullfilePaths.Add(String.Format("{0}{1}", server, listItem["FileRef"]));
            }
            else if (listItem.FileSystemObjectType == FileSystemObjectType.Folder)
            {
                Console.WriteLine(String.Format("{0}{1}", server, listItem["FileRef"]));
                fullfilePaths.Add(String.Format("{0}{1}", server, listItem["FileRef"]));
            }
        }
    }
    return fullfilePaths;
}

private static SharePointOnlineCredentials GetCreds(string username, string password)
{
    SecureString securePassword = new SecureString();
    foreach (char c in password.ToCharArray()) securePassword.AppendChar(c);
    return new SharePointOnlineCredentials(username, securePassword);
}
In terms of the threshold limit, I tried this on a folder containing only 1 file and 1 folder (and that folder in turn contains only 1 file), so if the limit is the default 5,000, I have no idea why I'd be getting this.
Finally found a solution that works, even if it is a bit of a sledgehammer!
Although the folders I was looking to retrieve the items for had far fewer than 5,000 items, the issue was that the list as a whole did exceed this threshold (in this case it was about 11,000 items).
I removed the FolderServerRelativeUrl attribute and then used ListItemCollectionPosition to paginate/batch up all the items in the list. Once all the items are in a collection, it can be filtered with LINQ for the relevant subfolder. (CAML Query - Going around the 5000 List Item Threshold)
If anyone has a way of being more targeted with the items, I'd love to see it.
Code:
public static IEnumerable<ListItem> GetSharepointFiles2(string sharePointsite, string libraryName, string username, string password, string subFolders)
{
    Uri filename = new Uri(sharePointsite);
    List<ListItem> items = new List<ListItem>();
    using (ClientContext cxt = new ClientContext(filename))
    {
        cxt.Credentials = GetCreds(username, password);
        Web web = cxt.Web;
        cxt.Load(web, wb => wb.ServerRelativeUrl);
        cxt.ExecuteQuery();

        List list = web.Lists.GetByTitle(libraryName);
        cxt.Load(list);
        cxt.ExecuteQuery();

        CamlQuery camlQuery = new CamlQuery();
        camlQuery.ViewXml = "<View Scope='Recursive'><RowLimit>5000</RowLimit></View>";
        do
        {
            ListItemCollection listItems = list.GetItems(camlQuery);
            cxt.Load(listItems);
            cxt.ExecuteQuery();
            items.AddRange(listItems);
            camlQuery.ListItemCollectionPosition = listItems.ListItemCollectionPosition;
        } while (camlQuery.ListItemCollectionPosition != null);

        var filteritems = items.Where(tt => tt.FieldValues["FileRef"].ToString().StartsWith(web.ServerRelativeUrl + subFolders));
        return filteritems;
    }
}
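The do/while loop fetches pages of up to 5,000 items, feeding each response's ListItemCollectionPosition back into the query; when the server returns null there, the last page has been read and the loop ends. Note also that Scope='Recursive' returns only files, while Scope='RecursiveAll' (used in the question) returns files and folders.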

How to read extended file properties / file metadata

So, I followed a tutorial to "upload" files to a local path using ASP.NET Core;
this is the code:
public IActionResult About(IList<IFormFile> files)
{
    foreach (var file in files)
    {
        var filename = ContentDispositionHeaderValue
            .Parse(file.ContentDisposition)
            .FileName
            .Trim('"');
        filename = hostingEnv.WebRootPath + $@"\{filename}";
        using (FileStream fs = System.IO.File.Create(filename))
        {
            file.CopyTo(fs);
            fs.Flush();
        }
    }
    return View();
}
I want to read the extended properties of a file (file metadata), like:
name,
author,
date posted,
etc.,
and sort the files using this data. Is there a way to do this using IFormFile?
If you want to access more file metadata than the .NET Framework provides out of the box, I guess you need to use a third-party library.
Otherwise, you need to write your own COM wrapper to access those details.
See this link for a pure C# sample.
Here is an example of how to read the properties of a file. First, add a reference to Shell32.dll from the Windows/System32 folder to your project:
List<string> arrHeaders = new List<string>();
List<Tuple<int, string, string>> attributes = new List<Tuple<int, string, string>>();

Shell32.Shell shell = new Shell32.Shell();
var strFileName = @"C:\Users\Admin\Google Drive\image.jpg";
Shell32.Folder objFolder = shell.NameSpace(System.IO.Path.GetDirectoryName(strFileName));
Shell32.FolderItem folderItem = objFolder.ParseName(System.IO.Path.GetFileName(strFileName));

for (int i = 0; i < short.MaxValue; i++)
{
    string header = objFolder.GetDetailsOf(null, i);
    if (String.IsNullOrEmpty(header))
        break;
    arrHeaders.Add(header);
}

// The attributes list below will contain a tuple with attribute index, name and value.
// Once you know the index of the attribute you want to get,
// you can get it directly without looping, like this:
var Authors = objFolder.GetDetailsOf(folderItem, 20);

for (int i = 0; i < arrHeaders.Count; i++)
{
    var attrName = arrHeaders[i];
    var attrValue = objFolder.GetDetailsOf(folderItem, i);
    var attrIdx = i;
    attributes.Add(new Tuple<int, string, string>(attrIdx, attrName, attrValue));
    Debug.WriteLine("{0}\t{1}: {2}", i, attrName, attrValue);
}
Console.ReadLine();
You can enrich this code to create custom classes and then do sorting depending on your needs; see the sketch below.
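For example, here is a minimal sketch, assuming the same Shell32 reference as above, that reads one extended property for every file in a folder and sorts the files by its value. The propertyIndex is an assumption: discover the right index with the header loop above, since indexes vary between Windows versions.
static IEnumerable<string> SortByExtendedProperty(string folderPath, int propertyIndex)
{
    var shell = new Shell32.Shell();
    Shell32.Folder folder = shell.NameSpace(folderPath);
    return System.IO.Directory.GetFiles(folderPath)
        .Select(path => new
        {
            Path = path,
            // GetDetailsOf returns the property as a display string
            Value = folder.GetDetailsOf(
                folder.ParseName(System.IO.Path.GetFileName(path)), propertyIndex)
        })
        .OrderBy(x => x.Value) // string comparison; parse to DateTime/number as needed
        .Select(x => x.Path);
}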
There are many paid libraries out there, but there is a free one called WindowsApiCodePack. For example, for accessing image metadata, I think it supports:
ShellObject picture = ShellObject.FromParsingName(file);
var camera = picture.Properties.GetProperty(SystemProperties.System.Photo.CameraModel);
newItem.CameraModel = GetValue(camera, String.Empty, String.Empty);
var company = picture.Properties.GetProperty(SystemProperties.System.Photo.CameraManufacturer);
newItem.CameraMaker = GetValue(company, String.Empty, String.Empty);
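(The ShellObject API used here ships in the Shell component of the Windows API Code Pack, commonly available on NuGet as Microsoft-WindowsAPICodePack-Shell.)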

Sharepoint - uploading files with column values can't load Taxonomy fields in target

I am trying to transfer files from one document library on one farm to another. Since both are accessible from the same network, I thought of getting the column values from the source and adding them directly to an item at the destination, uploading the file along with the item.
List list = ccSource.Web.Lists.GetByTitle("Source Library");
ccSource.ExecuteQuery(); // ccSource is initialized with the source site collection
CamlQuery query = new CamlQuery();
ListItemCollection itemCollection = list.GetItems(CamlQuery.CreateAllItemsQuery()); // query can also be selective
ccSource.Load(list);
ccSource.Load(itemCollection);
ccSource.ExecuteQuery();
ccSource.Load(ccSource.Web);
string[] coloumnsList = System.IO.File.ReadAllLines(@"E:\Coloumns.txt");

foreach (ListItem item in itemCollection)
{
    ClientContext ccTarget = new ClientContext("http://TargetSC/sites/mysite");
    ccTarget.Credentials = new NetworkCredential("username", "pass", "domain");
    ccTarget.ExecuteQuery();

    var targetList = ccTarget.Web.Lists.GetByTitle("TargetLibrary");
    string diskFilePath = @"E:\downloadedFile.docx"; // I have skipped the download code
    var fileInfo1 = new FileCreationInformation
    {
        Url = System.IO.Path.GetFileName(diskFilePath),
        Content = System.IO.File.ReadAllBytes(diskFilePath),
        Overwrite = true
    };
    var file = targetList.RootFolder.Files.Add(fileInfo1);
    var itemTar = file.ListItemAllFields;

    // update list values
    foreach (var allFields in item.FieldValues)
    {
        if (allFields.Key == "FileLeafRef" || checkColoumn(coloumnsList, allFields.Key))
            itemTar[allFields.Key] = allFields.Value;
    }
    itemTar.Update();
    ccTarget.ExecuteQuery(); // exception here
}

public static bool checkColoumn(string[] arr, string query)
{
    bool flag = false;
    for (int i = 0; i < arr.Length; i++)
    {
        if (arr[i].Equals(query))
        { flag = true; break; }
    }
    return flag;
}
I am getting this exception on ccTarget.ExecuteQuery():
The given guid does not exist in the term store
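This error typically means the copied values for managed metadata (taxonomy) columns carry term GUIDs from the source farm's term store, and those GUIDs do not exist in the target farm's term store. A minimal sketch of the usual fix, assuming a hypothetical field name and that the matching term's GUID has already been looked up in the target term store (e.g. via a TaxonomySession), might look like this:
// requires Microsoft.SharePoint.Client.Taxonomy
TaxonomyField taxField = ccTarget.CastTo<TaxonomyField>(
    targetList.Fields.GetByInternalNameOrTitle("MyTaxonomyField")); // hypothetical field name
ccTarget.Load(taxField);
ccTarget.ExecuteQuery();

var termValue = new TaxonomyFieldValue
{
    Label = "My Term",                     // label taken from the source value
    TermGuid = "<guid-from-target-store>", // placeholder: look this up in the target term store
    WssId = -1                             // -1 lets SharePoint resolve the lookup id
};
taxField.SetFieldValueByValue(itemTar, termValue);
itemTar.Update();
ccTarget.ExecuteQuery();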

Find new file in two folders with a cross check

I am trying to sort two folders into a patched folder, finding which file is new in the new folder and marking it as new, so I can transfer that file only. I don't care about dates or hash changes, just which files are in the new folder but not in the old folder.
Somehow, the line
pf.NFile = !( oldPatch.FindAll(s => s.Equals(f)).Count() == 0);
is always returning false. Is there something wrong with my cross-checking logic?
List<string> newPatch = DirectorySearch(_newFolder);
List<string> oldPatch = DirectorySearch(_oldFolder);

foreach (string f in newPatch)
{
    string filename = Path.GetFileName(f);
    string Dir = (Path.GetDirectoryName(f).Replace(_newFolder, "") + @"\");
    PatchFile pf = new PatchFile();
    pf.Dir = Dir;
    pf.FName = filename;
    pf.NFile = !(oldPatch.FindAll(s => s.Equals(f)).Count() == 0);
    nPatch.Files.Add(pf);
}

foreach (string f in oldPatch)
{
    string filename = Path.GetFileName(f);
    string Dir = (Path.GetDirectoryName(f).Replace(_oldFolder, "") + @"\");
    PatchFile pf = new PatchFile();
    pf.Dir = Dir;
    pf.FName = filename;
    if (!nPatch.Files.Exists(item => item.Dir == pf.Dir &&
                                     item.FName == pf.FName))
    {
        nPatch.removeFiles.Add(pf);
    }
}
I don't have the classes you are using (like DirectorySearch and PatchFile), so I can't compile your code, but IMO the line oldPatch.FindAll(...) doesn't return anything because you are comparing the full path (c:\oldpatch\filea.txt is not c:\newpatch\filea.txt) rather than the file name only. Your algorithm could be simplified, something like this pseudocode (using List.Contains instead of List.FindAll):
var _newFolder = "d:\\temp\\xml\\b";
var _oldFolder = "d:\\temp\\xml\\a";
List<FileInfo> missing = new List<FileInfo>();
List<FileInfo> nPatch = new List<FileInfo>();
List<FileInfo> newPatch = new DirectoryInfo(_newFolder).GetFiles().ToList();
List<FileInfo> oldPatch = new DirectoryInfo(_oldFolder).GetFiles().ToList();

// take all files in the new patch
foreach (var f in newPatch)
{
    nPatch.Add(f);
}

// search for hits in the old patch
foreach (var f in oldPatch)
{
    if (!nPatch.Select(p => p.Name.ToLower()).Contains(f.Name.ToLower()))
    {
        missing.Add(f);
    }
}
// new files are in missing
One possible solution with less code would be to select the file names, put them into a list, and use the predefined LINQ Except or, if needed, Intersect methods. This way, a question like "which file is in A but not in B" can be answered quickly, like this:
var locationA = "d:\\temp\\xml\\a";
var locationB = "d:\\temp\\xml\\b";

// take the file names from A and B and put them into lists
var filesInA = new DirectoryInfo(locationA).GetFiles().Select(n => n.Name).ToList();
var filesInB = new DirectoryInfo(locationB).GetFiles().Select(n => n.Name).ToList();

// Except retrieves all files that are in A but not in B
foreach (var file in filesInA.Except(filesInB).ToList())
{
    Console.WriteLine(file);
}
I have 1.xml, 2.xml, and 3.xml in A, and 1.xml and 3.xml in B. The output is 2.xml, which is missing in B.
