How to send multiple images to Cloudinary? - C#

Here is the code:
string[] files =
    System.IO.Directory.GetFiles(@"C:\Users\Matheus Miranda\Pictures", "*.jpg");
foreach (var file in files)
{
    var uploadParams = new ImageUploadParams()
    {
        File = new FileDescription(file),
        PublicId = "my_folder/images",
        EagerAsync = true
    };
    var uploadResult = cloudinary.Upload(uploadParams);
}
It is not working; it always overwrites the previous file.
I'm trying to save multiple images to Cloudinary without success:
only one image is saved. I am using the Cloudinary library.
Any solution?

When I tested it out, it works as expected; however, I would adjust a couple of things. First, you do not need the eager_async parameter, as no eager transformation is being applied to the assets. An eager transformation lets you create a modified version of the original asynchronously after the asset has been uploaded. Secondly, if you wish to see the upload response, you can use the JsonObj property and display it in the console. I have modified your sample here:
string[] files =
    System.IO.Directory.GetFiles(@"C:\Users\Matheus Miranda\Pictures", "*.jpg");
foreach (var file in files)
{
    var uploadParams = new ImageUploadParams()
    {
        File = new FileDescription(file),
        UseFilename = true
    };
    var uploadResult = cloudinary.Upload(uploadParams);
    Console.WriteLine(uploadResult.JsonObj);
}

I found the solution! The problem was that every file was uploaded with the same PublicId, so each upload overwrote the previous one. Giving each file a unique PublicId fixes it:
string[] files =
    System.IO.Directory.GetFiles(@"C:\Users\Matheus Miranda\Pictures\teste", "*.jpg");
for (int i = 0; i < files.Length; i++)
{
    var uploadParams = new ImageUploadParams()
    {
        File = new FileDescription(files[i]),
        PublicId = $"my_folder/images/{System.IO.Path.GetFileName(files[i])}"
    };
    var uploadResult = cloudinary.Upload(uploadParams);
}
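As a hedged alternative (assuming a reasonably current CloudinaryDotNet version), the upload API also exposes Folder, UseFilename and UniqueFilename parameters, which give each file its own public ID without building it by hand:
string[] files =
    System.IO.Directory.GetFiles(@"C:\Users\Matheus Miranda\Pictures\teste", "*.jpg");
foreach (var file in files)
{
    var uploadParams = new ImageUploadParams()
    {
        File = new FileDescription(file),
        Folder = "my_folder/images", // target folder in Cloudinary
        UseFilename = true,          // derive the public ID from the file name
        UniqueFilename = false       // assumption: no random suffix is wanted
    };
    var uploadResult = cloudinary.Upload(uploadParams);
}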

Related

Possible Leak when reading content?

I created a tool which iterates through all commits of a repository.
It diffs each commit against all of its parents and then reads the content for some checks.
It turned out that this slows down very quickly. It always slows down after the same specific commit, which is quite large as it is a merge commit.
Here is how I iterate through the commits. The following code is slightly simplified for focus.
var repo = new Repository(path);
foreach (LibGit2Sharp.Commit commit in repo.Commits)
{
    IEnumerable<FileChanges> changed = repo.GetChangedToAllParents(commit);
    var files = ResolveChangeFileInfos(changed);
    var entry = new Commit(commit.Id.ToString(), commit.Author.Email, commit.Committer.When, commit.Message, files);
    yield return entry;
}
In GetChangedToAllParents I basically make a diff for each parent, like this:
foreach (var parent in commit.Parents)
{
    var options = new CompareOptions
    {
        Algorithm = DiffAlgorithm.Minimal,
        IncludeUnmodified = false
    };
    var patches = repo.Diff.Compare<Patch>(parent.Tree, commit.Tree, options); // difference
}
and later I read the content of the files in this way:
var blob = repo.Lookup<Blob>(patchEntry.Oid.Sha); // find the blob
Stream contentStream = blob.GetContentStream();
string result = null;
using (var tr = new StreamReader(contentStream, Encoding.UTF8))
{
    result = tr.ReadToEnd();
}
Are there any known issues? Am I missing any leaks?
Update
I found out that most of the time (about 90%) is taken by the diff, and it gets constantly slower:
var options = new CompareOptions
{
    Algorithm = DiffAlgorithm.Minimal,
    IncludeUnmodified = false
};
var patches = repo.Diff.Compare<Patch>(parent.Tree, commit.Tree, options); // difference
I can reproduce it with this code:
var repo = new Repository(path);
var pos = 0;
foreach (var commit in repo.Commits)
{
    pos++;
    if (pos % 100 == 0)
    {
        Console.WriteLine(pos);
    }
    var options = new CompareOptions
    {
        Algorithm = DiffAlgorithm.Minimal,
        IncludeUnmodified = false,
        Similarity = new SimilarityOptions
        {
            RenameDetectionMode = RenameDetectionMode.None,
            WhitespaceMode = WhitespaceMode.IgnoreAllWhitespace
        }
    };
    foreach (var parent in commit.Parents)
    {
        var changedFiles =
            repo.Diff.Compare<TreeChanges>(parent.Tree, commit.Tree, options).ToList();
    }
}
It reserves about 500 MB for every 1000 commits, and at some point it just crashes. So I also posted it here:
https://github.com/libgit2/libgit2sharp/issues/1359
Is there a faster way to get all files that were changed in a specific commit?
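One hedged note, not from the original post: LibGit2Sharp diff results wrap native libgit2 memory, and in versions where Patch and TreeChanges implement IDisposable, disposing each result (and the Repository itself) keeps that native memory from piling up across thousands of commits. A minimal sketch under that assumption:
// Sketch, assuming a LibGit2Sharp version whose diff results implement IDisposable.
using (var repo = new Repository(path))
{
    var options = new CompareOptions
    {
        Algorithm = DiffAlgorithm.Minimal,
        IncludeUnmodified = false
    };
    foreach (var commit in repo.Commits)
    {
        foreach (var parent in commit.Parents)
        {
            // Dispose each diff result so its native buffers are released
            // immediately instead of accumulating until finalization.
            using (var changes = repo.Diff.Compare<TreeChanges>(parent.Tree, commit.Tree, options))
            {
                foreach (var change in changes)
                {
                    Console.WriteLine(change.Path);
                }
            }
        }
    }
}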

How to read extended file properties / file metadata

So, I followed a tutorial to upload files to a local path using ASP.NET Core.
This is the code:
public IActionResult About(IList<IFormFile> files)
{
    foreach (var file in files)
    {
        var filename = ContentDispositionHeaderValue
            .Parse(file.ContentDisposition)
            .FileName
            .Trim('"');
        filename = hostingEnv.WebRootPath + $@"\{filename}";
        using (FileStream fs = System.IO.File.Create(filename))
        {
            file.CopyTo(fs);
            fs.Flush();
        }
    }
    return View();
}
I want to read the extended properties of a file (file metadata), like:
name,
author,
date posted,
etc.,
and sort the files using this data. Is there a way to do it using IFormFile?
If you want to access more file metadata than the .NET Framework provides out of the box, I guess you need to use a third-party library.
Otherwise you need to write your own COM wrapper to access those details.
See this link for a pure C# sample.
Here is an example of how to read the properties of a file. Add a reference to Shell32.dll from the Windows\System32 folder to your project:
List<string> arrHeaders = new List<string>();
List<Tuple<int, string, string>> attributes = new List<Tuple<int, string, string>>();
Shell32.Shell shell = new Shell32.Shell();
var strFileName = @"C:\Users\Admin\Google Drive\image.jpg";
Shell32.Folder objFolder = shell.NameSpace(System.IO.Path.GetDirectoryName(strFileName));
Shell32.FolderItem folderItem = objFolder.ParseName(System.IO.Path.GetFileName(strFileName));
for (int i = 0; i < short.MaxValue; i++)
{
    string header = objFolder.GetDetailsOf(null, i);
    if (String.IsNullOrEmpty(header))
        break;
    arrHeaders.Add(header);
}
// The attributes list below will contain a tuple with attribute index, name and value.
// Once you know the index of the attribute you want to get,
// you can get it directly without looping, like this:
var Authors = objFolder.GetDetailsOf(folderItem, 20);
for (int i = 0; i < arrHeaders.Count; i++)
{
    var attrName = arrHeaders[i];
    var attrValue = objFolder.GetDetailsOf(folderItem, i);
    var attrIdx = i;
    attributes.Add(new Tuple<int, string, string>(attrIdx, attrName, attrValue));
    Debug.WriteLine("{0}\t{1}: {2}", i, attrName, attrValue);
}
Console.ReadLine();
You can enrich this code to create custom classes and then do sorting depending on your needs.
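As a sketch of that idea, building on the snippet above and reusing its shell object: attribute index 20 ("Authors") is taken from the code above, but indices vary between Windows versions, so look the attribute up in arrHeaders first.
// Collect one shell attribute per file and sort by it (requires System.Linq).
var byAuthor = new List<Tuple<string, string>>(); // (author, path)
foreach (var path in System.IO.Directory.GetFiles(@"C:\Users\Admin\Google Drive"))
{
    Shell32.Folder dir = shell.NameSpace(System.IO.Path.GetDirectoryName(path));
    Shell32.FolderItem item = dir.ParseName(System.IO.Path.GetFileName(path));
    byAuthor.Add(Tuple.Create(dir.GetDetailsOf(item, 20), path));
}
foreach (var entry in byAuthor.OrderBy(t => t.Item1))
{
    Debug.WriteLine("{0}: {1}", entry.Item1, entry.Item2);
}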
There are many paid libraries out there, but there is a free one called WindowsApiCodePack.
For example, for accessing image metadata, I think it supports:
ShellObject picture = ShellObject.FromParsingName(file);
var camera = picture.Properties.GetProperty(SystemProperties.System.Photo.CameraModel);
// GetValue and newItem here are the poster's own helpers, not part of the library
newItem.CameraModel = GetValue(camera, String.Empty, String.Empty);
var company = picture.Properties.GetProperty(SystemProperties.System.Photo.CameraManufacturer);
newItem.CameraMaker = GetValue(company, String.Empty, String.Empty);

Find new file in two folders with a cross check

I am trying to sort two folders into a patched folder, finding which file is new in the new folder and marking it as new, so I can transfer that file only. I don't care about dates or hash changes, just which file is in the new folder that is not in the old folder.
Somehow the line
pf.NFile = !(oldPatch.FindAll(s => s.Equals(f)).Count() == 0);
is always returning false. Is there something wrong with my logic of cross-checking?
List<string> newPatch = DirectorySearch(_newFolder);
List<string> oldPatch = DirectorySearch(_oldFolder);
foreach (string f in newPatch)
{
    string filename = Path.GetFileName(f);
    string Dir = (Path.GetDirectoryName(f).Replace(_newFolder, "") + @"\");
    PatchFile pf = new PatchFile();
    pf.Dir = Dir;
    pf.FName = filename;
    pf.NFile = !(oldPatch.FindAll(s => s.Equals(f)).Count() == 0);
    nPatch.Files.Add(pf);
}
foreach (string f in oldPatch)
{
    string filename = Path.GetFileName(f);
    string Dir = (Path.GetDirectoryName(f).Replace(_oldFolder, "") + @"\");
    PatchFile pf = new PatchFile();
    pf.Dir = Dir;
    pf.FName = filename;
    if (!nPatch.Files.Exists(item => item.Dir == pf.Dir &&
                                     item.FName == pf.FName))
    {
        nPatch.removeFiles.Add(pf);
    }
}
I don't have the classes you are using (like DirectorySearch and PatchFile), so I can't compile your code, but IMO the line oldPatch.FindAll(...) doesn't return anything because you are comparing the full path (c:\oldpatch\filea.txt is not c:\newpatch\filea.txt) and not the file name only. IMO your algorithm could be simplified, something like this pseudocode (using List.Contains instead of List.FindAll):
var _newFolder = "d:\\temp\\xml\\b";
var _oldFolder = "d:\\temp\\xml\\a";
List<FileInfo> missing = new List<FileInfo>();
List<FileInfo> nPatch = new List<FileInfo>();
List<FileInfo> newPatch = new DirectoryInfo(_newFolder).GetFiles().ToList();
List<FileInfo> oldPatch = new DirectoryInfo(_oldFolder).GetFiles().ToList();
// take all files in the new patch
foreach (var f in newPatch)
{
    nPatch.Add(f);
}
// search for hits in the old patch
foreach (var f in oldPatch)
{
    if (!nPatch.Select(p => p.Name.ToLower()).Contains(f.Name.ToLower()))
    {
        missing.Add(f);
    }
}
// missing now holds the files that exist in the old folder only;
// swap the two folders to get the files that exist in the new folder only
One possible solution with less code would be to select the file names, put them into lists and use the predefined LINQ Except (or, if needed, Intersect) methods. This way, finding which file is in A but not in B can be solved quickly like this:
var locationA = "d:\\temp\\xml\\a";
var locationB = "d:\\temp\\xml\\b";
// take the file names from A and B and put them into lists
var filesInA = new DirectoryInfo(locationA).GetFiles().Select(n => n.Name).ToList();
var filesInB = new DirectoryInfo(locationB).GetFiles().Select(n => n.Name).ToList();
// Except retrieves all files that are in A but not in B
foreach (var file in filesInA.Except(filesInB).ToList())
{
    Console.WriteLine(file);
}
I have 1.xml, 2.xml, 3.xml in A and 1.xml, 3.xml in B. The output is 2.xml - missing in B.
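One caveat: Except uses the default string comparer, which is case-sensitive, while Windows file names are not. If that matters, the overload taking an IEqualityComparer helps:
// case-insensitive variant of the Except call above
foreach (var file in filesInA.Except(filesInB, StringComparer.OrdinalIgnoreCase))
{
    Console.WriteLine(file);
}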

Get Sub-Files with folders' IDs - Client Object Model

I want to get some folders' sub-files. I have all the folders' SharePoint IDs in a list. My code works, but its performance is very bad because there are a lot of context.ExecuteQuery calls.
I would like to do it, maybe with a CAML query.
using (var context = new ClientContext("http://xxx/haberlesme/"))
{
    var web = context.Web;
    var list = context.Web.Lists.GetById(new Guid(target));
    int[] siraliIdArray;
    //siraliIdArray = loadSharePointIDList(); think of it like this
    for (var j = 0; j < siraliIdArray.Length; j++)
    {
        var folderName = listItemCollection[j]["Title"].ToString(); // folder name
        var currentFolder = web.GetFolderByServerRelativeUrl("/haberlesme/Notice/" + folderName);
        var currentFiles = currentFolder.Files;
        context.Load(currentFiles);
        // I don't want to execute this n times for n folders. I want one execution for n folders.
        context.ExecuteQuery();
        // I don't want to add them one by one
        foreach (var file1 in currentFiles)
        {
            // note: create a new item per file, otherwise the same instance is re-added
            var ek = new LDTOTebligEk();
            ek.DokumanPath = urlPrefix + folderName + "/" + file1.Name;
            ek.DokumanAd = file1.Name;
            ekler.Add(ek);
        }
    }
}
For example, I have 100 folders, but I want to get 10 folders' sub-files in one execution.
Since the CSOM API supports request batching:
The CSOM programming model is built around request batching. When you
work with the CSOM, you can perform a series of data operations on the
ClientContext object. These operations are submitted to the server in
a single request when you call the ClientContext.BeginExecuteQuery
method.
you could refactor your code as demonstrated below:
var folders = new Dictionary<string, Microsoft.SharePoint.Client.Folder>();
var folderNames = new[] { "Orders", "Requests" };
foreach (var folderName in folderNames)
{
    var folderKey = string.Format("/Shared Documents/{0}", folderName);
    folders[folderKey] = context.Web.GetFolderByServerRelativeUrl(folderKey);
    context.Load(folders[folderKey], f => f.Files);
}
context.ExecuteQuery(); // execute the request only once
// print all files
var allFiles = folders.SelectMany(folder => folder.Value.Files);
foreach (var file in allFiles)
{
    Console.WriteLine(file.Name);
}
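If loading everything in a single round trip makes the request too large, a middle ground (the 10-folders-per-execution the question asks for) is to chunk the batch. A sketch reusing the variables from the code above:
const int batchSize = 10;
for (var i = 0; i < folderNames.Length; i += batchSize)
{
    foreach (var folderName in folderNames.Skip(i).Take(batchSize)) // requires System.Linq
    {
        var folderKey = string.Format("/Shared Documents/{0}", folderName);
        folders[folderKey] = context.Web.GetFolderByServerRelativeUrl(folderKey);
        context.Load(folders[folderKey], f => f.Files);
    }
    context.ExecuteQuery(); // one round trip per batch of 10 folders
}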
Use a CAML query:
Microsoft.SharePoint.Client.List list = clientContext.Web.Lists.GetByTitle("Main Folder");
Microsoft.SharePoint.Client.CamlQuery caml = new Microsoft.SharePoint.Client.CamlQuery();
caml.ViewXml = @"<View><Query><Where><Eq><FieldRef Name='FileLeafRef'/><Value Type='Folder'>SubFolderName</Value></Eq></Where></Query></View>";
// Set this if the main folder is not at the site level:
caml.FolderServerRelativeUrl = "/haberlesme/Notice";
Microsoft.SharePoint.Client.ListItemCollection items = list.GetItems(caml);
clientContext.Load(items);
clientContext.ExecuteQuery();
// Get your folder using items[0]

Search text and copy another text from file. C#

I'm struggling with what I'm trying to do; I'll explain.
First, sorry for my English.
I have this:
int counter = 0;
string line;
System.IO.StreamReader file = new System.IO.StreamReader(@"myfile.txt");
WebClient testador = new WebClient();
while ((line = file.ReadLine()) != null)
{
    string[] campos = line.Split(':');
    counter++;
}
file.Close();
I need to get the word in campos[0], e.g. campos[0] = apple,
search for this word in another .txt file, e.g. myfile2.txt, and copy the next 10 letters.
Again, sorry for my English.
EDIT¹:
First I get the text in myfile.txt. For example, line 1 of MyFile.txt is "apple:yeah:test". I use string[] campos = line.Split(':') to split it, so campos[0] = apple. Now I need to search for apple in myfile2.txt and copy the next 10 letters after it.
Thanks guys. :D
Based on my understanding of what you want to do, I wrote this code in a way that you can follow step by step. Hopefully it helps with what you want to do.
To start, just feed it your two text files and run it to see if it gives you the results you want.
var file1 = @"C:\Users\User\Desktop\file1.txt";
var file2 = @"C:\Users\User\Desktop\file2.txt";
var file1Lines = File.ReadLines(file1);
var file2Text = File.ReadAllText(file2);
var file1WordList = new List<string>();
var file2WordListWithExtraTenLetters = new List<string>();
foreach (var l in file1Lines)
{
    file1WordList.AddRange(l.Split(':'));
}
for (int i = 0; i < file1WordList.Count; i++)
{
    if (file2Text.Contains(file1WordList[i]))
    {
        var indexOfWordFromFile1InFile2 = file2Text.IndexOf(file1WordList[i]);
        // start right after the found word and take the next 10 characters
        // (or fewer if the file ends before that)
        var start = indexOfWordFromFile1InFile2 + file1WordList[i].Length;
        var length = Math.Min(10, file2Text.Length - start);
        file2WordListWithExtraTenLetters.Add(file2Text.Substring(start, length));
    }
}
// test it
foreach (var element in file1WordList)
{
    Console.WriteLine(element);
}
Console.WriteLine("=====================");
foreach (var element in file2WordListWithExtraTenLetters)
{
    Console.WriteLine(element);
}
