How can I make all files public using Amazon.ElasticTranscoder.Model (.NET, C#)?
Here is my code:
public static void CreateJobRequest(string videoPath, string bucketName)
{
    string accessKey = CloudSettings.AccessKeyID;
    string secretKey = CloudSettings.SecreteKey;
    var etsClient = new AmazonElasticTranscoderClient(accessKey, secretKey, RegionEndpoint.USEast1);

    var notifications = new Notifications()
    {
        Completed = "arn:aws:sns:us-east-1:XXXXXXXXXXXX:Transcode",
        Error = "arn:aws:sns:us-east-1:XXXXXXXXXXXX:Transcode",
        Progressing = "arn:aws:sns:us-east-1:XXXXXXXXXXXX:Transcode",
        Warning = "arn:aws:sns:us-east-1:XXXXXXXXXXXX:Transcode"
    };

    var pipeline = etsClient.CreatePipeline(new CreatePipelineRequest()
    {
        Name = "MyFolder",
        InputBucket = bucketName,
        OutputBucket = bucketName,
        Notifications = notifications,
        Role = "arn:aws:iam::XXXXXXXXXXXX:role/Elastic_Transcoder_Default_Role"
    }).CreatePipelineResult.Pipeline;

    etsClient.CreateJob(new CreateJobRequest()
    {
        PipelineId = pipeline.Id,
        Input = new JobInput()
        {
            AspectRatio = "auto",
            Container = "mp4",
            FrameRate = "auto",
            Interlaced = "auto",
            Resolution = "auto",
            Key = videoPath
        },
        Output = new CreateJobOutput()
        {
            ThumbnailPattern = videoPath + "videoName{resolution}_{count}",
            Rotate = "0",
            PresetId = "1351620000000-000020",
            Key = videoPath + "newFileName.mp4"
        }
    });
}
Everything works perfectly, but the transcoded files are private. How can I set them to public?
I've just had this issue today and the way to resolve it is as follows:
In your pipeline, under "Configure Amazon S3 Bucket for Transcoded Files and Playlists":
Use the "+ Add a permission" link.
Select "Grantee Type" as "Amazon S3 Group".
Select "Grantee" as "All Users".
Then check "Access" as "Open/Download" AND "View Permission".
Save changes.
You can repeat this for any thumbnails that are generated in the section directly beneath: "Configure Amazon S3 Bucket for Thumbnails".
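If you would rather do this from code instead of the console, the same grants can be supplied when the pipeline is created. Below is a minimal sketch, assuming the same AWS SDK for .NET Elastic Transcoder client used in the question; note that when ContentConfig/ThumbnailConfig are supplied, OutputBucket is omitted from the request. Check the property names against your SDK version.

// Sketch: grant the S3 "AllUsers" group read access to transcoded files
// and thumbnails at pipeline creation time (used instead of OutputBucket).
var publicRead = new Permission()
{
    GranteeType = "Group",
    Grantee = "AllUsers",
    Access = new List<string> { "Read" }
};

var pipeline = etsClient.CreatePipeline(new CreatePipelineRequest()
{
    Name = "MyFolder",
    InputBucket = bucketName,
    Role = "arn:aws:iam::XXXXXXXXXXXX:role/Elastic_Transcoder_Default_Role",
    Notifications = notifications,
    ContentConfig = new PipelineOutputConfig()
    {
        Bucket = bucketName,
        Permissions = new List<Permission> { publicRead }
    },
    ThumbnailConfig = new PipelineOutputConfig()
    {
        Bucket = bucketName,
        Permissions = new List<Permission> { publicRead }
    }
}).CreatePipelineResult.Pipeline;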
Just to add to timstermatic's answer: I only had to grant 'Open/Download' access to make the objects public.
The 'View Permissions' option is used to allow anyone to read the ACL (Access Control List) of the object, not to view the object itself - that's taken care of by the 'Open/Download' option.
As usual with AWS terms, it's easy to misinterpret - it's not 'Permission to View the Object', it's 'View the Object's Permissions'.
(Sorry I couldn't add this as a comment, I don't have enough rep.)
Usually a conflict in policies can give rise to this situation. An easier way to handle it is to set public read permission directly on the bucket, in the "Bucket Policy" section of the Permissions tab. This policy might help.
{
  "Version": "2008-10-17",
  "Statement": [
    {
      "Sid": "AllowPublicRead",
      "Effect": "Allow",
      "Principal": {
        "AWS": "*"
      },
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::<BUCKET_NAME>/*"
    }
  ]
}
You can also restrict the Resource to a specific folder. I haven't tried file extensions, but I expect those would work too.
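For example, to limit the grant to just the transcoder output, the Resource can point at a folder prefix (the folder name "transcoded" here is only an illustration):

"Resource": "arn:aws:s3:::<BUCKET_NAME>/transcoded/*"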
Related
I am trying to utilize Azure Cognitive Services to perform basic document extraction.
My intent is to input PDFs and DOCXs (and possibly some other files) into the Cognitive Engine for parsing, but unfortunately, the implementation of this is not as simple as it seems.
According to the documentation (https://learn.microsoft.com/en-us/azure/search/cognitive-search-skill-document-extraction#sample-definition), I must define the skill and then I should be able to input files, but there are no examples of how this should be done.
So far I have been able to define the skill, but I am still not sure where I should be dropping the files.
Please see my code below, as it seeks to replicate the same data structure shown in the example code (albeit using the C# library):
public static DocumentExtractionSkill CreateDocumentExtractionSkill()
{
    List<InputFieldMappingEntry> inputMappings = new List<InputFieldMappingEntry>
    {
        new("file_data") { Source = "/document/file_data" }
    };
    List<OutputFieldMappingEntry> outputMappings = new List<OutputFieldMappingEntry>
    {
        new("content") { TargetName = "extracted_content" }
    };
    DocumentExtractionSkill des = new DocumentExtractionSkill(inputMappings, outputMappings)
    {
        Description = "Extract text (plain and structured) from image",
        ParsingMode = BlobIndexerParsingMode.Text,
        DataToExtract = BlobIndexerDataToExtract.ContentAndMetadata,
        Context = "/document",
    };
    return des;
}
And then I build on this skill like so:
_indexerClient = new SearchIndexerClient(new Uri(Environment.GetEnvironmentVariable("SearchEndpoint")), new AzureKeyCredential(Environment.GetEnvironmentVariable("SearchKey")));
List<SearchIndexerSkill> skills = new List<SearchIndexerSkill> { Skills.DocExtractionSkill.CreateDocumentExtractionSkill() };
SearchIndexerSkillset skillset = new SearchIndexerSkillset("DocumentSkillset", skills)
{
    Description = "Document Cracker Skillset",
    CognitiveServicesAccount = new CognitiveServicesAccountKey(Environment.GetEnvironmentVariable("CognitiveServicesKey"))
};
await _indexerClient.CreateOrUpdateSkillsetAsync(skillset);
And... then what?
There is no clear method that would fit what I believe is the next stage: actually parsing documents.
What is the next step from here to begin dumping files into the _indexerClient (of type SearchIndexerClient)?
The next stage shown in the documentation is:
{
  "values": [
    {
      "recordId": "1",
      "data": {
        "file_data": {
          "$type": "file",
          "data": "aGVsbG8="
        }
      }
    }
  ]
}
But it is not clear where I would be doing this.
According to the document you mentioned, they are actually getting the output through Postman: they use a GET method to receive the extracted document content by sending a JSON request to the mentioned URL (i.e. the Cognitive skill URL), and the files/documents need to be uploaded to your storage account in order to be extracted.
You can follow this tutorial to get more insights.
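To build on that: the skill does not take files directly. The documents live in a supported data source (typically Azure Blob Storage), and a data source plus an indexer feed them through the skillset. Below is a minimal sketch using Azure.Search.Documents; the connection string, container name, index name and indexer name are placeholders, and AllowSkillsetToReadFileData is my assumption for how /document/file_data gets populated (allowSkillsetToReadFileData in the REST API).

// Sketch: wire the skillset to documents uploaded to a blob container.
// All resource names here are illustrative; adjust to your own setup.
var dataSource = new SearchIndexerDataSourceConnection(
    "docs-datasource",
    SearchIndexerDataSourceType.AzureBlob,
    Environment.GetEnvironmentVariable("BlobConnectionString"),
    new SearchIndexerDataContainer("docs-container"));
await _indexerClient.CreateOrUpdateDataSourceConnectionAsync(dataSource);

// The target index ("documents-index") must already exist and contain a
// field to hold the extracted content.
var indexer = new SearchIndexer("docs-indexer", dataSource.Name, "documents-index")
{
    SkillsetName = "DocumentSkillset",
    Parameters = new IndexingParameters
    {
        IndexingParametersConfiguration = new IndexingParametersConfiguration
        {
            // Assumption: exposes the raw file bytes to the skill as /document/file_data.
            AllowSkillsetToReadFileData = true
        }
    }
};
await _indexerClient.CreateOrUpdateIndexerAsync(indexer);

// Upload the PDFs/DOCX files to "docs-container", then run the indexer.
await _indexerClient.RunIndexerAsync(indexer.Name);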
There are no examples of working with Microsoft.Graph in .NET Core C#; the API documentation is all JSON.
I was able to create a choice site column, but the default value did not work:
Microsoft.Graph.ColumnDefinition column = new Microsoft.Graph.ColumnDefinition
{
    ColumnGroup = "ECGmc",
    DisplayName = "Document Stage",
    Name = "DocumentStage",
    Choice = new ChoiceColumn
    {
        ODataType = "microsoft.graph.choiceColumn",
        AllowTextEntry = false,
        Choices = new List<string>() { "Working Draft", "Discussion Draft", "Final" },
        DisplayAs = "dropDownMenu"
    },
    DefaultValue = new DefaultColumnValue { Value = "Working Draft", ODataType = "microsoft.graph.defaultColumnValue" },
    Description = "Will differ the stages the Document changes",
    Required = true
};
Microsoft.Graph.ColumnDefinition newColumn = await graphClient.Groups[project.GroupID].Sites["root"].Columns.Request().AddAsync(column);
This works, but the DefaultValue is empty.
Does anyone know how to set the DefaultValue?
Does anyone know where I can find C# examples for Microsoft.Graph?
I tested it in my local environment and got the "DefaultValue" successfully. I didn't add "ODataType". Below I post a screenshot of my code and the "Console.WriteLine" result:
Here the "DefaultValue" is not empty. Hope this is helpful to you.
I know this is an old issue, but at this moment I am having the same issue for a ColumnDefinition.
I can set all settings for the Column except DefaultValue as described in the initial post by Ofer.
From what I can see, the DefaultValue is simply ignored. I have tried:
A C# project (as in the post above, when creating the column)
Graph Explorer with a PATCH for the created column:
https://learn.microsoft.com/en-us/graph/api/columndefinition-update?view=graph-rest-1.0
I used both v1.0 and beta, but neither version could update the DefaultValue:
{
  "defaultValue": {
    "value": "cba"
  }
}
I also traced the Graph API traffic using Fiddler and I can see that the DefaultValue is included in the calls, but ignored.
Could others confirm the error or provide a working solution for updating the DefaultValue?
EDIT: After investigating, I can attach the DefaultValue if I set it when creating the list; however, that does not solve the issue of adding/editing a list column.
Best Regards,
Martin
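For anyone who wants to reproduce the PATCH above from C#, the SDK equivalent would look roughly like the sketch below, using the same request-builder style as the original post. The column ID is a placeholder, and per the reports in this thread the service may still ignore the value.

// Sketch: PATCH the existing column's defaultValue via the Graph SDK.
string columnId = "<column-id>"; // placeholder for the ID of the column created earlier
var update = new Microsoft.Graph.ColumnDefinition
{
    DefaultValue = new DefaultColumnValue { Value = "cba" }
};

await graphClient.Groups[project.GroupID].Sites["root"].Columns[columnId]
    .Request()
    .UpdateAsync(update);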
I am trying to use the Autodesk Forge API for C# to get a list of hubs.
This is what I have done so far:
HubsApi api = new HubsApi();
Hubs hubs = api.GetHubs();
Pretty simple. But when I do so, I get an exception complaining that one cannot convert from DynamicJsonResponse to Hubs. I think this is because I receive two warnings in my response string, so it is not a Hubs object any longer. The warning looks like this:
"warnings":[
{
"Id":null,
"HttpStatusCode":"403",
"ErrorCode":"BIM360DM_ERROR",
"Title":"Unable to get hubs from BIM360DM EMEA.",
"Detail":"You don't have permission to access this API",
"AboutLink":null,
"Source":[
],
"meta":[
]
}
All of this is wrapped in a Dictionary with four entries, and only two of them are data. However, according to Autodesk, this warning can be ignored.
So after that I tried to convert it to a Dictionary and select only the data entry:
HubsApi api = new HubsApi();
DynamicJsonResponse resp = api.GetHubs();
DynamicDictionary hubs = (DynamicDictionary)resp.Dictionary["data"];
Then I looped through it:
for (int i = 0; i < hubs.Dictionary.Count && bim360hub == null; i++)
{
    string hub = hubs.Dictionary.ElementAt(i).ToString();
    [....]
}
But the string hub isn't a JSON hub either. It is an array which looks like this:
[
  0,
  {
    "type": "hubs",
    "id": "****",
    "attributes": { ... },
    "links": { ... },
    "relationships": { ... }
  }
]
The second element in the array is my hub. I know how I can select the second element, but there must be a much easier way to get the list of hubs.
In an example in the references it seemed to work with these two simple lines of code:
HubsApi api = new HubsApi();
Hubs hubs = api.GetHubs();
Any ideas how I can get my hubs?
First, consider using the async versions of those methods; avoid non-async calls, as they cause your desktop app to freeze (while connecting) or allocate more resources on ASP.NET.
The following function is part of this sample, which lists all hubs, projects and files under a user account. It's a good place to start. Note it's organizing the Hubs in a TreeNode list, which is compatible with jsTree.
private async Task<IList<TreeNode>> GetHubsAsync()
{
    IList<TreeNode> nodes = new List<TreeNode>();

    HubsApi hubsApi = new HubsApi();
    hubsApi.Configuration.AccessToken = AccessToken;
    var hubs = await hubsApi.GetHubsAsync();

    string urn = string.Empty;
    foreach (KeyValuePair<string, dynamic> hubInfo in new DynamicDictionaryItems(hubs.data))
    {
        string nodeType = "hubs";
        switch ((string)hubInfo.Value.attributes.extension.type)
        {
            case "hubs:autodesk.core:Hub":
                nodeType = "hubs";
                break;
            case "hubs:autodesk.a360:PersonalHub":
                nodeType = "personalhub";
                break;
            case "hubs:autodesk.bim360:Account":
                nodeType = "bim360hubs";
                break;
        }
        TreeNode hubNode = new TreeNode(hubInfo.Value.links.self.href, (nodeType == "bim360hubs" ? "BIM 360 Projects" : hubInfo.Value.attributes.name), nodeType, true);
        nodes.Add(hubNode);
    }

    return nodes;
}
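If you don't need the TreeNode wrapping, a stripped-down version of the same pattern is enough to enumerate the hubs. This is a sketch using the same DynamicDictionaryItems helper and dynamic response shape as above:

// Sketch: list hub ids and names only.
HubsApi hubsApi = new HubsApi();
hubsApi.Configuration.AccessToken = AccessToken;
dynamic hubs = await hubsApi.GetHubsAsync();

foreach (KeyValuePair<string, dynamic> hubInfo in new DynamicDictionaryItems(hubs.data))
{
    Console.WriteLine($"{hubInfo.Value.id}: {hubInfo.Value.attributes.name}");
}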
When trying to upload large files (100-500 MB) I get the familiar "OutOfMemoryException", which is caused by trying to read the whole file into memory at once (asked and answered here on Stack Overflow). I know that I should use a stream or divide the file into smaller parts. Changing the proxy code manually is an option, if that helps. I use the specific web service (CMWebService).
Since I am unable to change IBM's code, is there any way to send the file in smaller parts? I have already found the classes UpdateItemRequestAdd and UpdateItemRequestAddPart, but I can't get them to work. Unfortunately, there are also no samples available from IBM.
Receiving files poses the same problem, and I have not been able to find any classes that could help me there.
This is the code that I am currently using to upload files:
string resources0 = "tiffFileContent";
string resources1 = "image/tiff";
string resources2 = @"D:\myImageFile.tif";

CreateItemRequest createRequest = new CreateItemRequest()
{
    AuthenticationData = data,
    Item = new CreateItemRequestItem()
    {
        ItemXML = new ItemXML()
        {
            MYITEMTYPE = new MYITEMTYPE()
            {
                ArchiveId = "4719",
                ICMBASE = new ICMBASE[]
                {
                    new ICMBASE()
                    {
                        resourceObject = new LobObjectType()
                        {
                            label = new LobObjectTypeLabel()
                            {
                                name = resources0
                            },
                            MIMEType = resources1,
                            originalFileName = resources2
                        },
                    }
                }
            }
        },
    },
    mtomRef = new MTOMAttachment[]
    {
        new MTOMAttachment()
        {
            ID = resources0,
            MimeType = resources1,
            Value = System.IO.File.ReadAllBytes(resources2), // Error on large files
        }
    },
};
var createReply = service.CreateItem(createRequest);
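If changing the generated proxy is on the table and it is a WCF client, the binding can at least be tuned for large MTOM transfers so the transport quotas are not the limiting factor. This is a generic WCF sketch, not IBM-specific; the client type name and endpoint address are placeholders.

// Sketch: a WCF binding configured for large MTOM uploads with streamed transfer.
// "CMWebServiceClient" and the URL stand in for the generated proxy and its endpoint.
var binding = new System.ServiceModel.BasicHttpBinding
{
    MessageEncoding = System.ServiceModel.WSMessageEncoding.Mtom,
    TransferMode = System.ServiceModel.TransferMode.Streamed,
    MaxReceivedMessageSize = 1024L * 1024L * 1024L, // 1 GB
    SendTimeout = TimeSpan.FromMinutes(30),
    ReceiveTimeout = TimeSpan.FromMinutes(30)
};

var endpoint = new System.ServiceModel.EndpointAddress("https://server/CMWebServices/CMWebService");
var service = new CMWebServiceClient(binding, endpoint);

Note that this only relaxes the quotas: as long as the proxy exposes MTOMAttachment.Value as a byte[], the whole file still has to fit in memory once, so the practical fix may remain more RAM (as in the answer below) or the UpdateItemRequestAdd/UpdateItemRequestAddPart approach mentioned in the question.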
We "resolved" this by telling our customer to get a more potent system with more RAM. With 4-8GB of RAM we were able to upload files up to 200MB without problem.
Upon receiving large files, the Java-HeapSize in IBM Content Manager had to be increased.
http://www.mkyong.com/websphere/how-to-increase-websphere-jvm-memory/
I am attempting to load document files into a document library in SharePoint using the CopyIntoItems method of the SharePoint Copy web service.
The code below executes and returns 0 (success). Also, the CopyResult[] array returns 1 value with a "Success" result. However, I cannot find the document anywhere in the library.
I have two questions:
Can anyone see anything wrong with my code or suggest changes?
Can anyone suggest how I could debug this on the server side? I don't have a tremendous amount of experience with SharePoint. If I can track what is happening through logging or some other method on the server side, it may help me figure out what is going on.
Code Sample:
string[] destinationUrls = { Uri.EscapeDataString("https://someaddress.com/Reports/Temp") };
SPCopyWebService.FieldInformation i1 = new SPCopyWebService.FieldInformation { DisplayName = "Name", InternalName = "Name", Type = SPListTransferSpike1.SPCopyWebService.FieldType.Text, Value = "Test1Name" };
SPCopyWebService.FieldInformation i2 = new SPCopyWebService.FieldInformation { DisplayName = "Title", InternalName = "Title", Type = SPListTransferSpike1.SPCopyWebService.FieldType.Text, Value = "Test1Title" };
SPCopyWebService.FieldInformation[] info = { i1, i2 };
SPCopyWebService.CopyResult[] result;
byte[] data = File.ReadAllBytes("C:\\SomePath\\Test1Data.txt");
uint ret = SPCopyNew.CopyIntoItems("", destinationUrls, info, data, out result);
Edit that got things working:
I got my code working by adding "http://null" to the SourceUrl field. Nat's answer below would probably work for that reason. Here is the line I changed to get it working:
// Change
uint ret = SPCopyNew.CopyIntoItems("http://null", destinationUrls, info, data, out result);
I think the issue may be in trying to set the "Name" property using the web service. I have had some failures doing that.
Given that "Name" is the name of the document, you may have more success with:
string targetDocName = "Test1Name.txt";
string destinationUrl = Uri.EscapeDataString("https://someaddress.com/Reports/Temp/" + targetDocName);
string[] destinationUrls = { destinationUrl };
SPCopyWebService.FieldInformation i1 = new SPCopyWebService.FieldInformation { DisplayName = "Title", InternalName = "Title", Type = SPListTransferSpike1.SPCopyWebService.FieldType.Text, Value = "Test1Title" };
SPCopyWebService.FieldInformation[] info = { i1};
SPCopyWebService.CopyResult[] result;
byte[] data = File.ReadAllBytes("C:\\SomePath\\Test1Data.txt");
uint ret = SPCopyNew.CopyIntoItems(destinationUrl, destinationUrls, info, data, out result);
Note: I have used the "target" as the "source" property. Don't quite know why, but it does the trick.
I didn't understand very well what you're trying to do, but if you're trying to upload a file from a local directory into a SharePoint library, I would suggest you create a WebClient and use UploadData:
Example (VB.NET):
Dim webClient As New WebClient()
webClient.UploadData("http://srvasddress/library/filenameexample.doc", "PUT", filebytes)
Then you just have to check in the file using the lists web service, something like:
listService.CheckInFile("http://srvasddress/library/filenameexample.doc", "description", "1")
Hope it was of some help.
EDIT: Don't forget to set credentials for the web client, etc.
EDIT 2: Update metadata fields using this:
listService.UpdateListItems("Name of the Library", batchQuery)
You can find info on building batch queries here: link
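For completeness, the batch passed to UpdateListItems is just a CAML <Batch> XML fragment. A minimal sketch in C#, using the same listService proxy as above; the field names and item ID are illustrative:

// Sketch: build a CAML batch that sets the Title field on an existing item (ID 1).
var doc = new System.Xml.XmlDocument();
System.Xml.XmlElement batch = doc.CreateElement("Batch");
batch.SetAttribute("OnError", "Continue");
batch.InnerXml =
    "<Method ID='1' Cmd='Update'>" +
    "  <Field Name='ID'>1</Field>" +
    "  <Field Name='Title'>Test1Title</Field>" +
    "</Method>";

// "Name of the Library" must match the list/library name used above.
System.Xml.XmlNode result = listService.UpdateListItems("Name of the Library", batch);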
The SourceUrl is used in SharePoint as a link back to the "source document." In your document library, hover over the item; a down-pointing triangle appears to the right. Clicking it brings up a menu. Click the "View Properties" option. On this page you will see the following: "This item is a copy of http://null ( Go To Source Item | Unlink )".
Because we are using the Copy function, SharePoint keeps track of the "source item" as part of its document management features.