Receiving list of hubs with Autodesk Forge API - c#

I am trying to use the Autodesk forge API for C# to get a list of Hubs.
This is what I have done so far:
HubsApi api = new HubsApi();
Hubs hubs = api.GetHubs();
Pretty simple. But when I do so, I get an exception complaining that one cannot convert from DynamicJsonResponse to Hubs. I think this is because I receive two warnings in my response string, so it is no longer a Hubs object. The warning looks like this:
"warnings":[
{
"Id":null,
"HttpStatusCode":"403",
"ErrorCode":"BIM360DM_ERROR",
"Title":"Unable to get hubs from BIM360DM EMEA.",
"Detail":"You don't have permission to access this API",
"AboutLink":null,
"Source":[
],
"meta":[
]
}
All of this is wrapped in a Dictionary with four entries, and only two of them are data. However, according to Autodesk, this warning can be ignored.
So after that I tried to convert it into a Dictionary and select only the data entry:
HubsApi api = new HubsApi();
DynamicJsonResponse resp = api.GetHubs();
DynamicDictionary hubs = (DynamicDictionary)resp.Dictionary["data"];
Then I looped through it:
for (int i = 0; i < hubs.Dictionary.Count && bim360hub == null; i++)
{
    string hub = hubs.Dictionary.ElementAt(i).ToString();
    [....]
}
But the string hub isn't a JSON hub either. It is an array which looks like this:
[
  0,
  {
    "type": "hubs",
    "id": "****",
    "attributes": {...},
    "links": {...},
    "relationships": {...}
  }
]
And the second element in the array is my hub. I know how I can select the second element, but there must be a much easier way to get the list of hubs.
In an example in the references it seemed to work with these two simple lines of code:
HubsApi api = new HubsApi();
Hubs hubs = api.GetHubs();
Any ideas how I can get my hubs?

First, consider using the async versions of those methods; avoid non-async calls, as they cause your desktop app to freeze (while it is connecting) or allocate more resources on ASP.NET.
The following function is part of this sample, which lists all hubs, projects and files under a user account. It's a good place to start. Note that it organizes the hubs into a TreeNode list, which is compatible with jsTree.
private async Task<IList<TreeNode>> GetHubsAsync()
{
    IList<TreeNode> nodes = new List<TreeNode>();
    HubsApi hubsApi = new HubsApi();
    hubsApi.Configuration.AccessToken = AccessToken;
    var hubs = await hubsApi.GetHubsAsync();
    string urn = string.Empty;
    foreach (KeyValuePair<string, dynamic> hubInfo in new DynamicDictionaryItems(hubs.data))
    {
        string nodeType = "hubs";
        switch ((string)hubInfo.Value.attributes.extension.type)
        {
            case "hubs:autodesk.core:Hub":
                nodeType = "hubs";
                break;
            case "hubs:autodesk.a360:PersonalHub":
                nodeType = "personalhub";
                break;
            case "hubs:autodesk.bim360:Account":
                nodeType = "bim360hubs";
                break;
        }
        TreeNode hubNode = new TreeNode(hubInfo.Value.links.self.href, (nodeType == "bim360hubs" ? "BIM 360 Projects" : hubInfo.Value.attributes.name), nodeType, true);
        nodes.Add(hubNode);
    }
    return nodes;
}
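If you don't need the tree structure, here is a trimmed-down sketch of the same pattern (inside an async method, assuming AccessToken holds a valid token) that just prints each hub:
HubsApi hubsApi = new HubsApi();
hubsApi.Configuration.AccessToken = AccessToken;
dynamic hubs = await hubsApi.GetHubsAsync();
// hubs.data is the JSON:API "data" array; DynamicDictionaryItems makes it enumerable
foreach (KeyValuePair<string, dynamic> hubInfo in new DynamicDictionaryItems(hubs.data))
{
    Console.WriteLine($"{hubInfo.Value.id}: {hubInfo.Value.attributes.name}");
}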


Usage of Document Extraction cognitive skill

I am trying to utilize Azure Cognitive services to perform basic document extraction.
My intent is to input PDFs and DOCXs (and possibly some other files) into the Cognitive Engine for parsing, but unfortunately, the implementation of this is not as simple as it seems.
According to the documentation (https://learn.microsoft.com/en-us/azure/search/cognitive-search-skill-document-extraction#sample-definition), I must define the skill and then I should be able to input files, but there are no examples of how this should be done.
So far I have been able to define the skill but I am still not sure where I should be dropping the files into.
Please see my code below, which seeks to replicate the data structure shown in the example code (albeit using the C# library):
public static DocumentExtractionSkill CreateDocumentExtractionSkill()
{
    List<InputFieldMappingEntry> inputMappings = new List<InputFieldMappingEntry>
    {
        new("file_data") { Source = "/document/file_data" }
    };
    List<OutputFieldMappingEntry> outputMappings = new List<OutputFieldMappingEntry>
    {
        new("content") { TargetName = "extracted_content" }
    };
    DocumentExtractionSkill des = new DocumentExtractionSkill(inputMappings, outputMappings)
    {
        Description = "Extract text (plain and structured) from image",
        ParsingMode = BlobIndexerParsingMode.Text,
        DataToExtract = BlobIndexerDataToExtract.ContentAndMetadata,
        Context = "/document",
    };
    return des;
}
And then I build on this skill like so:
_indexerClient = new SearchIndexerClient(new Uri(Environment.GetEnvironmentVariable("SearchEndpoint")), new AzureKeyCredential(Environment.GetEnvironmentVariable("SearchKey")));
List<SearchIndexerSkill> skills = new List<SearchIndexerSkill> { Skills.DocExtractionSkill.CreateDocumentExtractionSkill() };
SearchIndexerSkillset skillset = new SearchIndexerSkillset("DocumentSkillset", skills)
{
    Description = "Document Cracker Skillset",
    CognitiveServicesAccount = new CognitiveServicesAccountKey(Environment.GetEnvironmentVariable("CognitiveServicesKey"))
};
await _indexerClient.CreateOrUpdateSkillsetAsync(skillset);
And... then what?
There is no clear method that would fit what I believe is the next stage: actually parsing documents.
What is the next step from here to begin dumping files into the _indexerClient (of type SearchIndexerClient)?
The next stage shown in the documentation is:
{
  "values": [
    {
      "recordId": "1",
      "data": {
        "file_data": {
          "$type": "file",
          "data": "aGVsbG8="
        }
      }
    }
  ]
}
It is not clear where I would be doing this.
In the documentation you mentioned, they are actually retrieving the output through Postman: a JSON request is sent via a GET method to the mentioned URL (i.e. the cognitive skill URL) to receive the extracted document content, and the files/documents need to be uploaded to your storage account in order to be extracted.
You can follow this tutorial to get more insights.
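To actually feed documents through the skillset, the usual next step is to upload the files to a blob container, create a data source pointing at it, and create an indexer that references the skillset. Here is a rough sketch using the same SearchIndexerClient; the names ("docs-datasource", "docs-container", "docs-index", "docs-indexer"), the connection string variable, and the allowSkillsetToReadFileData flag are assumptions for illustration, not taken from the question:
// Hypothetical sketch: connect the skillset to blob storage via a data source and an indexer.
var dataSource = new SearchIndexerDataSourceConnection(
    "docs-datasource",
    SearchIndexerDataSourceType.AzureBlob,
    Environment.GetEnvironmentVariable("BlobConnectionString"),
    new SearchIndexerDataContainer("docs-container"));
await _indexerClient.CreateOrUpdateDataSourceConnectionAsync(dataSource);

var indexer = new SearchIndexer("docs-indexer", "docs-datasource", "docs-index")
{
    SkillsetName = "DocumentSkillset",
    Parameters = new IndexingParameters
    {
        // Without this, the skill's "/document/file_data" input is not populated.
        IndexingParametersConfiguration = new IndexingParametersConfiguration
        {
            AllowSkillsetToReadFileData = true
        }
    }
};
await _indexerClient.CreateOrUpdateIndexerAsync(indexer);
await _indexerClient.RunIndexerAsync("docs-indexer"); // pulls the blobs and runs the skill
You'd also need the target index to exist, and typically an output field mapping from /document/extracted_content into one of its fields; those parts are omitted here.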

How to set MongoDB Change Stream 'OperationType' in the C# driver?

When running the new MongoDB Server, version 3.6, and trying to add a Change Stream watch to a collection to get notifications of new inserts and updates of documents, I only receive notifications for updates, not for inserts.
This is the default way I have tried to add the watch:
IMongoDatabase mongoDatabase = mongoClient.GetDatabase("Sandbox");
IMongoCollection<BsonDocument> collection = mongoDatabase.GetCollection<BsonDocument>("TestCollection");
var changeStream = collection.Watch().ToEnumerable().GetEnumerator();
changeStream.MoveNext();
var next = changeStream.Current;
Then I downloaded the C# source code from MongoDB to see how they did this. Looking at their test code for change stream watches, they create a new document (insert) and then change that document right away (update), and THEN set up the Change Stream watch to receive an 'update' notification.
No example is given on how to watch for 'insert' notifications.
I have looked at the Java and NodeJS examples, both on the MongoDB website and on SO, which seem to be straightforward and define a way to see both inserts and updates:
var changeStream = collection.watch({ '$match': { $or: [ { 'operationType': 'insert' }, { 'operationType': 'update' } ] } });
The API for the C# driver is vastly different; I would have assumed they would keep the same API for C# as for Java and NodeJS. I have found very few C# examples doing the same thing.
The closest I have come is the following attempt, which still fails, and the documentation for the C# version is very limited (or I have not found the right location). Setup is as follows:
String json = "{ '$match': { 'operationType': { '$in': ['insert', 'update'] } } }";
var options = new ChangeStreamOptions { FullDocument = ChangeStreamFullDocumentOption.UpdateLookup };
PipelineDefinition<ChangeStreamDocument<BsonDocument>, ChangeStreamDocument<BsonDocument>> pipeline = new EmptyPipelineDefinition<ChangeStreamDocument<BsonDocument>>().Match(Builders<ChangeStreamDocument<BsonDocument>>.Filter.Text(json,"json"));
Then running the statement below throws an exception:
{"Command aggregate failed: $match with $text is only allowed as the first pipeline stage."}
No other Filter options have worked either, and I have not found a way to just enter the JSON as a string to set the 'operationType'.
var changeStream = collection.Watch(pipeline, options).ToEnumerable().GetEnumerator();
changeStream.MoveNext();
var next = changeStream.Current;
My only goal here is to be able to set the 'operationType' using the C# driver. Does anyone know what I am doing wrong or have tried this using the C# driver and had success?
After reading through a large number of webpages, with very little info on the C# version of the MongoDB driver, I am very stuck!
Any help would be much appreciated.
Here is a sample of code I've used to update the collection Watch to retrieve "events" other than just document updates.
IMongoDatabase sandboxDB = mongoClient.GetDatabase("Sandbox");
IMongoCollection<BsonDocument> collection = sandboxDB.GetCollection<BsonDocument>("TestCollection");
//Get the whole document instead of just the changed portion
ChangeStreamOptions options = new ChangeStreamOptions() { FullDocument = ChangeStreamFullDocumentOption.UpdateLookup };
//The operationType can be one of the following: insert, update, replace, delete, invalidate
var pipeline = new EmptyPipelineDefinition<ChangeStreamDocument<BsonDocument>>().Match("{ operationType: { $in: [ 'replace', 'insert', 'update' ] } }");
var changeStream = collection.Watch(pipeline, options).ToEnumerable().GetEnumerator();
changeStream.MoveNext(); //Blocks until a document is replaced, inserted or updated in the TestCollection
ChangeStreamDocument<BsonDocument> next = changeStream.Current;
changeStream.Dispose();
The EmptyPipelineDefinition...Match() argument could also be:
"{ $or: [ {operationType: 'replace' }, { operationType: 'insert' }, { operationType: 'update' } ] }"
if you wanted to use the $or command, or
"{ operationType: /^[^d]/ }"
to throw a little regex in there. This last one is saying, I want all operationTypes unless they start with the letter 'd'.
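If you prefer the typed API over a raw JSON string, the same match can be written with the filter builder on ChangeStreamDocument (a sketch; behavior should be equivalent to the JSON above):
var filter = Builders<ChangeStreamDocument<BsonDocument>>.Filter.In(
    d => d.OperationType,
    new[] { ChangeStreamOperationType.Insert, ChangeStreamOperationType.Update, ChangeStreamOperationType.Replace });
var pipeline = new EmptyPipelineDefinition<ChangeStreamDocument<BsonDocument>>().Match(filter);
var changeStream = collection.Watch(pipeline, options).ToEnumerable().GetEnumerator();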

Parsing InfluxDB result using JSON.net

I'm trying to make a command line utility for initializing an InfluxDB database, but I'm pretty new to Influx, and C# in general.
With the following response from the Influx DB Database, I'm trying to pretty print this in the console window.
Ideally I would have errors show up in the standard error buffer, and warnings or info's show up in the standard output.
However, when running the code below in a debug environment, the messages token appears to be in an incorrect format according to several JSONPath checkers that I have used.
JSON input as result.Body
{
  "results": [
    {
      "statement_id": 0,
      "messages": [
        {
          "level": "warning",
          "text": "deprecated use of 'CREATE RETENTION POLICY Primary ON SensorData DURATION 30d REPLICATION 1' in a read only context, please use a POST request instead"
        }
      ]
    }
  ]
}
JSON output as messages prior to transformation:
messages {{
"level": "warning",
"text": "deprecated use of 'CREATE RETENTION POLICY Primary ON SensorData DURATION 30d REPLICATION 1' in a read only context, please use a POST request instead"
}} Newtonsoft.Json.Linq.JToken {Newtonsoft.Json.Linq.JObject}
As you can see, the messages output is a nested object {{}} rather than an array as expected...
According to https://jsonpath.curiousconcept.com/ and several other jsonpath checkers, I was expecting something similar to:
[
  {
    "level": "warning",
    "text": "deprecated use of 'CREATE RETENTION POLICY Primary ON SensorData DURATION 30d REPLICATION 1' in a read only context, please use a POST request instead"
  }
]
C#
private static void PrintResult(IInfluxDataApiResponse result)
{
    var output = result.Success ? System.Console.Out : System.Console.Error;
    output.WriteLine("[" + result.StatusCode + "] : " + result.Body);
    var json = JObject.Parse(result.Body);
    var messages = json.SelectToken("$.results[*].messages[*]"); // outputs an array of messages if it exists, e.g. [{level:warning,text:test}]
    if (messages != null)
    {
        var transformed = messages.Select(m => new { level = (string)m["level"], text = (string)m["text"] }).ToList();
        foreach (var msg in transformed)
        {
            output.WriteLine($"[{result.StatusCode}] : {msg.level} - {msg.text}");
        }
    }
}
For my uses at least, using json.SelectTokens("$.results[*].messages[*]") rather than json.SelectToken("$.results[*].messages[*]") allowed me to work around the issue, as I could then treat the result as a C# enumerable instead of special-casing one result vs. many. SelectToken seems to flatten single results into an object, whereas other implementations would have it be an array.
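With SelectTokens, the transformation in PrintResult collapses to a plain loop over the returned tokens (a sketch based on the method above):
var messages = json.SelectTokens("$.results[*].messages[*]");
foreach (var m in messages)
{
    output.WriteLine($"[{result.StatusCode}] : {(string)m["level"]} - {(string)m["text"]}");
}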

Selenium track network data transferred

Using the developer tools and the Network tab in Chrome, you are able to track the quantity of data transferred.
I'm trying to find a way to do this programmatically using selenium.
I have tried adding up the transferSize in the Performance.Resource entries (as described here: how to access Network panel on google chrome developer toools with selenium?).
performance.clearResourceTimings();
//Do Work
window.performance.getEntriesByType('resource');
However, this value is not accurate.
Is this possible?
My current (inaccurate) code that totals the transferSize figures:
IWebDriver m_driver;
/* Configure WebDriver */
IJavaScriptExecutor m_jse = (IJavaScriptExecutor)m_driver;
object Ents = m_jse.ExecuteScript("return window.performance.getEntriesByType('resource');");
// The returned JS array comes back as a collection; use reflection to get Count and index into it
MethodInfo mGetCount = Ents.GetType().GetMethod("get_Count");
int count = (int)mGetCount.Invoke(Ents, null);
double size = 0;
for (int i = 0; i < count; i++)
{
    MethodInfo mi = Ents.GetType().GetMethod("get_Item");
    Dictionary<string, object> dict = (Dictionary<string, object>)mi.Invoke(Ents, new object[] { i });
    object Value;
    dict.TryGetValue("transferSize", out Value);
    size += Convert.ToDouble(Value.ToString());
}
return size;
You are probably seeing inaccurate window.performance transferSize values because of CORS:
When CORS is in effect, many of the timing properties' values are returned as zero unless the server's access policy permits these values to be shared. This requires the server providing the resource to send the Timing-Allow-Origin HTTP response header with a value specifying the origin or origins which are allowed to get the restricted timestamp values.
Reference: https://developer.mozilla.org/en-US/docs/Web/API/Resource_Timing_API/Using_the_Resource_Timing_API#Coping_with_CORS
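So for resources served from origins you control, you can opt in to full timing data by having the server send that header, e.g.:
Timing-Allow-Origin: *
Third-party resources without the header will keep reporting transferSize as 0.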
HAR Export Trigger by Firebug might be a tool you'd want to consider. This Firefox-only extension will export all network/performance data into a .har file, which you can similarly parse:
...
"content": {
"size": 33,
"compression": 0,
"mimeType": "text/html; charset=utf-8",
"text": "PGh0bWw+PGhlYWQ+PC9oZWFkPjxib2R5Lz48L2h0bWw+XG4=",
"encoding": "base64",
"comment": ""
}
...
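Once you have the .har file, totaling the transferred data is a small JSON.NET exercise (a sketch; per the HAR spec, sizes live under log.entries[].response — content.size is the decoded size, while bodySize/headersSize reflect what went over the wire):
// Sketch: sum response content sizes from an exported HAR file with JSON.NET
// using System.IO; using System.Linq; using Newtonsoft.Json.Linq;
var har = JObject.Parse(File.ReadAllText("network.har"));
long totalBytes = har["log"]["entries"]
    .Sum(e => (long?)e["response"]["content"]["size"] ?? 0);
Console.WriteLine($"Total content bytes: {totalBytes}");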

How to set permissions to public, Elastic Transcoder Amazon SDK

How can I make all files public using Amazon.ElasticTranscoder.Model (.NET, C#)?
Here is my code:
public static void CreateJobRequest(string videoPath, string bucketName)
{
    string accessKey = CloudSettings.AccessKeyID;
    string secretKey = CloudSettings.SecreteKey;
    var etsClient = new AmazonElasticTranscoderClient(accessKey, secretKey, RegionEndpoint.USEast1);
    var notifications = new Notifications()
    {
        Completed = "arn:aws:sns:us-east-1:XXXXXXXXXXXX:Transcode",
        Error = "arn:aws:sns:us-east-1:XXXXXXXXXXXX:Transcode",
        Progressing = "arn:aws:sns:us-east-1:XXXXXXXXXXXX:Transcode",
        Warning = "arn:aws:sns:us-east-1:XXXXXXXXXXXX:Transcode"
    };
    var pipeline = etsClient.CreatePipeline(new CreatePipelineRequest()
    {
        Name = "MyFolder",
        InputBucket = bucketName,
        OutputBucket = bucketName,
        Notifications = notifications,
        Role = "arn:aws:iam::XXXXXXXXXXXX:role/Elastic_Transcoder_Default_Role"
    }).CreatePipelineResult.Pipeline;
    etsClient.CreateJob(new CreateJobRequest()
    {
        PipelineId = pipeline.Id,
        Input = new JobInput()
        {
            AspectRatio = "auto",
            Container = "mp4",
            FrameRate = "auto",
            Interlaced = "auto",
            Resolution = "auto",
            Key = videoPath
        },
        Output = new CreateJobOutput()
        {
            ThumbnailPattern = videoPath + "videoName{resolution}_{count}",
            Rotate = "0",
            PresetId = "1351620000000-000020",
            Key = videoPath + "newFileName.mp4"
        }
    });
}
Everything works perfectly, but the transcoded files are private. How can I set them to public?
I've just had this issue today and the way to resolve it is as follows:
In your Pipeline under "Configure Amazon S3 Bucket for Transcoded Files and Playlists"
Use the "+ Add a permission link".
Select "Grantee Type" as "Amazon S3 Group".
Select "Grantee" as "All Users".
Then check "Access" as "Open/Download" AND "View Permission"
Save changes.
You can repeat this for any thumbnails that are generated in the section directly beneath: "Configure Amazon S3 Bucket for Thumbnails".
Just to add to @timstermatic's answer - I only had to grant 'Open/Download' access to make the objects public.
The 'View Permissions' option is used to allow anyone to read the ACL (Access Control List) of the object, not to view the object itself - that's taken care of by the 'Open/Download' option.
As usual with AWS terms, it's easy to misinterpret - it's not 'Permission to View the Object', it's 'View the Object's Permissions'.
(Sorry I couldn't add this as a comment, I don't have enough rep.)
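If you'd rather configure this from code than in the console, the same grants can be set when the pipeline is created, via the ContentConfig and ThumbnailConfig properties of CreatePipelineRequest. A sketch based on the AWS SDK for .NET types, reusing etsClient, notifications and bucketName from the question (note that when these two configs are supplied, OutputBucket is omitted):
var publicRead = new List<Permission>
{
    new Permission
    {
        GranteeType = "Group",
        Grantee = "AllUsers",
        Access = new List<string> { "Read" } // "Read" corresponds to Open/Download in the console
    }
};
var pipeline = etsClient.CreatePipeline(new CreatePipelineRequest()
{
    Name = "MyFolder",
    InputBucket = bucketName,
    Notifications = notifications,
    Role = "arn:aws:iam::XXXXXXXXXXXX:role/Elastic_Transcoder_Default_Role",
    ContentConfig = new PipelineOutputConfig { Bucket = bucketName, Permissions = publicRead },
    ThumbnailConfig = new PipelineOutputConfig { Bucket = bucketName, Permissions = publicRead }
}).CreatePipelineResult.Pipeline;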
Usually a conflict in policies can give rise to this situation. An easier way to handle this is to set the public read permission straight on the bucket in the "Bucket Policy" section of the Permissions tab. This bit of code might help.
{
"Version": "2008-10-17",
"Statement": [
{
"Sid": "AllowPublicRead",
"Effect": "Allow",
"Principal": {
"AWS": "*"
},
"Action": "s3:GetObject",
"Resource": "arn:aws:s3:::<BUCKET_NAME>/*"
}
]
}
You can specify the folder as well in the Resource parameter. Haven't tried extensions but I guess those should work too.
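If you'd rather apply that bucket policy from C# than in the console, a sketch with the AWS SDK for .NET (policyJson here is assumed to hold the JSON document above with <BUCKET_NAME> filled in):
// Requires the AWSSDK.S3 package (Amazon.S3, Amazon.S3.Model namespaces)
var s3 = new AmazonS3Client(accessKey, secretKey, RegionEndpoint.USEast1);
await s3.PutBucketPolicyAsync(new PutBucketPolicyRequest
{
    BucketName = bucketName,
    Policy = policyJson // the policy JSON shown above
});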
