Selenium track network data transferred - c#

Using the developer tools' Network tab in Chrome you can track the quantity of data transferred.
I'm trying to find a way to do this programmatically using Selenium.
I have tried adding up the transferSize values in the performance resource entries (as described here: how to access Network panel on google chrome developer tools with selenium?):
performance.clearResourceTimings();
//Do Work
window.performance.getEntriesByType('resource');
However, this value is not accurate.
Is this possible?
My current (inaccurate) code, which totals the transferSize figures:
IWebDriver m_driver;
/*Configure WebDriver*/
IJavaScriptExecutor m_jse = (IJavaScriptExecutor)m_driver;
object Ents = m_jse.ExecuteScript("return window.performance.getEntriesByType('resource');");
// The executor returns a collection of dictionaries; read Count and the indexer via reflection
MethodInfo mGetCount = Ents.GetType().GetMethod("get_Count");
int count = (int)mGetCount.Invoke(Ents, null);
double size = 0;
for (int i = 0; i < count; i++)
{
    MethodInfo mi = Ents.GetType().GetMethod("get_Item");
    Dictionary<string, object> dict = (Dictionary<string, object>)mi.Invoke(Ents, new object[] { i });
    object Value;
    dict.TryGetValue("transferSize", out Value);
    size += Convert.ToDouble(Value.ToString());
}
return size;

You are probably seeing inaccurate window.performance transferSize values because of CORS:
When CORS is in effect, many of the timing properties' values are returned as zero unless the server's access policy permits these values to be shared. This requires the server providing the resource to send the Timing-Allow-Origin HTTP response header with a value specifying the origin or origins which are allowed to get the restricted timestamp values.
Reference: https://developer.mozilla.org/en-US/docs/Web/API/Resource_Timing_API/Using_the_Resource_Timing_API#Coping_with_CORS
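For example, if you control the server that serves the cross-origin resources, the restricted fields (including transferSize) only become visible once it sends that header; the origin below is just a placeholder:
Timing-Allow-Origin: https://your-test-origin.example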
HAR Export Trigger by Firebug might be a tool you'd want to consider. This Firefox-only extension will export all network/performance data into a .har file, which you can similarly parse:
...
"content": {
"size": 33,
"compression": 0,
"mimeType": "text/html; charset=utf-8",
"text": "PGh0bWw+PGhlYWQ+PC9oZWFkPjxib2R5Lz48L2h0bWw+XG4=",
"encoding": "base64",
"comment": ""
}
...
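If you go the HAR route, a minimal C# sketch along these lines could total the sizes, assuming Newtonsoft.Json is available and the export follows the standard HAR layout. Note that content.size (shown above) is the uncompressed body size; the bytes actually transferred would instead come from response.bodySize plus response.headersSize.
using System;
using System.IO;
using System.Linq;
using Newtonsoft.Json.Linq;

// Sum the reported response content sizes in a HAR export (hypothetical file path)
JObject har = JObject.Parse(File.ReadAllText(@"C:\temp\export.har"));
long totalBytes = har["log"]["entries"]
    .Select(e => (long?)e["response"]["content"]["size"] ?? 0)
    .Sum();
Console.WriteLine($"Total content size: {totalBytes} bytes");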

Related

Usage of Document Extraction cognitive skill

I am trying to utilize Azure Cognitive services to perform basic document extraction.
My intent is to input PDFs and DOCXs (and possibly some other files) into the Cognitive Engine for parsing, but unfortunately, the implementation of this is not as simple as it seems.
According to the documentation (https://learn.microsoft.com/en-us/azure/search/cognitive-search-skill-document-extraction#sample-definition), I must define the skill, and then I should be able to input files, but there are no examples of how this should be done.
So far I have been able to define the skill, but I am still not sure where I should be dropping the files.
Please see my code below, as it seeks to replicate the same data structure shown in the example code (albeit using the C# Library)
public static DocumentExtractionSkill CreateDocumentExtractionSkill()
{
List<InputFieldMappingEntry> inputMappings = new List<InputFieldMappingEntry>
{
new("file_data") {Source = "/document/file_data"}
};
List<OutputFieldMappingEntry> outputMappings = new List<OutputFieldMappingEntry>
{
new("content") {TargetName = "extracted_content"}
};
DocumentExtractionSkill des = new DocumentExtractionSkill(inputMappings, outputMappings)
{
Description = "Extract text (plain and structured) from image",
ParsingMode = BlobIndexerParsingMode.Text,
DataToExtract = BlobIndexerDataToExtract.ContentAndMetadata,
Context = "/document",
};
return des;
}
And then I build on this skill like so:
_indexerClient = new SearchIndexerClient(new Uri(Environment.GetEnvironmentVariable("SearchEndpoint")), new AzureKeyCredential(Environment.GetEnvironmentVariable("SearchKey")));
List<SearchIndexerSkill> skills = new List<SearchIndexerSkill> { Skills.DocExtractionSkill.CreateDocumentExtractionSkill() };
SearchIndexerSkillset skillset = new SearchIndexerSkillset("DocumentSkillset", skills)
{
Description = "Document Cracker Skillset",
CognitiveServicesAccount = new CognitiveServicesAccountKey(Environment.GetEnvironmentVariable("CognitiveServicesKey"))
};
await _indexerClient.CreateOrUpdateSkillsetAsync(skillset);
And... then what?
There is no clear method that would fit what I believe is the next stage: actually parsing documents.
What is the next step from here to begin dumping files into the _indexerClient (of type SearchIndexerClient)?
As the next stage shown in the documentation is:
{
"values": [
{
"recordId": "1",
"data":
{
"file_data": {
"$type": "file",
"data": "aGVsbG8="
}
}
}
]
}
It is not clear where I would be doing this.
According to the documentation you mentioned, they are actually getting the output through Postman: they use a GET method to receive the extracted document content by sending a JSON request to the mentioned URL (i.e. the cognitive skill URL), and the files/documents need to be uploaded to your storage account in order to be extracted.
You can follow this tutorial to get more insights.
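To make that concrete: you don't push files at the SearchIndexerClient directly. Instead, upload the documents to a blob container, then wire the skillset to them through a data source and an indexer. A rough sketch using Azure.Search.Documents is below; the names ("my-blob-datasource", "docs", "my-index", "my-indexer") and the connection-string variable are placeholders, and a target index named "my-index" is assumed to exist already:
// Point a data source at the blob container holding the PDFs/DOCX files
var dataSource = new SearchIndexerDataSourceConnection(
    "my-blob-datasource",
    SearchIndexerDataSourceType.AzureBlob,
    Environment.GetEnvironmentVariable("StorageConnectionString"),
    new SearchIndexerDataContainer("docs"));
await _indexerClient.CreateOrUpdateDataSourceConnectionAsync(dataSource);

// The indexer ties the data source, the skillset defined above and the target index together.
// Depending on the skill inputs, the indexer configuration setting allowSkillsetToReadFileData
// may also need to be enabled so that /document/file_data is populated.
var indexer = new SearchIndexer("my-indexer", "my-blob-datasource", "my-index")
{
    SkillsetName = "DocumentSkillset"
};
await _indexerClient.CreateOrUpdateIndexerAsync(indexer);
await _indexerClient.RunIndexerAsync("my-indexer"); // runs the pipeline over the uploaded blobs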

How to set MongoDB Change Stream 'OperationType' in the C# driver?

When running the new MongoDB server, version 3.6, and trying to add a change stream watch to a collection to get notifications of new inserts and updates of documents, I only receive notifications for updates, not for inserts.
This is the default way I have tried to add the watch:
IMongoDatabase mongoDatabase = mongoClient.GetDatabase("Sandbox");
IMongoCollection<BsonDocument> collection = mongoDatabase.GetCollection<BsonDocument>("TestCollection");
var changeStream = collection.Watch().ToEnumerable().GetEnumerator();
changeStream.MoveNext();
var next = changeStream.Current;
Then I downloaded the C# source code from MongoDB to see how they did this. Looking at their test code for change stream watches, they create a new document (insert), change that document right away (update), and THEN set up the change stream watch to receive an 'update' notification.
No example is given on how to watch for 'insert' notifications.
I have looked at the Java and NodeJS examples, both on the MongoDB website and on SO, which seem to be straightforward and define a way to see both inserts and updates:
var changeStream = collection.watch({ '$match': { $or: [ { 'operationType': 'insert' }, { 'operationType': 'update' } ] } });
The API for the C# driver is vastly different; I would have assumed they would have kept the same API for C# as for Java and NodeJS. I have found few or no C# examples that do the same thing.
The closest I have come is the following attempt, which still fails; the documentation for the C# version is very limited (or I have not found the right location). The setup is as follows:
String json = "{ '$match': { 'operationType': { '$in': ['insert', 'update'] } } }";
var options = new ChangeStreamOptions { FullDocument = ChangeStreamFullDocumentOption.UpdateLookup };
PipelineDefinition<ChangeStreamDocument<BsonDocument>, ChangeStreamDocument<BsonDocument>> pipeline = new EmptyPipelineDefinition<ChangeStreamDocument<BsonDocument>>().Match(Builders<ChangeStreamDocument<BsonDocument>>.Filter.Text(json,"json"));
Then running the statement below throws an Exception:
{"Command aggregate failed: $match with $text is only allowed as the
first pipeline stage."}
No other Filter options have worked either, and I have not found a way to just enter the JSON as a string to set the 'operationType'.
var changeStream = collection.Watch(pipeline, options).ToEnumerable().GetEnumerator();
changeStream.MoveNext();
var next = changeStream.Current;
My only goal here is to be able to set the 'operationType' using the C# driver. Does anyone know what I am doing wrong or have tried this using the C# driver and had success?
After reading through a large number of webpages with very little info on the C# version of the MongoDB driver, I am very stuck!
Any help would be much appreciated.
Here is a sample of code I've used to update the collection Watch to retrieve "events" other than just document updates.
IMongoDatabase sandboxDB = mongoClient.GetDatabase("Sandbox");
IMongoCollection<BsonDocument> collection = sandboxDB.GetCollection<BsonDocument>("TestCollection");
//Get the whole document instead of just the changed portion
ChangeStreamOptions options = new ChangeStreamOptions() { FullDocument = ChangeStreamFullDocumentOption.UpdateLookup };
//The operationType can be one of the following: insert, update, replace, delete, invalidate
var pipeline = new EmptyPipelineDefinition<ChangeStreamDocument<BsonDocument>>().Match("{ operationType: { $in: [ 'replace', 'insert', 'update' ] } }");
var changeStream = collection.Watch(pipeline, options).ToEnumerable().GetEnumerator();
changeStream.MoveNext(); //Blocks until a document is replaced, inserted or updated in the TestCollection
ChangeStreamDocument<BsonDocument> next = changeStream.Current;
changeStream.Dispose();
The EmptyPipelineDefinition...Match() argument could also be:
"{ $or: [ {operationType: 'replace' }, { operationType: 'insert' }, { operationType: 'update' } ] }"
If you wanted to use the $or command, or
"{ operationType: /^[^d]/ }"
to throw a little regex in there. This last one is saying, I want all operationTypes unless they start with the letter 'd'.
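Once the watch returns, the operation type and (because of the UpdateLookup option) the full document are available on the change document; a minimal sketch of reading them:
// Read details of the change that unblocked MoveNext()
ChangeStreamDocument<BsonDocument> change = changeStream.Current;
Console.WriteLine(change.OperationType);           // Insert, Update, Replace, ...
Console.WriteLine(change.FullDocument?.ToJson());  // the document after the change (UpdateLookup)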

Receiving list of hubs with Autodesk Forge API

I am trying to use the Autodesk forge API for C# to get a list of Hubs.
This is what I have done so far:
HubsApi api = new HubsApi();
Hubs hubs = api.GetHubs();
Pretty simple. But when I do so, I get an exception which complains that it cannot convert from DynamicJsonResponse to Hubs. I think this is because I receive two warnings in my response string, so it is no longer a Hubs object. The warning looks like this:
"warnings":[
{
"Id":null,
"HttpStatusCode":"403",
"ErrorCode":"BIM360DM_ERROR",
"Title":"Unable to get hubs from BIM360DM EMEA.",
"Detail":"You don't have permission to access this API",
"AboutLink":null,
"Source":[
],
"meta":[
]
}
All of this is wrapped in a Dictionary with four entries, and only two of them are data. However, according to Autodesk, this warning can be ignored.
So after that I tried to convert it to a Dictionary and select only the data entry:
HubsApi api = new HubsApi();
DynamicJsonResponse resp = api.GetHubs();
DynamicDictionary hubs = (DynamicDictionary)resp.Dictionary["data"];
Then I looped through it:
for(int i = 0; i < hubs.Dictionary.Count && bim360hub == null; i++)
{
string hub = hubs.Dictionary.ElementAt(i).ToString();
[....]
}
But the string hub isn't a JSON hub either. It is an array which looks like this:
[
0,
{
"type": "hubs",
"id": "****",
"attributes": {...},
"links": {...},
"relationships": {...},
}
]
And the second element in the array is my hub. I know how I can select the second element, but there must be a much easier way to get the list of hubs.
In an example in the references it seemed to work with these two simple lines of code:
HubsApi api = new HubsApi();
Hubs hubs = api.GetHubs();
Any ideas how I can get my hubs?
First, consider using the async versions of those methods; avoid non-async calls, as they cause your desktop app to freeze (while connecting) or allocate more resources on ASP.NET.
The following function is part of this sample, which lists all hubs, projects and files under a user account. It's a good place to start. Note it organizes the hubs into a TreeNode list, which is compatible with jsTree.
private async Task<IList<TreeNode>> GetHubsAsync()
{
IList<TreeNode> nodes = new List<TreeNode>();
HubsApi hubsApi = new HubsApi();
hubsApi.Configuration.AccessToken = AccessToken;
var hubs = await hubsApi.GetHubsAsync();
string urn = string.Empty;
foreach (KeyValuePair<string, dynamic> hubInfo in new DynamicDictionaryItems(hubs.data))
{
string nodeType = "hubs";
switch ((string)hubInfo.Value.attributes.extension.type)
{
case "hubs:autodesk.core:Hub":
nodeType = "hubs";
break;
case "hubs:autodesk.a360:PersonalHub":
nodeType = "personalhub";
break;
case "hubs:autodesk.bim360:Account":
nodeType = "bim360hubs";
break;
}
TreeNode hubNode = new TreeNode(hubInfo.Value.links.self.href, (nodeType == "bim360hubs" ? "BIM 360 Projects" : hubInfo.Value.attributes.name), nodeType, true);
nodes.Add(hubNode);
}
return nodes;
}
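If you don't need the jsTree-specific TreeNode wrapping, the same DynamicDictionaryItems pattern is enough to enumerate the hubs directly; a minimal sketch, assuming an async context and a valid AccessToken:
HubsApi hubsApi = new HubsApi();
hubsApi.Configuration.AccessToken = AccessToken;
var hubs = await hubsApi.GetHubsAsync();
// Each entry's Value exposes the hub's id, attributes, links and relationships
foreach (KeyValuePair<string, dynamic> hubInfo in new DynamicDictionaryItems(hubs.data))
{
    Console.WriteLine($"{hubInfo.Value.id}: {hubInfo.Value.attributes.name}");
}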

How to set permissions to public, Elastic Transcoder Amazon SDK

How can I make all files public using Amazon.ElasticTranscoder.Model (.NET, C#)?
Here is my code:
public static void CreateJobRequest(string videoPath, string bucketName)
{
string accessKey = CloudSettings.AccessKeyID;
string secretKey = CloudSettings.SecreteKey;
var etsClient = new AmazonElasticTranscoderClient(accessKey, secretKey, RegionEndpoint.USEast1);
var notifications = new Notifications()
{
Completed = "arn:aws:sns:us-east-1:XXXXXXXXXXXX:Transcode",
Error = "arn:aws:sns:us-east-1:XXXXXXXXXXXX:Transcode",
Progressing = "arn:aws:sns:us-east-1:XXXXXXXXXXXX:Transcode",
Warning = "arn:aws:sns:us-east-1:XXXXXXXXXXXX:Transcode"
};
var pipeline=etsClient.CreatePipeline(new CreatePipelineRequest()
{
Name = "MyFolder",
InputBucket = bucketName,
OutputBucket = bucketName,
Notifications = notifications,
Role = "arn:aws:iam::XXXXXXXXXXXX:role/Elastic_Transcoder_Default_Role"
}).CreatePipelineResult.Pipeline;
etsClient.CreateJob(new CreateJobRequest()
{
PipelineId = pipeline.Id,
Input = new JobInput()
{
AspectRatio = "auto",
Container = "mp4",
FrameRate = "auto",
Interlaced = "auto",
Resolution = "auto",
Key = videoPath
},
Output = new CreateJobOutput()
{
ThumbnailPattern = videoPath+"videoName{resolution}_{count}",
Rotate = "0",
PresetId = "1351620000000-000020",
Key = videoPath+"newFileName.mp4"
}
});
}
Everything works perfectly, but the transcoded files are private. How can I set them to public?
I've just had this issue today and the way to resolve it is as follows:
In your pipeline, under "Configure Amazon S3 Bucket for Transcoded Files and Playlists":
Use the "+ Add a permission" link.
Select "Grantee Type" as "Amazon S3 Group".
Select "Grantee" as "All Users".
Then check "Access" as "Open/Download" AND "View Permission".
Save changes.
You can repeat this for any thumbnails that are generated in the section directly beneath: "Configure Amazon S3 Bucket for Thumbnails".
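If you'd rather set this up programmatically, the same grants can in principle be attached when creating the pipeline, via ContentConfig and ThumbnailConfig instead of OutputBucket. A rough sketch against the question's code, assuming the Amazon.ElasticTranscoder.Model types Permission and PipelineOutputConfig behave as described in the SDK docs:
// Public-read grant equivalent to the console's "Open/Download" for the "All Users" S3 group
var publicRead = new Permission
{
    GranteeType = "Group",
    Grantee = "AllUsers",
    Access = new List<string> { "Read" }
};
var pipeline = etsClient.CreatePipeline(new CreatePipelineRequest()
{
    Name = "MyFolder",
    InputBucket = bucketName,
    // OutputBucket is omitted when ContentConfig/ThumbnailConfig are supplied
    Notifications = notifications,
    Role = "arn:aws:iam::XXXXXXXXXXXX:role/Elastic_Transcoder_Default_Role",
    ContentConfig = new PipelineOutputConfig { Bucket = bucketName, Permissions = new List<Permission> { publicRead } },
    ThumbnailConfig = new PipelineOutputConfig { Bucket = bucketName, Permissions = new List<Permission> { publicRead } }
}).CreatePipelineResult.Pipeline;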
Just to add to @timstermatic's answer - I only had to grant 'Open/Download' access to make the objects public.
The 'View Permissions' option is used to allow anyone to read the ACL (Access Control List) of the object, not to view the object itself - that's taken care of by the 'Open/Download' option.
As usual with AWS terms, it's easy to misinterpret - it's not 'Permission to View the Object', it's 'View the Object's Permissions'.
(Sorry I couldn't add this as a comment, I don't have enough rep.)
Usually a conflict in policies can give rise to this situation. An easier way to handle it is to set the public read permission straight on the bucket, in the "Bucket Policy" section of the Permissions tab. This bit of code might help:
{
"Version": "2008-10-17",
"Statement": [
{
"Sid": "AllowPublicRead",
"Effect": "Allow",
"Principal": {
"AWS": "*"
},
"Action": "s3:GetObject",
"Resource": "arn:aws:s3:::<BUCKET_NAME>/*"
}
]
}
You can specify the folder as well in the Resource parameter. Haven't tried extensions but I guess those should work too.

How do you use the CopyIntoItems method of the SharePoint Copy web service?

I am attempting to load document files into a document library in SharePoint using the CopyIntoItems method of the SharePoint Copy web service.
The code below executes and returns 0 (success). Also, the CopyResult[] array returns 1 value with a "Success" result. However, I cannot find the document anywhere in the library.
I have two questions:
Can anyone see anything wrong with my code or suggest changes?
Can anyone suggest how I could debug this on the server side? I don't have a tremendous amount of experience with SharePoint. If I can track what is going on through logging or some other method on the server side, it may help me figure out what is happening.
Code Sample:
string[] destinationUrls = { Uri.EscapeDataString("https://someaddress.com/Reports/Temp") };
SPCopyWebService.FieldInformation i1 = new SPCopyWebService.FieldInformation { DisplayName = "Name", InternalName = "Name", Type = SPListTransferSpike1.SPCopyWebService.FieldType.Text, Value = "Test1Name" };
SPCopyWebService.FieldInformation i2 = new SPCopyWebService.FieldInformation { DisplayName = "Title", InternalName = "Title", Type = SPListTransferSpike1.SPCopyWebService.FieldType.Text, Value = "Test1Title" };
SPCopyWebService.FieldInformation[] info = { i1, i2 };
SPCopyWebService.CopyResult[] result;
byte[] data = File.ReadAllBytes("C:\\SomePath\\Test1Data.txt");
uint ret = SPCopyNew.CopyIntoItems("", destinationUrls, info, data, out result);
Edit that got things working:
I got my code working by adding "http://null" to the SourceUrl field. Nat's answer below would probably work for that reason. Here is the line I changed to get it working.
// Change
uint ret = SPCopyNew.CopyIntoItems("http://null", destinationUrls, info, data, out result);
I think the issue may be in trying to set the "Name" property using the web service. I have had some failures doing that.
Given that "Name" is the name of the document, you may have some success with:
string targetDocName = "Test1Name.txt";
string destinationUrl = Uri.EscapeDataString("https://someaddress.com/Reports/Temp/" + targetDocName);
string[] destinationUrls = { destinationUrl };
SPCopyWebService.FieldInformation i1 = new SPCopyWebService.FieldInformation { DisplayName = "Title", InternalName = "Title", Type = SPListTransferSpike1.SPCopyWebService.FieldType.Text, Value = "Test1Title" };
SPCopyWebService.FieldInformation[] info = { i1};
SPCopyWebService.CopyResult[] result;
byte[] data = File.ReadAllBytes("C:\\SomePath\\Test1Data.txt");
uint ret = SPCopyNew.CopyIntoItems(destinationUrl, destinationUrls, info, data, out result);
Note: I have used the "target" as the "source" property. Don't quite know why, but it does the trick.
I didn't understand very well what you're trying to do, but if you're trying to upload a file from a local directory into a SharePoint library, I would suggest you create a WebClient and use UploadData:
Example (VB.NET):
Dim webClient As New WebClient()
webClient.UploadData("http://srvasddress/library/filenameexample.doc", "PUT", fileBytes)
Then you just have to check in the file using the lists web service, something like:
listService.CheckInFile("http://srvasddress/library/filenameexample.doc", "description", "1")
Hope it was of some help.
EDIT: Don't forget to set credentials for the web client, etc.
EDIT 2: Update metadata fields using this:
listService.UpdateListItems("Name of the Library", batchQuery)
You can find info on building batch queries here: link
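For a C# equivalent of the VB.NET snippet above (a sketch; the URL and file path are placeholders, and default network credentials are assumed):
// Upload the file bytes straight into the document library with a PUT
using (var webClient = new System.Net.WebClient())
{
    webClient.Credentials = System.Net.CredentialCache.DefaultNetworkCredentials;
    byte[] fileBytes = System.IO.File.ReadAllBytes(@"C:\SomePath\Test1Data.txt");
    webClient.UploadData("https://someaddress.com/Reports/Temp/Test1Data.txt", "PUT", fileBytes);
}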
The SourceUrl is used in SharePoint as a link back to the "Source Document." When in your document library, hover over the item and a down-pointing triangle appears to the right. Clicking on it brings up a menu. Click on the "View Properties" option. On this page you will see the following: "This item is a copy of http://null ( Go To Source Item | Unlink )".
Because we are using the Copy function, SharePoint keeps track of the "Source Item" as part of the Document Management feature.
