I'm trying to make a command-line utility for initializing an InfluxDB database, but I'm pretty new to Influx and to C# in general.
Given the following response from the InfluxDB server, I'm trying to pretty-print it in the console window.
Ideally, errors would show up on the standard error stream, and warnings or infos on standard output.
However, when running the code below in a debug environment, the messages appear to be in an incorrect format according to several JSONPath checkers that I have used.
JSON input as result.Body
{
"results": [
{
"statement_id": 0,
"messages": [
{
"level": "warning",
"text": "deprecated use of 'CREATE RETENTION POLICY Primary ON SensorData DURATION 30d REPLICATION 1' in a read only context, please use a POST request instead"
}
]
}
]
}
JSON output as messages prior to transformation:
messages {{
"level": "warning",
"text": "deprecated use of 'CREATE RETENTION POLICY Primary ON SensorData DURATION 30d REPLICATION 1' in a read only context, please use a POST request instead"
}} Newtonsoft.Json.Linq.JToken {Newtonsoft.Json.Linq.JObject}
As you can see, the messages output is a nested object {{}} rather than an array as expected...
According to https://jsonpath.curiousconcept.com/ and several other JSONPath checkers, I was expecting something similar to:
[
{
"level":"warning",
"text":"deprecated use of 'CREATE RETENTION POLICY Primary ON SensorData DURATION 30d REPLICATION 1' in a read only context, please use a POST request instead"
}
]
C#
private static void PrintResult(IInfluxDataApiResponse result)
{
var output = result.Success ? System.Console.Out : System.Console.Error;
output.WriteLine("["+result.StatusCode + "] : "+result.Body);
var json = JObject.Parse(result.Body);
var messages = json.SelectToken("$.results[*].messages[*]"); //outputs an array of messages if exists. e.g. [{level:warning,text:test}]
if (messages != null)
{
var transformed = messages.Select(m => new { level = (string)m["level"], text = (string)m["text"] }).ToList();
foreach (var msg in transformed)
{
output.WriteLine($"[{result.StatusCode}] : {msg.level} - {msg.text}");
}
}
}
For my uses at least, using
var messages = json.SelectTokens("$.results[*].messages[*]");
rather than
var messages = json.SelectToken("$.results[*].messages[*]");
allowed me to work around the issue, as I could then treat the result as a C# enumerable instead of special-casing one result vs. many results. SelectToken seems to flatten a single match into an object, whereas other implementations would have it be an array.
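For illustration, here is a minimal sketch of how the SelectTokens variant slots into the PrintResult method above (same Newtonsoft.Json.Linq types as the original):
// using System; using System.Linq; using Newtonsoft.Json.Linq;
private static void PrintResult(IInfluxDataApiResponse result)
{
    var output = result.Success ? Console.Out : Console.Error;
    output.WriteLine($"[{result.StatusCode}] : {result.Body}");

    // SelectTokens always yields an IEnumerable<JToken> -- zero, one, or many
    // matches -- so a single message needs no special casing.
    var messages = JObject.Parse(result.Body)
        .SelectTokens("$.results[*].messages[*]")
        .Select(m => new { Level = (string)m["level"], Text = (string)m["text"] });

    foreach (var msg in messages)
    {
        output.WriteLine($"[{result.StatusCode}] : {msg.Level} - {msg.Text}");
    }
}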
I am trying to utilize Azure Cognitive Services to perform basic document extraction.
My intent is to feed PDFs and DOCX files (and possibly some other formats) into the cognitive engine for parsing, but unfortunately the implementation is not as simple as it seems.
According to the documentation (https://learn.microsoft.com/en-us/azure/search/cognitive-search-skill-document-extraction#sample-definition), I must define the skill, and then I should be able to input files, but there are no examples of how this should be done.
So far I have been able to define the skill, but I am still not sure where I should be dropping the files.
Please see my code below; it seeks to replicate the data structure shown in the example definition (albeit using the C# library):
public static DocumentExtractionSkill CreateDocumentExtractionSkill()
{
List<InputFieldMappingEntry> inputMappings = new List<InputFieldMappingEntry>
{
new("file_data") {Source = "/document/file_data"}
};
List<OutputFieldMappingEntry> outputMappings = new List<OutputFieldMappingEntry>
{
new("content") {TargetName = "extracted_content"}
};
DocumentExtractionSkill des = new DocumentExtractionSkill(inputMappings, outputMappings)
{
Description = "Extract text (plain and structured) from image",
ParsingMode = BlobIndexerParsingMode.Text,
DataToExtract = BlobIndexerDataToExtract.ContentAndMetadata,
Context = "/document",
};
return des;
}
And then I build on this skill like so:
_indexerClient = new SearchIndexerClient(new Uri(Environment.GetEnvironmentVariable("SearchEndpoint")), new AzureKeyCredential(Environment.GetEnvironmentVariable("SearchKey")));
List<SearchIndexerSkill> skills = new List<SearchIndexerSkill> { Skills.DocExtractionSkill.CreateDocumentExtractionSkill() };
SearchIndexerSkillset skillset = new SearchIndexerSkillset("DocumentSkillset", skills)
{
Description = "Document Cracker Skillset",
CognitiveServicesAccount = new CognitiveServicesAccountKey(Environment.GetEnvironmentVariable("CognitiveServicesKey"))
};
await _indexerClient.CreateOrUpdateSkillsetAsync(skillset);
And... then what?
There is no clear method that would fit what I believe is the next stage: actually parsing documents.
What is the next step from here to begin feeding files into the _indexerClient (of type SearchIndexerClient)?
The next stage shown in the documentation is:
{
"values": [
{
"recordId": "1",
"data":
{
"file_data": {
"$type": "file",
"data": "aGVsbG8="
}
}
}
]
}
It is not clear where I would be doing this.
According to the document you mentioned, they are actually getting the output through Postman: they use a GET method to receive the extracted document content by sending a JSON request to the mentioned URL (i.e. the cognitive skill URL), and the files/documents need to be uploaded to your storage account in order to be extracted.
You can follow this tutorial to get more insights.
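To make the flow concrete, here is a minimal sketch of the remaining wiring with the Azure.Search.Documents SDK. The data source, container, index, and indexer names are hypothetical, and the target index is assumed to already exist; the files themselves are uploaded to the blob container (e.g. with Azure.Storage.Blobs), not pushed to the SearchIndexerClient:
// using Azure.Search.Documents.Indexes; using Azure.Search.Documents.Indexes.Models;

// 1. Point a data source at the blob container holding the PDFs/DOCX files.
var dataSource = new SearchIndexerDataSourceConnection(
    "documents-datasource",
    SearchIndexerDataSourceType.AzureBlob,
    Environment.GetEnvironmentVariable("BlobConnectionString"),
    new SearchIndexerDataContainer("my-container"));
await _indexerClient.CreateOrUpdateDataSourceConnectionAsync(dataSource);

// 2. Create an indexer tying the data source, the skillset, and a target index together.
//    AllowSkillsetToReadFileData is what makes /document/file_data available to the skill.
var indexer = new SearchIndexer("documents-indexer", dataSource.Name, "my-index")
{
    SkillsetName = "DocumentSkillset",
    Parameters = new IndexingParameters
    {
        IndexingParametersConfiguration = new IndexingParametersConfiguration
        {
            AllowSkillsetToReadFileData = true
        }
    }
};
await _indexerClient.CreateOrUpdateIndexerAsync(indexer);

// 3. The indexer pulls blobs on its schedule; RunIndexerAsync triggers a pass on demand.
await _indexerClient.RunIndexerAsync(indexer.Name);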
I'm working with Azure Functions for the first time. I'm trying to write a simple function which responds to documents changed or added to a CosmosDb collection. The function I've written looks like this:
[FunctionName("ChangeLog")]
public static void Run([CosmosDBTrigger(
databaseName: "Recaptcha",
collectionName: "Rules",
ConnectionStringSetting = "CosmosDBConnection",
LeaseCollectionName = null)]IReadOnlyList<RuleConfigCollection> documents)
{
if (documents != null && documents.Count > 0)
{
ApplicationEventLogger.Write(
Diagnostics.ApplicationEvents.RecaptchaRulesChanged,
new Dictionary<string, object>()
{
{ "SomeEnrichment", documents[0].Rules.ToList().Count.ToString() }
});
}
}
By my understanding a lease collection is necessary when multiple functions are pointed at the same CosmosDb, but in my case this isn't relevant. That's why I've set the lease collection to null.
I've published this to Azure from Visual Studio and can see the function is created with the following function.json:
{
"generatedBy": "Microsoft.NET.Sdk.Functions-1.0.12",
"configurationSource": "attributes",
"bindings": [
{
"type": "cosmosDBTrigger",
"connectionStringSetting": "CosmosDBConnection",
"collectionName": "Rules",
"databaseName": "Recaptcha",
"leaseDatabaseName": "Recaptcha",
"createLeaseCollectionIfNotExists": false,
"name": "documents"
}
],
"disabled": false,
"scriptFile": "../bin/My.Namespace.Functions.App.dll",
"entryPoint": "My.Namespace.Functions.App.ChangeLogFunction.Run"
}
I've also added an application setting named CosmosDBConnection with the value AccountEndpoint=https://my-cosmosdb.documents.azure.com:443;AccountKey=myAccountKey;.
I run the function then add a document to the collection, but the logs just keep saying No new trace in the past n min(s) and the application events I expect to see are not being written.
Have I missed something in this setup?
I'm not sure that's the root cause of your issue, but your understanding of the lease collection is wrong.
The lease collection is used to coordinate multiple instances (workers) of your Function App and to distribute change feed partitions between workers.
It is required even for a single Function listening to the Cosmos DB change feed.
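For example, here is a minimal sketch of the same trigger with a lease collection configured ("leases" is just the conventional name; CreateLeaseCollectionIfNotExists saves creating it by hand):
[FunctionName("ChangeLog")]
public static void Run([CosmosDBTrigger(
    databaseName: "Recaptcha",
    collectionName: "Rules",
    ConnectionStringSetting = "CosmosDBConnection",
    LeaseCollectionName = "leases",
    CreateLeaseCollectionIfNotExists = true)]IReadOnlyList<RuleConfigCollection> documents)
{
    // The trigger only fires once the Function has acquired a lease on the
    // change feed, so without a lease collection no documents ever arrive.
    if (documents != null && documents.Count > 0)
    {
        // ... write application events as in the original function ...
    }
}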
When running the new MongoDB server, version 3.6, and trying to add a change stream watch to a collection to get notifications of new inserts and updates of documents, I only receive notifications for updates, not for inserts.
This is the default way I have tried to add the watch:
IMongoDatabase mongoDatabase = mongoClient.GetDatabase("Sandbox");
IMongoCollection<BsonDocument> collection = mongoDatabase.GetCollection<BsonDocument>("TestCollection");
var changeStream = collection.Watch().ToEnumerable().GetEnumerator();
changeStream.MoveNext();
var next = changeStream.Current;
Then I downloaded the C# source code from MongoDB to see how they did this. Looking at their test code for change stream watches, they create a new document (insert) and then change that document right away (update), and THEN set up the change stream watch, so it only receives an 'update' notification.
No example is given of how to watch for 'insert' notifications.
I have looked at the Java and NodeJS examples, both on the MongoDB website and on SO, which seem straightforward and define a way to see both inserts and updates:
var changeStream = collection.watch({ '$match': { $or: [ { 'operationType': 'insert' }, { 'operationType': 'update' } ] } });
The API for the C# driver is vastly different; I would have assumed they would have kept the same API for C# as for Java and NodeJS. I have found few or no C# examples that do the same thing.
The closest I have come is with the following attempt, but it still fails, and the documentation for the C# driver is very limited (or I have not found the right location). The setup is as follows:
String json = "{ '$match': { 'operationType': { '$in': ['insert', 'update'] } } }";
var options = new ChangeStreamOptions { FullDocument = ChangeStreamFullDocumentOption.UpdateLookup };
PipelineDefinition<ChangeStreamDocument<BsonDocument>, ChangeStreamDocument<BsonDocument>> pipeline = new EmptyPipelineDefinition<ChangeStreamDocument<BsonDocument>>().Match(Builders<ChangeStreamDocument<BsonDocument>>.Filter.Text(json,"json"));
Then running the statement below throws an Exception:
{"Command aggregate failed: $match with $text is only allowed as the
first pipeline stage."}
No other Filter options have worked either, and I have not found a way to just enter the JSON as a string to set the 'operationType'.
var changeStream = collection.Watch(pipeline, options).ToEnumerable().GetEnumerator();
changeStream.MoveNext();
var next = changeStream.Current;
My only goal here is to be able to set the 'operationType' using the C# driver. Does anyone know what I am doing wrong, or has anyone tried this using the C# driver and had success?
After reading through a large number of webpages, with very little info on the C# version of the MongoDB driver, I am very stuck!
Any help would be much appreciated.
Here is a sample of code I've used to update the collection watch to retrieve "events" other than just document updates.
IMongoDatabase sandboxDB = mongoClient.GetDatabase("Sandbox");
IMongoCollection<BsonDocument> collection = sandboxDB.GetCollection<BsonDocument>("TestCollection");
//Get the whole document instead of just the changed portion
ChangeStreamOptions options = new ChangeStreamOptions() { FullDocument = ChangeStreamFullDocumentOption.UpdateLookup };
//The operationType can be one of the following: insert, update, replace, delete, invalidate
var pipeline = new EmptyPipelineDefinition<ChangeStreamDocument<BsonDocument>>().Match("{ operationType: { $in: [ 'replace', 'insert', 'update' ] } }");
var changeStream = collection.Watch(pipeline, options).ToEnumerable().GetEnumerator();
changeStream.MoveNext(); //Blocks until a document is replaced, inserted or updated in the TestCollection
ChangeStreamDocument<BsonDocument> next = changeStream.Current;
changeStream.Dispose();
The EmptyPipelineDefinition...Match() argument could also be:
"{ $or: [ { operationType: 'replace' }, { operationType: 'insert' }, { operationType: 'update' } ] }"
if you wanted to use the $or operator, or
"{ operationType: /^[^d]/ }"
to throw a little regex in there. This last one says: give me all operationTypes unless they start with the letter 'd'.
I am currently creating a small text-based game. There are obviously multiple rooms in it, and I wish to load those rooms from a JSON file. I am currently doing that as such:
dynamic jRooms = Json.Decode(file);
for (int i = 0; i < Regex.Matches( file, "Room" ).Count; i++){
name[i] = jRooms.Game.Room[i];
description[i] = jRooms.Game.Room.Attributes.Description[i];
exits[i] = jRooms.Game.Room.Attributes.Exits[i];
_count++;
}
That loads information from the following JSON file:
{
'Game': [{
'Room': 'Vault 111 Freeze Chamber',
'Attributes': {
'Description': 'The freeze chamber of the vault you entered after the nuclear fallout.',
'Exits': 'North.Vault 111: Main Hallway'
},
'Room': 'Vault 111 Main Hallway',
'Attributes': {
'Description': 'The main hallway of the vault.',
'Exits': 'South.Vault 111: Freeze Chamber'
}
}]}
This unfortunately throws an error at run time that I can't seem to work out:
Microsoft.CSharp.RuntimeBinder.RuntimeBinderException: Cannot perform runtime binding on a null reference
at CallSite.Target(Closure , CallSite , Object , Int32 )
at System.Dynamic.UpdateDelegates.UpdateAndExecute2[T0,T1,TRet](CallSite site, T0 arg0, T1 arg1)
at TBA.Loader.Rooms()
at TBA.Program.Main(String[] args)
Any help would be greatly appreciated, because I am completely stumped as to what is wrong. If you need any more of my code, just ask.
Thanks.
The problem is with your JSON: the JSON specification does not allow single quotes around names or string values. Source: W3Schools.
Use a service like JSONLint to validate JSON and check for errors. JSONLint also declares your JSON invalid; with double quotes, however, it is valid. You should use double quotes like this:
{
"Game": [
{
"Room": "Vault111FreezeChamber",
"Attributes": {
"Description": "Thefreezechamberofthevaultyouenteredafterthenuclearfallout.",
"Exits": "North.Vault111: MainHallway"
},
"Room": "Vault111MainHallway",
"Attributes": {
"Description": "Themainhallwayofthevault.",
"Exits": "South.Vault111: FreezeChamber"
}
}
]
}
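Note that even the double-quoted version repeats the "Room" key inside a single object, so most parsers will only surface the last one. A minimal sketch of reading the rooms, assuming Newtonsoft.Json and a restructured "Game" array with one object per room (both are assumptions, not your original layout):
// using System; using Newtonsoft.Json.Linq;
const string file = @"{
  ""Game"": [
    { ""Room"": ""Vault 111 Freeze Chamber"",
      ""Attributes"": { ""Description"": ""The freeze chamber of the vault."",
                        ""Exits"": ""North.Vault 111: Main Hallway"" } },
    { ""Room"": ""Vault 111 Main Hallway"",
      ""Attributes"": { ""Description"": ""The main hallway of the vault."",
                        ""Exits"": ""South.Vault 111: Freeze Chamber"" } }
  ]
}";

// Each element of the array is one room, so indexing lines up naturally.
foreach (var room in JObject.Parse(file)["Game"])
{
    Console.WriteLine($"{room["Room"]}: {room["Attributes"]["Description"]}");
}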
I have the below JSON (snipped for space). As you can see, in "test" and "tooltip" I have a property, "formatter", that needs to contain a function (note this JSON is read in from an XML file and converted to JSON in .NET):
{
"test": {
"formatter": function(){return '<b>'+ this.point.name +'<\/b>: '+ this.y +' %';}
},
"title": {
"align": "center",
"text": "Your chart title here"
},
"tooltip": {
"formatter": function(){return '<b>'+ this.point.name +'<\/b>: '+ this.y +' %';}
}
}
Unfortunately I'm getting an error on the ASPX page that produces the JSON file:
There was an error parsing the JSON document. The document may not be well-formed.
This error occurs because the bit after "formatter" is not in quotation marks, so the parser doesn't treat it as a string. But if I put quotes around it, the front-end HTML page that uses the JSON doesn't see it as a function.
Is it possible to pass this as a function and not a string?
Many thanks.
Edit:
Thanks for the quick replies. As I said, I know the above isn't correct JSON, because the "function(){...}" part isn't in quote marks. The front end that reads the JSON file is third-party, so I was wondering how I could pass the function through. I understand the problems of injection (from a SQL point of view) and why it's not possible in JSON (I haven't worked with JSON before).
If you passed it as a string you could use JavaScript's eval function, but eval is evil.
What about meeting it halfway and using object notation format?
This is a template jQuery plugin that I use at work; the $.fn.extend shows this notation format.
/*jslint browser: true */
/*global window: true, jQuery: true, $: true */
(function($) {
var MyPlugin = function(elem, options) {
// This lets us pass multiple optional parameters to your plugin
var defaults = {
'text' : '<b>Hello, World!</b>',
'anotherOption' : 'Test Plugin'
};
// This merges the passed options with the defaults
// so we always have a value
this.options = $.extend(defaults, options);
this.element = elem;
};
// Use function prototypes, it's a lot faster.
// Loads of sources as to why on the 'tinternet
MyPlugin.prototype.Setup = function()
{
// run Init code
$(this.element).html(this.options.text);
};
// This actually registers a plugin into jQuery
$.fn.extend({
// by adding a jquery.testPlugin function that takes a
// variable list of options
testPlugin: function(options) {
// and this handles that you may be running
// this over multiple elements
return this.each(function() {
var o = options;
// You can use element.Data to cache
// your plugin activation stopping
// running it again;
// this is probably the easiest way to
// check that your calls won't walk all
// over the dom.
var element = $(this);
if (element.data('someIdentifier'))
{
return;
}
// Initialise our plugin
var obj = new MyPlugin(element, o);
// Cache it to the DOM object
element.data('someIdentifier', obj);
// Call our Setup function as mentioned above.
obj.Setup();
});
}
});
})(jQuery);