Bit longer question ahead but please bear with me: A CosmosDB PM explains in this feedback thread:
In Cosmos DB, all of the resources e.g. databases, collections/tables/graphs, users, permissions, documents/items/nodes/edges, attachments are all runtime resources. You can CRUD/query these resources using runtime SDKs and REST APIs. [...] All of the “runtime resources” [...] are meant to be used by the developers directly inside their applications.
The only resource which is meant for administrative purposes is the “database account”. This resource is exposed via ARM.
So only the CosmosDB database account resource can be provisioned using ARM (Azure Resource Manager), e.g. in a CI/CD pipeline with Azure DevOps.
So now my question is: What is the proper way to create the Database(s) and Collection(s) inside a CosmosDB account?
Let's say I am using an Azure Function that stores/reads data from a CosmosDB. Using the Function binding I could, for example, use
[DocumentDB("ToDoList", "Migration", ConnectionStringSetting = "CosmosDB", CreateIfNotExists = true)] IAsyncCollector<Document> documentsToStore)
to create db and collection. Instantiating the DocumentClient manually I could use
await client.CreateDatabaseIfNotExistsAsync(database);
But: is this the proper way to do it? Doing it in the Function binding, for instance, would mean that the collection wouldn't be created until the Function is executed for the first time. That just does not feel right.
Or should one instead use, for instance, a PowerShell script in the deployment pipeline to create the database and collection after the ARM template has been deployed? This is of course doable, but if that were the recommended way, one could very much argue that it should be exposed through ARM.
The answer to this question depends entirely on the intent of the function you are trying to build, I feel. If the function assumes it can read/update/delete documents in a collection that might or might not exist, then yes, it is good practice to check for the collection and create it on the fly if it does not exist yet.
If the collection needs to be created because other tools depend on it and it is a predictable collection, you can create it beforehand; but if the collection's name or number cannot be predicted, then you are in a bind.
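If you do go the create-on-the-fly route, a minimal sketch with the v2 DocumentClient could look like the following (cosmosEndpoint, cosmosAuthKey and the throughput value are placeholders; the database/collection names mirror the binding in the question):

// Sketch: ensure database and collection exist before first use
// (Microsoft.Azure.Documents and Microsoft.Azure.Documents.Client namespaces).
var client = new DocumentClient(new Uri(cosmosEndpoint), cosmosAuthKey);

await client.CreateDatabaseIfNotExistsAsync(new Database { Id = "ToDoList" });

await client.CreateDocumentCollectionIfNotExistsAsync(
    UriFactory.CreateDatabaseUri("ToDoList"),
    new DocumentCollection { Id = "Migration" },
    new RequestOptions { OfferThroughput = 400 });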
Related
Wondering if it's possible for an Azure Function to set a value of an app setting.
For example, whether developing locally or running in production, one can read custom settings by binding them to a class:
builder.Services.AddOptions<SomeSettingClass>()
    .Configure<IConfiguration>((settings, configuration) =>
    {
        configuration.GetSection(nameof(SomeSettingClass)).Bind(settings);
    });
and then use the settings in the main Function method. However, is it possible to set a value and persist it for the next run?
There are many solutions to this. Azure has Azure App Configuration: it basically has all the plumbing and provides the appropriate configuration sources and providers to plug into the ConfigurationRoot. This enables you to use the IOptionsMonitor feature to get real-time updates of configuration changes. There should also be a way to update it programmatically, though I have never tried it.
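For reference, wiring it into the configuration builder typically looks roughly like this (a sketch using the Microsoft.Extensions.Configuration.AzureAppConfiguration package; config is assumed to be your IConfigurationBuilder, and the connection-string variable and sentinel key are placeholders):

// Sketch: plug Azure App Configuration into the configuration root and enable refresh.
config.AddAzureAppConfiguration(options =>
{
    options.Connect(Environment.GetEnvironmentVariable("AppConfigConnectionString"))
           // Reload all values whenever the (made-up) sentinel key changes:
           .ConfigureRefresh(refresh => refresh.Register("SomeSettingClass:Sentinel", refreshAll: true));
});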
However, I find that service completely overpriced and clunky for what it is.
On that note, if you are handy with a bit of code, you could create your own ConfigurationSource and ConfigurationProvider pointing to anywhere you like, for instance a CosmosDb container, and do exactly the same thing. You could even use a trigger and SignalR to publish those changes to any listening ConfigurationSource.
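A bare-bones source/provider pair is not much code. Roughly (class names are made up and the actual CosmosDb query is left out):

using System.Collections.Generic;
using Microsoft.Extensions.Configuration;

// Sketch of a custom configuration source/provider; where the values come from
// (a CosmosDb container, a file, an API) is entirely up to Load().
public class CosmosDbConfigurationSource : IConfigurationSource
{
    public IConfigurationProvider Build(IConfigurationBuilder builder) => new CosmosDbConfigurationProvider();
}

public class CosmosDbConfigurationProvider : ConfigurationProvider
{
    public override void Load()
    {
        // Query your store here and map the results into the Data dictionary.
        Data = new Dictionary<string, string>
        {
            ["SomeSettingClass:SomeValue"] = "loaded-from-cosmos"
        };
    }
}

// Registration: config.Add(new CosmosDbConfigurationSource());
// Call OnReload() from the provider whenever you detect a change (e.g. via SignalR) to notify IOptionsMonitor.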
There are also various ways you can update a function's or app's configuration at runtime through the CLI etc. (I am not sure if there are C# management libraries for this, though I guess there are somewhere). However, they will not be real-time and have no built-in way of notifying, as far as I know.
Which leaves you with the option of just doing a real-time lookup on a datastore ¯\_(ツ)_/¯. This way you can update and query at your own will, and it will persist naturally.
I am considering using Elsa workflows for a project, but I couldn't find any examples or documentation on how to use it in client applications (Xamarin.Forms / Blazor WASM). My idea is basically to define workflows that also include screen transitions in the client apps. Is this a relevant scenario for Elsa, or am I not getting it? I understand that there is some REST API available, but I have no idea how to use it.
This great article explains how to use it in ASP.NET/backend scenarios: https://sipkeschoorstra.medium.com/building-workflow-driven-net-core-applications-with-elsa-139523aa4c50
That's a great use case for Elsa and something I am planning to create a sample application + guide for. So far there are guides and samples about executing long-running "back-end" processes using Elsa, but there is no reason one couldn't also use it to implement application navigation logic, such as wizards consisting of steps implemented as individual screens, for example.
So that's your answer: yes, it is a relevant scenario. But it is unfortunate that there are no concrete samples to point you to at the moment.
Barring any samples, here's how it might work in a client application:
The client application has Elsa services configured.
Whether you decide to store workflows within the app (as code or JSON) or on a remote Elsa Server instance doesn't matter - once you have a workflow in memory, you can execute it.
Since your workflows will be driving UI, you have to think about how tightly coupled the workflow will be to that UI. For example, a tightly coupled workflow might include activities that represent the views (by name) to present, including transition configuration if that is something to be configured, and outcomes based on which buttons were clicked. A loosely coupled workflow, on the other hand, might act more as a "conductor" or orchestrator of actions and events, where the workflow consists of nothing more than a handful of primitives such as "SendCommand" and "Event Received": "SendCommand" simply raises some application event with a task name that your application then handles, while "Event Received" works the other way around: your application sends instructions to Elsa, and Elsa drives the workflow. A task might be a "Navigate" instruction with the next view name provided as a parameter.
The "SendCommand" and "EventReceived" activities are very new and part of Elsa 2.1 preview packages. Right now they are directly coupled to webhook scenarios (where the commands are sent in the form of HTTP requests to an external application), but the goal is to have various strategies in place (HTTP out requests would just be one of them, another one might be a simple mediator pattern for in-process scenarios such as your client application one).
UPDATE
To retrieve workflows designed in the designer into your client app, you need to get the workflow definition via the following API endpoint:
http(s)://your-elsa-server/v1/workflow-definitions/{workflow-definition-id}/Published
What you'll get back is JSON representing the workflow definition, which you can deserialize using IContentSerializer.Deserialize<WorkflowDefinition>, giving you a WorkflowDefinition. But to actually run a workflow, you need a workflow blueprint. To turn the workflow definition into a blueprint, use IWorkflowBlueprintMaterializer.CreateWorkflowBlueprintAsync(WorkflowDefinition), which will give you a blueprint that can then be executed using e.g. IStartsWorkflow.StartWorkflowAsync(IWorkflowBlueprint).
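Put together, the whole flow might look something like this sketch (assuming httpClient and the Elsa services mentioned above (contentSerializer, blueprintMaterializer, startsWorkflow) are resolved from DI; exact signatures may vary per Elsa version):

// Fetch the published workflow definition from the Elsa server:
var json = await httpClient.GetStringAsync(
    $"https://your-elsa-server/v1/workflow-definitions/{workflowDefinitionId}/Published");

// Deserialize it using Elsa's content serializer:
var definition = contentSerializer.Deserialize<WorkflowDefinition>(json);

// Materialize the definition into an executable blueprint:
var blueprint = await blueprintMaterializer.CreateWorkflowBlueprintAsync(definition);

// And run it:
var result = await startsWorkflow.StartWorkflowAsync(blueprint);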
There are various other services that make it more convenient to construct and run workflows.
To make this as frictionless as possible for your client app, you could consider simply implementing IWorkflowProvider, of which we currently have 3 out of the box:
ProgrammaticWorkflowProvider: provides workflow blueprints based on the workflows coded with the fluent Workflow Builder API.
DatabaseWorkflowProvider: provides blueprints based on those stored in the database (JSON models stored by the designer).
StorageWorkflowProvider: provides blueprints based on JSON files stored on some hard drive or blob storage such as Azure Blob Storage.
What you might do, and in fact what I think we should provide out of the box now that you made me think of it, is create a fourth provider that uses those API endpoints to fetch workflows.
Then your client app would not have to be bothered with invoking the Elsa API - the provider does it for you.
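As a very rough, hypothetical sketch of such a provider (the IWorkflowProvider shape and the list endpoint used here are assumptions - check the actual interface and API in the Elsa source before copying this):

// Hypothetical HTTP-backed workflow provider; interface shape and endpoint are assumed.
public class ElsaServerWorkflowProvider : IWorkflowProvider
{
    private readonly HttpClient _httpClient;
    private readonly IContentSerializer _serializer;
    private readonly IWorkflowBlueprintMaterializer _materializer;

    public ElsaServerWorkflowProvider(HttpClient httpClient, IContentSerializer serializer, IWorkflowBlueprintMaterializer materializer)
    {
        _httpClient = httpClient;
        _serializer = serializer;
        _materializer = materializer;
    }

    public async IAsyncEnumerable<IWorkflowBlueprint> GetWorkflowsAsync(CancellationToken cancellationToken)
    {
        // Assumed endpoint returning a list of workflow definitions:
        var json = await _httpClient.GetStringAsync("https://your-elsa-server/v1/workflow-definitions");
        var definitions = _serializer.Deserialize<ICollection<WorkflowDefinition>>(json);

        foreach (var definition in definitions)
            yield return await _materializer.CreateWorkflowBlueprintAsync(definition);
    }
}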
So we built this library/framework thing full of code related to business processes and common elements that are shared across multiple applications (C#, .NET 4.7.1, WPF, MVVM). Our logging stuff is all set up through this framework, so naturally it felt like the best place for Sentry. All the references in our individual applications are manually pointed to the DLLs in the folder where our shared library installs itself. So far so good.
When we set up Sentry initially, everything seemed to work great. We did some updates and errors seemed to be going way down. That's because we are awesome and Sentry helped us be more awesome, right? Nope! Well, I mean, kind of.
The scope is being disposed of, so we are no longer getting unhandled exceptions. We didn't notice at first because we were still getting Sentry logs when we handled errors through our Logging.Log() method. This logging method calls SentrySdk.Init(), which I suspect is disposing the client in the executing assembly.
We also started using Sentry for some simple usage tracking by spinning up a separate project in Sentry called Usage-Tracker and passing a simple "DoThingApplication has been launched" message with an ApplicationName.UsageTracker enum as a parameter to our Logging method.
Question: What is a good way to handle this, where my setup can have a Sentry instance that wraps my using(sentryClientStuff){ ComposeObjects(); } while my logging method still looks for the existing client and uses it if it exists?
Caveats:
I believe that before any of this happens, we still need to make a call to send a Sentry log to our UsageTracker.
I would like to pass in as few options as possible if I'm setting up the Sentry Client/Scope in our shared library. Maybe Release and Environment. Maybe check tags for Fingerprint and set it in the Log method.
I'm open to new approaches to any of this.
Some related thoughts
Maybe there is a better way to handle references that could solve both this and some other pains that arise when references become mismatched between the client and the shared framework/library.
Maybe the answer can be found through adding some unit tests, but I could use a Sentry-specific example or a nudge there because I don't know much about that.
Maybe there is a way to use my shared library to return a Sentry Client or Scope that I could use in my client assembly that would not be so fragile and the library could somehow also use it.
Maybe there is a better solution I can't conceive because I'm just kind of an OK programmer and it escapes me. I'm open to any advice/correction/ridicule.
Maybe there is a smarter way to handle "Usage-Tracker" type signals in Sentry
Really I want a cross-assembly singleton kind of thing in practice.
There are really many things going on here. Also, without looking at any code it's hard to picture how things are laid out. There's a better chance you can get the answer you are looking for if you share some (even dummy) example of the structure of your project.
I'll try to break it down and address what I can anyway:
With regards to:
Usage-Tracker:
You can create a new client and bind it to a scope. That way, any use of the SentrySdk static class (which I assume your Logger.Log routes to) will pick it up.
In other words, call SentrySdk.Init as you currently do, with the options that are shared across any application using your shared library, and after that create a client using the DSN of your Usage-Tracker project in Sentry. Push a scope, bind the client, and you can use SentrySdk with it.
There's an example in the GitHub repo of the SDK:
using (SentrySdk.PushScope())
{
    SentrySdk.AddBreadcrumb(request.Path, "request-path");

    // Change the SentryClient in case the request is to the admin part:
    if (request.Path.StartsWith("/admin"))
    {
        // Within this scope, the _adminClient will be used instead of whatever
        // client was defined before this point:
        SentrySdk.BindClient(_adminClient);
    }

    SentrySdk.CaptureException(new Exception("Error at the admin section"));
    // Else it uses the default client
    _middleware?.Invoke(request);
} // Scope is disposed.
The SDK only has to be initialized once, but you can always create a new client with new SentryClient, push a new scope (SentrySdk.PushScope()) and bind the client to that new scope (SentrySdk.BindClient). Once you pop the scope, the client is no longer accessible via SentrySdk.CaptureException or any other method on the static class SentrySdk.
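Applied to your Usage-Tracker scenario, that could look roughly like this (the DSN and the method name are made up):

// Sketch: route a usage event to the Usage-Tracker project without touching the main SDK init.
private static readonly ISentryClient UsageTrackerClient =
    new SentryClient(new SentryOptions { Dsn = new Dsn("https://key@sentry.io/usage-tracker-project-id") });

public static void TrackUsage(string message)
{
    using (SentrySdk.PushScope())
    {
        // Within this scope, SentrySdk uses the Usage-Tracker client:
        SentrySdk.BindClient(UsageTrackerClient);
        SentrySdk.CaptureMessage(message);
    } // Scope popped; the default client is in effect again.
}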
You can also use the client directly, without binding it to the scope at all.
using (var c = new SentryClient(new SentryOptions { Dsn = new Dsn("...") }))
{
    c.CaptureMessage("hello world!");
}
The using block is there to make sure the background thread flushes the event.
Central place to initialize the SDK:
There will be configuration which you want to have fixed in your shared framework/library, but surely each application (composition root) will have its own settings. Release is auto-discovered.
From docs.sentry.io:
The SDK will firstly look at the entry assembly’s AssemblyInformationalVersionAttribute, which accepts a string as value and is often used to set a GIT commit hash.
If that returns null, it’ll look at the default AssemblyVersionAttribute which accepts the numeric version number.
If you patch your assemblies on your build server, the correct Release should be reported automatically. If not, you could define it per application by taking a delegate that receives the SentryOptions as an argument.
Something like:
Framework code:
public static class MyLogging
{
    public static void Init(Action<SentryOptions> configuration)
    {
        var o = new SentryOptions();

        // Add things that should run for all users of this library:
        o.AddInAppExclude("SomePrefixTrueForAllApplications");
        o.AddEventProcessor(new GeneralEventProcessor());

        // Give the application a chance to reconfigure anything it needs:
        configuration?.Invoke(o);

        // Initialize the SDK with the combined options:
        SentrySdk.Init(o);
    }
}
App code:
void Main()
{
    MyLogging.Init(o => o.Environment = "my env");
}
The scope is being disposed of so we are no longer getting Unhandled exceptions."
Not sure I understand what's going on here. Pushing and popping (disposing) scopes don't affect the ability of the SDK to capture unhandled exceptions. Could you please share a repro?
This logging method calls SentrySdk.Init() which I suspect is disposing the client in the executing assembly.:
Unless you create a client "by hand" with new SentryClient, there's only 1 client in the running process. Please note I said running process and not assembly. Instances are not held within an assembly. The assembly only contains the code that can be executed. If you call SentrySdk.CaptureException it will dispatch the call to the SentryClient bound to the current scope. If you didn't PushScope, there's always an implicit scope, the root scope. In this case it's all transparent enough you shouldn't care there's a scope in there. You also can't dispose of that scope since you never got a handle to do so (you didn't call PushScope so you didn't get what it returns to call Dispose on).
All the references in our individual applications are manually pointed to the dlls the folder where our shared library thingy installs itself.:
One thing to consider, depending on your environment is to distribute packages via NuGet. I'm unsure whether you expect to use these libraries in non .NET Framework applications (like .NET Core). But considering .NET Core 3.0 is bringing Windows Desktop framework support like WPF and WinForm, it's possible that eventually you will. If that's the case, consider targeting .NET Standard instead of .NET Framework for your code libraries.
Our applications use a lot of shared components. Some of them have no need for caching; for example, Windows services which process unmailed emails. You'd never cache that result set...
Problem is, since our shared data layer has been modified to use SqlCacheDependency, our services which don't start SqlDependency fail on database calls where the data layer requests a SqlCacheDependency object.
Which leads to the question - is there a way for our data classes to test to see if the broker service is listening (ie: has SqlDependency.Start(connectionString) been called)?
The SqlDependency object itself has no Enabled or similar property. Is there any way short of forcing the calling app to tell the data layer that SqlCaching is in use for the data layer to determine the state?
Pretty much, the answer is no. We ended up adding a config variable that, if false or not present, causes the request for a SqlCacheDependency to be skipped.
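In practice that can be as simple as an appSettings flag the data layer checks before creating the dependency (the key name and helper method below are made up for illustration):

// Sketch: skip SqlCacheDependency entirely when the hosting app never calls SqlDependency.Start().
public static SqlDataReader ExecuteWithOptionalDependency(SqlCommand command, out SqlCacheDependency dependency)
{
    // "EnableSqlCaching" is a made-up appSettings key; default to off when absent.
    bool cachingEnabled = string.Equals(
        ConfigurationManager.AppSettings["EnableSqlCaching"], "true", StringComparison.OrdinalIgnoreCase);

    // The dependency must be created before the command executes; only do so when enabled.
    dependency = cachingEnabled ? new SqlCacheDependency(command) : null;

    return command.ExecuteReader();
}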
SELECT * FROM sys.service_queues WHERE name LIKE 'SqlQueryNotificationService-%'
returns a 'SqlQueryNotificationService-[some guid]'
And if you look deep into the non-public members of SqlDependency, the _serverUserHash field, while debugging in the IDE, you'll find a collection that contains that same entry. If Microsoft would be so kind as to expose that, then yes.
In my case I have a class library which is used by some web applications, so I have no App.config. I also use the SqlCacheDependency in a static event, so I'm using a static boolean like:
if (!isCachingEnabled)
    isCachingEnabled = SqlDependency.Start(builder.ProviderConnectionString);
So far it is working, but I'm open to suggestions when using class libraries.
I want to monitor the key names and values that are being stored by my application in the Enterprise Library caching mechanism.
We're using the in memory settings. Basically, I just need to figure out how to dump the keys that are currently stored.
I see that the ICacheManager returns an object that has a counter, but there doesn't appear to be a way to access the cached items unless you already know the key.
Ideas?
You are correct - Enterprise Lib does not expose the in-memory cache of the CacheManager. But... there is always a workaround. You can reference the downloaded source as a project and modify the original CacheManager to expose the instance of the cache, which has a property called CurrentCacheState and is a mere Hashtable.
Then you would do the usual foreach:
foreach (DictionaryEntry d in myExposedCacheManager.RealCache.CurrentCacheState)
{
    Console.WriteLine("{0}: {1}", d.Key, d.Value);
}