.NET MAUI: how to make use of IOptionsSnapshot - C#

Basically I'm trying to work around the fact that you can't really use IOptionsSnapshot in MAUI, since the appsettings.json file is set in stone once it's bundled with the app.
Manually updating the IConfiguration with Configuration["key"] = myValue
then requires notifying all scoped services or singletons so they retrieve new instances of their IOptionsSnapshot properties.
Yes, I need to update those options at runtime. (Even Autofac moved away from this.)
So I either use ApiControllers, which are transient and local to the app (I don't know if MAUI supports them), so that requests always see the updated options.
Or I make use of transient services and resolve them manually every time I need them with
using var scope = scopeFactory.CreateScope();
var service = scope.ServiceProvider.GetRequiredService<MyTransientService>();

Ok, you need to do a few things.
First, make a settings service that stores and reads small key-value pairs:
https://stackoverflow.com/a/74402836/6643940
Now you have to make sure that everyone is notified about changes.
In my case it is easy:
Using CommunityToolkit.Mvvm, I implement Messaging.
Setting a property sends a message for whoever cares about those changes. If something is running and has subscribed to that message, it will receive it.
Otherwise I fire something that no one listens to (and that is not a bad thing).
The good thing for me is that I don't even have this service in the places where I want to detect a change. Everything is decoupled.
The stuff that DOES use this service gets the new values anyway, and since it is a singleton, you can add other properties that will be updated for everyone.
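If it helps, here is a minimal sketch of that pattern, assuming CommunityToolkit.Mvvm's WeakReferenceMessenger and MAUI's Preferences storage (SettingsService and SettingChangedMessage are illustrative names, not from the linked answer):

using CommunityToolkit.Mvvm.Messaging;
using CommunityToolkit.Mvvm.Messaging.Messages;
using Microsoft.Maui.Storage;

// Message broadcast whenever a setting changes; payload is (key, new value).
public sealed class SettingChangedMessage : ValueChangedMessage<(string Key, string Value)>
{
    public SettingChangedMessage(string key, string value) : base((key, value)) { }
}

// Registered as a singleton: persists the value, then broadcasts the change.
public sealed class SettingsService
{
    public string Get(string key, string fallback = "") =>
        Preferences.Default.Get(key, fallback);

    public void Set(string key, string value)
    {
        Preferences.Default.Set(key, value);
        WeakReferenceMessenger.Default.Send(new SettingChangedMessage(key, value));
    }
}

// Anyone interested subscribes (e.g. in a view model); if nothing is
// listening, the message is simply dropped:
WeakReferenceMessenger.Default.Register<SettingChangedMessage>(this, (recipient, message) =>
{
    var (key, value) = message.Value;
    // e.g. re-create the HttpClient if the BaseAddress setting changed
});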
The interesting part is the custom code you have to write. In one place you may have the BaseAddress setting of an HttpClient. Good luck remembering that you have to re-construct it when the setting changes.
People are not doing this at runtime for a reason. You will infest your code with bugs.

Related

How do I set an app setting value using an Azure Function?

Wondering if it's possible for an Azure Function to set the value of an app setting.
For example, whilst developing locally or in production, one can read custom settings by binding them to a class:
builder.Services.AddOptions<SomeSettingClass>()
    .Configure<IConfiguration>((settings, configuration) =>
    {
        configuration.GetSection(nameof(SomeSettingClass)).Bind(settings);
    });
and obviously use the settings in the main function method. However, is it possible to set a value and persist it for the next run?
There are many solutions to this. Azure has Azure App Configuration, which basically has all the plumbing and provides the appropriate configuration sources and providers to plug into the ConfigurationRoot. This enables you to use the IOptionsMonitor feature to get real-time updates of configuration changes. There should also be a way to update it programmatically, though I have never tried it.
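For the consuming side, a small sketch of picking up those real-time updates through IOptionsMonitor, assuming the options are bound with change-token support (e.g. services.Configure<SomeSettingClass>(Configuration.GetSection(nameof(SomeSettingClass)))); the Worker class is purely illustrative:

using Microsoft.Extensions.Options;

public class Worker
{
    private SomeSettingClass _settings;

    public Worker(IOptionsMonitor<SomeSettingClass> monitor)
    {
        _settings = monitor.CurrentValue;

        // Fires every time the underlying configuration source reloads:
        monitor.OnChange(updated => _settings = updated);
    }
}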
However, I find Azure App Configuration completely overpriced and clunky for what it is.
On that note, if you are handy with a bit of code, you could create your own ConfigurationSource and ConfigurationProvider pointing to anywhere you like, for instance a CosmosDb container, and do exactly the same thing. You could even use a trigger and SignalR to publish those changes to any listening ConfigurationSource.
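As a rough sketch of that idea (the store behind Load() is stubbed out here; in practice it would query CosmosDb or whatever you point it at, and the type names are made up):

using Microsoft.Extensions.Configuration;

public sealed class MyStoreConfigurationSource : IConfigurationSource
{
    public IConfigurationProvider Build(IConfigurationBuilder builder) =>
        new MyStoreConfigurationProvider();
}

public sealed class MyStoreConfigurationProvider : ConfigurationProvider
{
    public override void Load()
    {
        // Fetch key/value pairs from your datastore of choice here; stubbed:
        Data["SomeSettingClass:SomeKey"] = "some value";
    }

    // Call this when the store signals a change (e.g. via SignalR) so that
    // IOptionsMonitor<T> observers are notified:
    public void Reload()
    {
        Load();
        OnReload();
    }
}

You would add the source to your configuration builder at startup; once Reload() fires OnReload(), any IOptionsMonitor-based consumers pick up the new values automatically.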
There are also various ways you can update a function's or app's configuration at runtime, through the CLI etc. (I am not sure if there are C# management libraries for this, though I guess there are somewhere). However, they will not be real-time and have no inbuilt way of notifying, as far as I know.
Which leaves you with the option of just doing a real-time lookup on a datastore ¯\_(ツ)_/¯. This way you can update and query at your own will, and the values will persist naturally.

How to best deploy Sentry in a cross-assembly environment?

So we built this library/framework thing full of code related to business processes and common elements that are shared across multiple applications (C#, .NET 4.7.1, WPF, MVVM). Our logging is all set up through this framework, so naturally it felt like the best place for Sentry. All the references in our individual applications are manually pointed to the DLLs in the folder where our shared library installs itself. So far so good.
When we set up Sentry initially, everything seemed to work great. We do some updates and errors seem to be going way down. That's because we are awesome and Sentry helped us be more awesome, right? Nope! Well, I mean, kind of.
The scope is being disposed of, so we are no longer getting unhandled exceptions. We didn't notice at first because we are still getting Sentry logs when we handle errors through our Logging.Log() method. This logging method calls SentrySdk.Init(), which I suspect is disposing the client in the executing assembly.
We also started using Sentry for some simple usage tracking by spinning up a separate project in Sentry called Usage-Tracker and passing a simple "DoThingApplication has been launched" with an ApplicationName.UsageTracker enum as a parameter to our logging method.
Question: What is a good way to handle this, where my setup can have a Sentry instance that wraps my using(sentryClientStuff){ ComposeObjects(); } and my logging method still looks for an existing client and uses it if it exists?
Caveats:
I believe before any of this happens we still need to make a call to send a Sentry log to our UsageTracker.
I would like to pass in as few options as possible if I'm setting up the Sentry Client/Scope in our shared library. Maybe Release and Environment. Maybe check tags for Fingerprint and set it in the Log method.
I'm open to new approaches to any of this.
Some related thoughts
Maybe there is a better way to handle references that could solve both this and some other pains, like when references become mismatched between client and shared framework/library.
Maybe the answer can be found through adding some unit tests, but I could use a Sentry-specific example or a nudge there, because I don't know much about that.
Maybe there is a way to use my shared library to return a Sentry Client or Scope that I could use in my client assembly that would not be so fragile and the library could somehow also use it.
Maybe there is a better solution I can't conceive because I'm just kind of an OK programmer and it escapes me. I'm open to any advice/correction/ridicule.
Maybe there is a smarter way to handle "Usage-Tracker" type signals in Sentry
Really I want a cross-assembly singleton kind of thing in practice.
There are really many things going on here. Also, without looking at any code it's hard to picture how things are laid out. There's a better chance you can get the answer you are looking for if you share some (even dummy) example of the structure of your project.
I'll try to break it down and address what I can anyway:
With regards to the Usage-Tracker:
You can create a new client and bind it to a scope. That way any use of the SentrySdk static class (which I assume your Logger.Log routes to) will pick it up.
In other words, call SentrySdk.Init as you currently do, with the options that are shared across any application using your shared library, and after that create a client using the DSN of your Usage-Tracker project in Sentry. Push a scope, bind the client, and you can use SentrySdk with it.
There's an example in the GitHub repo of the SDK:
using (SentrySdk.PushScope())
{
    SentrySdk.AddBreadcrumb(request.Path, "request-path");

    // Change the SentryClient in case the request is to the admin part:
    if (request.Path.StartsWith("/admin"))
    {
        // Within this scope, the _adminClient will be used instead of whatever
        // client was defined before this point:
        SentrySdk.BindClient(_adminClient);
    }

    SentrySdk.CaptureException(new Exception("Error at the admin section"));
    // Else it uses the default client
    _middleware?.Invoke(request);
} // Scope is disposed.
The SDK only has to be initialized once, but you can always create a new client with new SentryClient, push a new scope (SentrySdk.PushScope()) and bind the client to that new scope (SentrySdk.BindClient). Once you pop the scope, the client is no longer accessible via SentrySdk.CaptureException or any other method on the static class SentrySdk.
You can also use the client directly, without binding it to the scope at all.
using (var c = new SentryClient(new SentryOptions { Dsn = new Dsn("...") }))
{
    c.CaptureMessage("hello world!");
}
The using block is there to make sure the background thread flushes the event.
Central place to initialize the SDK:
There will be configuration which you want to have fixed in your shared framework/library, but surely each application (composition root) will have its own settings. Release is auto-discovered.
From docs.sentry.io:
The SDK will firstly look at the entry assembly’s AssemblyInformationalVersionAttribute, which accepts a string as value and is often used to set a GIT commit hash.
If that returns null, it’ll look at the default AssemblyVersionAttribute which accepts the numeric version number.
If you patch your assemblies on your build server, the correct Release should be reported automatically. If not, you could define it per application by taking a delegate that passes the SentryOptions as an argument.
Something like:
Framework code:
public static class MyLogging
{
    public static void Init(Action<SentryOptions> configuration)
    {
        var o = new SentryOptions();

        // Add things that should run for all users of this library:
        o.AddInAppExclude("SomePrefixTrueForAllApplications");
        o.AddEventProcessor(new GeneralEventProcessor());

        // Give the application a chance to reconfigure anything it needs:
        configuration?.Invoke(o);

        // Finally initialize the SDK with the combined options:
        SentrySdk.Init(o);
    }
}
App code:
static void Main()
{
    MyLogging.Init(o => o.Environment = "my env");
}
"The scope is being disposed of so we are no longer getting unhandled exceptions.":
Not sure I understand what's going on here. Pushing and popping (disposing) scopes don't affect the ability of the SDK to capture unhandled exceptions. Could you please share a repro?
"This logging method calls SentrySdk.Init() which I suspect is disposing the client in the executing assembly.":
Unless you create a client "by hand" with new SentryClient, there's only 1 client in the running process. Please note I said running process and not assembly. Instances are not held within an assembly. The assembly only contains the code that can be executed. If you call SentrySdk.CaptureException it will dispatch the call to the SentryClient bound to the current scope. If you didn't PushScope, there's always an implicit scope, the root scope. In this case it's all transparent enough you shouldn't care there's a scope in there. You also can't dispose of that scope since you never got a handle to do so (you didn't call PushScope so you didn't get what it returns to call Dispose on).
"All the references in our individual applications are manually pointed to the DLLs in the folder where our shared library installs itself.":
One thing to consider, depending on your environment is to distribute packages via NuGet. I'm unsure whether you expect to use these libraries in non .NET Framework applications (like .NET Core). But considering .NET Core 3.0 is bringing Windows Desktop framework support like WPF and WinForm, it's possible that eventually you will. If that's the case, consider targeting .NET Standard instead of .NET Framework for your code libraries.

Using multiple connection strings with Rebus & AzureServiceBus

We have been using Rebus to send commands to Azure Service Bus. We have a project that spans environments and needs to send commands to two different ASB namespaces (different connection strings).
The way we currently register Rebus doesn't allow us to create a factory or use multiple namespaces (that I'm aware of).
Inside Startup.cs ConfigureServices(...) method:
services.AddRebus(config =>
{
    var asbConfig = Configuration.GetSection("AzureServiceBusConfiguration").Get<AzureServiceBusConfiguration>();

    config
        .Logging(l => l.Serilog(Log.Logger))
        .Transport(t => t.UseAzureServiceBusAsOneWayClient(asbConfig.ConnectionString))
        .Routing(r => r.TypeBased().Map<MyCommand>($"{asbConfig.Environment}/myQueueName"));

    return config;
});
I've tried attacking this from several different directions, and all have fallen short. Is there a supported way to register more than one IBus configuration with different connection strings?
We basically need to spin this up per request scope so we can configure Rebus based on a request header value. Not sure where to start with this.
While Rebus has pretty good support for inserting itself into an IoC container via the "container adapter" concept, it doesn't necessarily make sense to always have it do so automatically.
In this case, I suggest you wrap the one-way clients in a dedicated class, e.g. something like a CommandSender, and then the command sender can initialize its one-way client in the constructor (and dispose it again in its Dispose method), as sketched below.
One-way clients are fairly inexpensive to create, so it might be OK to simply create/dispose them every time you need them. If you need them often though, I suggest you use a ConcurrentDictionary to store the initialized instances – just remember to dispose them all when your application shuts down.
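A hedged sketch of what such a CommandSender could look like (the class name is illustrative, and MyCommand and the queue naming are borrowed from the question):

using System;
using System.Threading.Tasks;
using Rebus.Activation;
using Rebus.Bus;
using Rebus.Config;
using Rebus.Routing.TypeBased;

public sealed class CommandSender : IDisposable
{
    private readonly BuiltinHandlerActivator _activator = new BuiltinHandlerActivator();
    private readonly IBus _bus;

    public CommandSender(string connectionString, string environment)
    {
        // One-way client: no input queue, send-only.
        _bus = Configure.With(_activator)
            .Transport(t => t.UseAzureServiceBusAsOneWayClient(connectionString))
            .Routing(r => r.TypeBased().Map<MyCommand>($"{environment}/myQueueName"))
            .Start();
    }

    public Task Send(MyCommand command) => _bus.Send(command);

    public void Dispose() => _activator.Dispose(); // disposes the bus too
}

Per request you could then pick the connection string based on the header value and either create a sender on the spot or fetch a cached one from a ConcurrentDictionary keyed by connection string.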

Switching of dependency injection bindings at startup based on input - good or bad practice?

I'm writing a console application for .Net Core, and using the Microsoft.Extensions.DependencyInjection package.
The console app will take switches at the command line which will change its behaviour, but the intent remains the same each time:
Take data from a database,
massage said data into a common format,
output some kind of report,
and send it to some destination.
I'm considering doing something like this:
string actionSwitch = "a"; // this would come from the command line

var serviceCollection = new ServiceCollection();

switch (actionSwitch)
{
    case "a": // set up bindings for application mode a
        serviceCollection.AddSingleton<IDatabaseReaderService, ReadFromMySqlService>();
        serviceCollection.AddSingleton<IReportGeneratorService, HtmlReportGeneratorService>();
        serviceCollection.AddSingleton<IReportOutputService, OutputReportToDiskService>();
        break;
    case "b": // set up bindings for application mode b
        serviceCollection.AddSingleton<IDatabaseReaderService, ReadFromXmlFileService>();
        serviceCollection.AddSingleton<IReportGeneratorService, PdfReportGeneratorService>();
        serviceCollection.AddSingleton<IReportOutputService, OutputReportToFtpService>();
        break;
}

serviceCollection.AddSingleton<IReportProcessService, ReportProcessService>();

var serviceProvider = serviceCollection.BuildServiceProvider();
var process = serviceProvider.GetService<IReportProcessService>();
process.Execute();
i.e. the bindings are configured based on the user's input at the command line.
Having only recently started to use DI, all the examples I've seen follow the same pattern:
Declare the bindings in a Startup or Initialisation class.
Leave them alone forever after.
Does the above code represent reasonable use of DI, or is it bad practice to use application logic to select bindings at startup?
Your question title is a bit misleading. When we talk about 'runtime' in the context of DI, we are typically referring to everything that happens after the bindings have been configured. What you are doing is not runtime, but rather startup-time or configuration-time (not to be confused with compile-time, by the way).
Whether the decision on how to wire your dependencies comes from a config file, a command line argument or a database is irrelevant here. As long as they are all constants that are known at startup-time, what you're doing is completely fine, sane, and actually good practice.
Things change however when you are actually trying to change your bindings at runtime, i.e. changing the container while the application is running. That would be considered to be a bad practice.
There are many reasons why this is bad practice, and a lot has been written about it, such as here and here, and this is the main reason why most DI containers in the .NET space are now moving to an immutable model (1, 2, 3).
In case different components need to be called due to variables that actually change during runtime (as opposed to values that are constant after startup), the advice is to use adapter and proxy classes that hide the fact that dispatching takes place at runtime. For an example, read this.
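As a hedged illustration of that last point, a proxy that picks the component per call from a runtime-varying value while the container wiring stays fixed (the service names are borrowed from the question; the Generate() member and the format delegate are assumptions):

using System;

// Assumed shape of the interface from the question:
public interface IReportGeneratorService
{
    void Generate();
}

public sealed class ReportGeneratorProxy : IReportGeneratorService
{
    private readonly HtmlReportGeneratorService _html;
    private readonly PdfReportGeneratorService _pdf;
    private readonly Func<string> _currentFormat; // value that changes at runtime

    public ReportGeneratorProxy(
        HtmlReportGeneratorService html,
        PdfReportGeneratorService pdf,
        Func<string> currentFormat)
    {
        _html = html;
        _pdf = pdf;
        _currentFormat = currentFormat;
    }

    public void Generate()
    {
        // Dispatch at runtime without mutating the container:
        IReportGeneratorService inner =
            _currentFormat() == "pdf" ? (IReportGeneratorService)_pdf : _html;
        inner.Generate();
    }
}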
TLDR;
Changing bindings during the application's lifetime: bad.
Configuring the container up-front once: good.
Using proxies and adapters to change components being used at runtime: good.

Need help avoiding the use of a Singleton

I'm not a hater of singletons, but I know they get abused and for that reason I want to learn to avoid using them when not needed.
I'm developing an application to be cross platform (Windows XP/Vista/7, Windows Mobile 6.x, Windows CE5, Windows CE6). As part of the process I am refactoring code out into separate projects to reduce code duplication, and hence have a chance to fix the mistakes of the initial system.
One such part of the application that is being made separate is quite simple: it's a profile manager. This project is responsible for storing Profiles. It has a Profile class that contains some configuration data that is used by all parts of the application. It has a ProfileManager class which contains Profiles. The ProfileManager will read/save Profiles as separate XML files on the hard drive, and allow the application to retrieve and set the "active" Profile. Simple.
On the first internal build, the GUI was the anti-pattern Smart GUI. It was a WinForms implementation without MVC/MVP, done because we wanted it working sooner rather than being well engineered. This led to ProfileManager being a singleton, so that the GUI could access the active Profile from anywhere in the application.
This meant I could just call ProfileManager.Instance.ActiveProfile to retrieve the configuration for different parts of the system as needed. Each GUI could also make changes to the profile, so each GUI had a save button, and they all had access to the ProfileManager.Instance.SaveActiveProfile() method as well.
I see nothing wrong with using the singleton here, yet I know singletons aren't ideal. Is there a better way this should be handled? Should an instance of ProfileManager be passed into every Controller/Presenter? When the ProfileManager is created, should other core components be created and register to events for when profiles are changed? The example is quite simple, and probably a common feature in many systems, so I think this is a great place to learn how to avoid singletons.
P.S. I'm having to build the application against Compact Framework 3.5, which limits a lot of the normal .NET Framework classes that can be used.
One of the reasons singletons are maligned is that they often act as a container for global, shared, and sometimes mutable, state. Singletons are a great abstraction when your application really does need access to global, shared state: your mobile app that needs to access the microphone or audio playback needs to coordinate this, as there's only one set of speakers, for instance.
In the case of your application, you have a single, "active" profile, that different parts of the application need to be able to modify. I think you need to decide whether or not the user's profile truly fits into this abstraction. Given that the manifestation of a profile is a single XML file on disk, I think it's fine to have as a singleton.
I do think you should either use dependency injection or a factory pattern to get hold of a profile manager, though. You only need to write a unit test for a class that requires the use of a profile to understand the need for this; you want to be able to pass in a programmatically created profile at runtime, otherwise your code will have a tightly coupled dependency on some XML file on disk somewhere.
One thing to consider is to have an interface for your ProfileManager, and pass an instance of that to the constructor of each view (or anything) that uses it. This way, you can easily have a singleton, or an instance per thread / user / etc, or have an implementation that goes to a database / web service / etc.
Another option would be to have all the things that use the ProfileManager call a factory instead of accessing it directly. Then that factory could return an instance, again it could be a singleton or not (go to database or file or web service, etc, etc) and most of your code doesn't need to know.
Doesn't answer your direct question, but it does make the impact of a change in the future close to zero.
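A small sketch of the interface idea from the two suggestions above (IProfileManager and the presenter are illustrative; Profile and ProfileManager are from the question):

public interface IProfileManager
{
    Profile ActiveProfile { get; }
    void SaveActiveProfile();
}

// A presenter depends on the abstraction instead of ProfileManager.Instance,
// so a unit test can pass in a fake manager holding an in-memory Profile:
public class SettingsPresenter
{
    private readonly IProfileManager _profiles;

    public SettingsPresenter(IProfileManager profiles)
    {
        _profiles = profiles;
    }

    public void Save()
    {
        // ...apply the view's edits to _profiles.ActiveProfile...
        _profiles.SaveActiveProfile();
    }
}

The production ProfileManager can still be a singleton behind this interface; nothing that consumes IProfileManager needs to know.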
"Singletons" are really only bad if they're essentially used to replace "global" variables. In this case, and if that's what it's being used for, it's not necessarily Singleton anyway.
In the case you describe, it's fine, and in fact ideal so that your application can be sure that the Profile Manager is available to everyone that needs it, and that no other part of the application can instantiate an extra one that will conflict with the existing one. This reduces ugly extra parameters/fields everywhere too, where you're attempting to pass around the one instance, and then maintaining extra unnecessary references to it. As long as it's forced into one and only one instantiation, I see nothing wrong with it.
Singleton was designed to avoid multiple instantiations and single point of "entry". If that's what you want, then that's the way to go. Just make sure it's well documented.
