I'm using an Azure Durable Function to orchestrate other functions, currently contained in the same project. I want to configure services and logging for those orchestrated functions. How can I do that?
Here is some more detail:
In a "normal" Azure Function I have a Program.cs and a Main method with the following code that sets up the environment for the function execution:
var host = new HostBuilder()
.ConfigureFunctionsWorkerDefaults()
.ConfigureLogging(loggingBuilder => { loggingBuilder.SetMinimumLevel(LogLevel.Trace); }) // ...and so on
Using the HostBuilder I can add additional logging providers, add caching services etc. Those services are then injected via the constructor of the Azure Function.
Now in comparison when creating a Durable Function project via the VS Code "Durable Function Orchestration" template there is no Program.cs, no HostBuilder and no constructor. There are just some static methods representing the orchestrator and an orchestrated function.
As there is no out-of-the-box HostBuilder in the "Durable Function Orchestration" template - what does the HostBuilder equivalent look like for Durable Functions? What's the pattern or convention here? Do I write it myself? Or is there some instance floating around or some initialization I can hook into? Or should orchestrated functions be put into separate Azure Function projects where I can make use of the HostBuilder?
Any hints are appreciated.
By default an ILogger instance is injected into your functions as a method parameter, unless you are using DI. All you need to do is use the ILogger:
[FunctionName("funcname")]
public async static Task RunOrchestrator(
[OrchestrationTrigger] DurableOrchestrationContext context,
ILogger log)
{
log.LogInformation("Starting Orchestration");
}
In case you are using dependency injection, you just need to call builder.Services.AddLogging(); in your startup.
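For illustration, here is a minimal sketch of such a startup class, assuming the in-process model and the Microsoft.Azure.Functions.Extensions.DependencyInjection package:
using Microsoft.Azure.Functions.Extensions.DependencyInjection;
using Microsoft.Extensions.DependencyInjection;

[assembly: FunctionsStartup(typeof(MyNamespace.Startup))]

namespace MyNamespace
{
    public class Startup : FunctionsStartup
    {
        public override void Configure(IFunctionsHostBuilder builder)
        {
            // Registers the logging services so ILogger<T> can be constructor-injected.
            builder.Services.AddLogging();
        }
    }
}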
So the solution is to use a FunctionsStartup class as outlined here. This makes dependency injection work for Durable Functions as well.
For me it did not work immediately though, and it took a while to figure out why. What I tried first was adding an additional parameter (myService) to the static methods, like so:
[FunctionName("DurableFunctionsOrchestrationCSharp1_Hello")]
public static string SayHello([ActivityTrigger] string name, ILogger log, IMyService myService)
{
log.LogInformation($"Saying hello to {name}.");
return $"Hello {name}!";
}
I also added the startup class according to the documentation that is supposed to provide the implementation for IMyService.
This never worked. The error I got is this:
Microsoft.Azure.WebJobs.Host: Error indexing method
'DurableFunctionsOrchestrationCSharp1_Hello'.
Microsoft.Azure.WebJobs.Host: Cannot bind parameter 'myService' to
type IMyService. Make sure the parameter Type is supported by the
binding. If you're using binding extensions (e.g. Azure Storage,
ServiceBus, Timers, etc.) make sure you've called the registration
method for the extension(s) in your startup code (e.g.
builder.AddAzureStorage(), builder.AddServiceBus(),
builder.AddTimers(), etc.).
This error message suggests that it should work while in reality it never does.
The solution was to get rid of the static methods and use classes with constructors. Constructor injection WORKS.
The working class looks like this:
public class Activities
{
IMyService _service;
public Activities(IMyService service)
{
_service = service;
}
[FunctionName("DurableFunctionsOrchestrationCSharp1_Hello")]
public string SayHello([ActivityTrigger] string name, ILogger log)
{
log.LogInformation($"Saying hello to {name} {_service.GetType()}.");
return $"Hello {name}!";
}
}
Note that I moved the function here and made it non-static.
The constructor is properly invoked, given an IMyService instance created by the Startup class, and then the function is executed.
The minimal startup class I used for testing looks like this:
using Microsoft.Azure.Functions.Extensions.DependencyInjection;
using Microsoft.Extensions.DependencyInjection;
[assembly: FunctionsStartup(typeof(MyNamespace.Startup))]
namespace MyNamespace
{
public interface IMyService
{
}
public class MyService : IMyService
{
}
public class Startup : FunctionsStartup
{
public override void Configure(IFunctionsHostBuilder builder)
{
builder.Services.AddSingleton<IMyService>((s) => {
return new MyService();
});
}
}
}
So dependency injection works for Durable Functions, if you are injecting into constructors.
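For completeness, here is a minimal sketch of a non-static orchestrator class that calls the activity above. This assumes the Durable Functions extension 2.x, where the context type is IDurableOrchestrationContext (the 1.x template code in the question uses DurableOrchestrationContext):
using System.Threading.Tasks;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.DurableTask;

public class Orchestrations
{
    [FunctionName("DurableFunctionsOrchestrationCSharp1")]
    public async Task<string> RunOrchestrator(
        [OrchestrationTrigger] IDurableOrchestrationContext context)
    {
        // Activities are still addressed by their function name.
        return await context.CallActivityAsync<string>(
            "DurableFunctionsOrchestrationCSharp1_Hello", "Tokyo");
    }
}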
Related
I'm building an application that performs actions initiated by a user. One particular class has dependencies on things I can wire up in DI, such as an ILogger instance and an HttpClient, in addition to runtime arguments that identify the user and the instance of the action (mostly to be used while logging, to help with debugging).
The trouble I have is that I'm not entirely sure how to inject this class into the other classes that need it as a result of the runtime dependencies.
Here's a simplified example of one of my classes:
public class Dependency : IDependency
{
private readonly HttpClient httpClient;
private readonly ILogger<Dependency> logger;
private readonly RuntimeDeps runtimeDeps;
public Dependency(
ILogger<Dependency> logger,
HttpClient httpClient,
RuntimeDeps runtimeDeps)
{
// set private fields
}
public Result DoStuff()
{
// use Http client to talk to external API
// something fails so log the failure and some helpful info
logger.log($"{runtimeDeps.InstanceId} failed. " +
"Initiated by {runtimeDeps.UserName}");
}
}
This feels like it requires a factory to create, but is it then best to request the HttpClient and ILogger in the factory method, or to declare them as dependencies of the factory? If the latter, I presume the factory would have to be registered as transient or scoped, since registering it as a singleton would result in a captive dependency (I think).
Any suggestions on redesigns are also welcome if this is a symptom of a poor design. I'd love to implement Mark Seeman's Pure DI to get some more assistance from the compiler but I don't know if that's possible in Azure functions.
A transient factory with the transient dependencies injected into the constructor and the runtime dependencies as parameters of the Create method will work fine.
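A rough sketch of that factory shape, reusing the Dependency, IDependency and RuntimeDeps types from the question (the factory interface and names are illustrative):
public interface IDependencyFactory
{
    IDependency Create(RuntimeDeps runtimeDeps);
}

public class DependencyFactory : IDependencyFactory
{
    private readonly ILogger<Dependency> logger;
    private readonly HttpClient httpClient;

    // The DI-resolvable pieces come in through the constructor...
    public DependencyFactory(ILogger<Dependency> logger, HttpClient httpClient)
    {
        this.logger = logger;
        this.httpClient = httpClient;
    }

    // ...while the runtime data arrives as a parameter of Create.
    public IDependency Create(RuntimeDeps runtimeDeps) =>
        new Dependency(logger, httpClient, runtimeDeps);
}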
DI is baked into the Azure Functions library in the sense that parameters are injected into the trigger methods, but beyond these you should be able to use Pure DI to manage your own dependencies by calling into some composition root helper class from the trigger function, which knows how to build your dependency graph in a pure manner.
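As a sketch of that idea, the trigger method could hand its inputs to a small hand-wired composition root (Compose and the shared HttpClient are illustrative; Dependency and RuntimeDeps come from the question):
public static class CompositionRoot
{
    // Reused across invocations; HttpClient is intended to be shared.
    private static readonly HttpClient SharedClient = new HttpClient();

    // Pure DI: the graph is wired by hand in one place, no container involved.
    public static IDependency Compose(ILogger<Dependency> logger, RuntimeDeps runtimeDeps) =>
        new Dependency(logger, SharedClient, runtimeDeps);
}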
Instead of requiring runtime data during the construction of a component, it's better to let runtime data flow through method calls on an initialized object graph by either:
passing runtime data through method calls of the API or
retrieving runtime data from specific abstractions that allow resolving runtime data.
I formalized this in 2015 in this blog post, which I referred to in the comments.
After reading your additional comments, I came to the conclusion that in your case option 2 is most suited, as the data you are sending is likely an implementation detail to the component, and should not be part of the public API.
In that case, you can redesign your component as follows:
public class Dependency : IDependency
{
public Dependency(
ILogger<Dependency> logger,
HttpClient httpClient,
IRuntimeDepsProvider provider) ...
public Result DoStuff()
{
// use Http client to talk to external API
// something fails so log the failure and some helpful info
logger.log($"{provider.InstanceId} failed. " +
$"Initiated by {provider.UserName}");
}
}
IRuntimeDepsProvider is an abstraction that hides the retrieval and storage of runtime data. This gives you the ability to postpone the decision to either use a Closure Composition Model or an Ambient Composition Model until the Last Responsible Moment.
Using the IRuntimeDepsProvider abstraction, you can choose to set the incoming runtime values after the object graph is constructed. For instance:
public class MyFunction
{
// Notice the different abstraction here
public MyFunction(
IRuntimeDepsInitializer initializer,
IHandler<Something> handler) ...
public void TheFunction(Guid instanceId, string userName, Something cmd)
{
// Setting the runtime data *after* the object graph is constructed,
initializer.SetData(instanceId, userName);
// but before the graph's public methods are invoked.
handler.Handle(cmd);
}
}
Here, a second abstraction is introduced, namely IRuntimeDepsInitializer. Now you can have one class implementing both interfaces:
public class RuntimeDepsStorage : IRuntimeDepsInitializer, IRuntimeDepsProvider
{
public Guid InstanceId { get; private set; }
public string UserName { get; private set; }
public void SetData(Guid id, string name)
{
InstanceId = id;
UserName = name;
}
}
TIP: Instead of using two interfaces, you can also use only IRuntimeDepsProvider and let MyFunction depend on the concrete RuntimeDepsStorage. Which solution is best depends on the context.
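A short sketch of that variant, reusing the names from above (SetData stays on the concrete class, while consumers keep depending on IRuntimeDepsProvider):
public class MyFunction
{
    private readonly RuntimeDepsStorage storage;
    private readonly IHandler<Something> handler;

    public MyFunction(RuntimeDepsStorage storage, IHandler<Something> handler)
    {
        this.storage = storage;
        this.handler = handler;
    }

    public void TheFunction(Guid instanceId, string userName, Something cmd)
    {
        storage.SetData(instanceId, userName); // the concrete type exposes the setter
        handler.Handle(cmd);
    }
}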
Now the main trick here is to make sure that RuntimeDepsStorage becomes a Scoped dependency, because you want to reuse it throughout a request, but not shared by multiple requests.
When applying Pure DI, this would look like this:
var storage = new RuntimeDepsStorage();
var function = new MyFunction(
    initializer: storage,
    handler: new SomethingHandler(
        stuffDoer: new Dependency(
            httpClient: client, // Did you notice this is a runtime dep as well?
            logger: new Logger<Dependency>(),
            provider: storage)));
If, on the other hand, you would be using MS.DI as your DI Container, registration would be similar to the following:
services.AddScoped(_ => new RuntimeDepsStorage());
services.AddScoped<IRuntimeDepsProvider>(
c => c.GetRequiredService<RuntimeDepsStorage>());
services.AddScoped<IRuntimeDepsInitializer>(
c => c.GetRequiredService<RuntimeDepsStorage>());
// etc, your usual registrations here
What is the benefit of using services.AddSingleton<SomeService, SomeServiceImplementation>() instead of services.AddSingleton<SomeServiceImplementation>() ?
For example, I've got a sample interface:
interface ISampleInterface
{
void DoSomething();
}
And a Sample-Class:
class SampleClass : ISampleInterface
{
public void DoSomething()
{
console.write("hi");
}
}
Now I do services.AddSingleton<SampleClass>().
Why or when to use services.AddSingleton<ISampleInterface, SampleClass>() ?
Thanks for your help! :-)
services.AddSingleton<ISampleInterface, SampleClass>() allows you to register different implementations for the same interface without modifying the rest of your code.
Change implementations with minimal effort
Suppose you have an ILogger interface and implementations that log e.g. to the browser's console or send the log entries to different services, e.g. ConsoleLogger, MyServiceLogger or PrometheusLogger. If you registered only the implementation, with e.g. services.AddSingleton<ConsoleLogger>(), you'd have to change all of your classes each time you changed the logger implementation.
You'd have to go to each page and change
@inject ConsoleLogger logger;
to
@inject MyServiceLogger logger;
Forget about specifying the logger at runtime too. You'd have to deploy the application each time you wanted to use a new logging service.
By registering the interface and a specific implementation, all of your classes can keep using ILogger<T> and never know that the implementation has changed.
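As a small illustration with the types from the question, the consumer only ever sees the interface, so swapping the registered implementation never touches it:
public class Consumer
{
    private readonly ISampleInterface sample;

    public Consumer(ISampleInterface sample)
    {
        this.sample = sample;
    }

    public void Run() => sample.DoSomething();
}

// The implementation is chosen in exactly one place:
// services.AddSingleton<ISampleInterface, SampleClass>();
// services.AddSingleton<Consumer>();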
Implementation selection at runtime
You could even change the implementation at runtime, based on environment variables, configuration, or any other logic you want, e.g.:
if (env.IsDevelopment()) // env: the injected host environment
{
services.AddSingleton<ILogger,ConsoleLogger>();
}
else
{
services.AddSingleton<ILogger,MyServiceLogger>();
}
Unit Testing
In unit tests you could use a null logger - in fact the logging framework has a NullLogger class just for this reason, in the core Abstractions package.
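For example, assuming Microsoft.Extensions.Logging is used, the built-in null logger can be dropped in without any custom code:
using Microsoft.Extensions.Logging;
using Microsoft.Extensions.Logging.Abstractions;

// A logger that safely ignores everything written to it - handy as a test double.
ILogger<SampleClass> logger = NullLogger<SampleClass>.Instance;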
Or you could wrap your test framework's output methods into an ILogger implementation and use that, without modifying the code. xUnit, for example, uses the ITestOutputHelper interface for this. You could create an XUnitLogger that forwards calls to this interface:
public class XUnitLogger : ILogger
{
private readonly ITestOutputHelper _output;
public XUnitLogger(ITestOutputHelper output)
{
_output = output;
}
...
void Log(...)
{
_output.WriteLine(...);
}
}
I'm not sure what is the best way to achieve what I am trying to accomplish so let me give you an example.
I am using Azure Functions, which are stateless, with the following signature.
public static Task Run(Message message, ILogger logger)
{
var controller = Main.Container.GetInstance<ConsumerController>();
// How can I attach the passed in logger instance so the rest of
// the services for the current flow re-use this instance?
return controller.Execute(message);
}
As you can see, the azure function framework passes me an instance of the ILogger already configured and initialized for this function call only.
I read through the documentation and I think I need a new scope here but I'm not sure. I only want this ILogger instance to be used during the async execution of this one method call. Each function call will use their own.
And just to be clear, the controller is only one of possibly many services (services, repositories, request handlers) involved in the execution of the task.
Any help would be great.
You can do the following:
Create a Proxy (e.g. ProxyLogger) implementation that implements ILogger, contains an ILogger Logger property, and forwards any call to that property.
Register that Proxy both as ILogger and as ProxyLogger with Lifestyle.Scoped.
Resolve ProxyLogger within your function.
Set ProxyLogger.Logger using the function's supplied ILogger.
Resolve the root object and use it.
Create a Proxy:
public class ProxyLogger : ILogger
{
public ILogger Logger { get; set; }
public void Log<TState>(LogLevel l, EventId id, TState s, Exception ex,
Func<TState,Exception,String> f) =>
this.Logger.Log<TState>(l, id, s, ex, f);
// Implement other functions
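    // For example (a sketch), the remaining Microsoft.Extensions.Logging.ILogger
    // members simply forward to the wrapped logger as well:
    public bool IsEnabled(LogLevel logLevel) => this.Logger.IsEnabled(logLevel);
    public IDisposable BeginScope<TState>(TState state) => this.Logger.BeginScope(state);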
}
Register that Proxy:
container.Register<ProxyLogger>(Lifestyle.Scoped);
container.Register<ILogger, ProxyLogger>(Lifestyle.Scoped);
Resolve ProxyLogger within your function, set ProxyLogger.Logger using the function's supplied ILogger, and resolve the root object and use it.
public static Task Run(Message message, ILogger logger)
{
using (AsyncScopedLifestyle.BeginScope(Main.Container))
{
Main.Container.GetInstance<ProxyLogger>().Logger = logger;
var controller = Main.Container.GetInstance<ConsumerController>();
return controller.Execute(message);
}
}
I do think, however, that this model leads to a very large amount of infrastructural code. Preferably you wish to keep this infrastructure to the absolute minimum. Instead, you could try keeping your Azure Functions small Humble Objects, as described here. That might not completely solve your initial problem, but you might not need to have a method specific logger anyway. But if, on the other hand, you need that, you might be able to mix that Humble Object approach with the use of C#'s CallerMemberName attribute and the ProxyLogger approach. You don't really need the Azure injected ILogger to do that for you.
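To sketch that last suggestion (a hypothetical helper, not part of any SDK): CallerMemberName lets the compiler fill in the calling function's name, so a shared logging helper can tag entries per function without relying on the Azure-injected ILogger:
using System.Runtime.CompilerServices;
using Microsoft.Extensions.Logging;

public static class FunctionLogging
{
    // The compiler substitutes the caller's method name at the call site.
    public static void LogStart(ILogger logger, [CallerMemberName] string functionName = "")
        => logger.LogInformation("Function {FunctionName} started", functionName);
}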
I want to support logging in my .NET Standard project, which will be consumed by a .NET Core console or web client. However, I don't want to presume the client passes an ILogger dependency into the constructors of the classes I wish to log from.
If the logger does not exist, I basically don't want to fail because of this.
So my question is how can I reference ILogger in my code without passing it to the constructor?
using Microsoft.Extensions.Logging;
namespace MyApp
{
public class MyClass
{
//slf4net logger implementation
private static readonly slf4net.ILogger _slf4netLogger = slf4net.LoggerFactory.GetLogger(typeof(MyClass));
//Microsoft.Extensions.Logging???
private static readonly ILogger<MyClass> _logger = ???
public MyClass()
{
//Constructor empty
}
public void MyMethod()
{
//slf4net logger works like this
_slf4netLogger.Trace("This got logged");
//this won't work because the logger was never passed from the constructor
_logger.LogInformation("A message for the log if one is listening");
}
}
}
references:
https://github.com/ef-labs/slf4net
https://learn.microsoft.com/en-us/aspnet/core/fundamentals/logging?tabs=aspnetcore2x
It seems like I'm not alone with my frustration here
Accessing the Logging API Outside of a MVC Controller
OK, so this is where the new logging API quickly becomes a nightmare.
- https://stackify.com/net-core-loggerfactory-use-correctly/
Is there any way for the default Functions class that comes in WebJob projects to be internal? We are using a job activator to inject via Unity some dependencies that are internal, which requires that the Functions class also be internal. When running the web job, we are seeing the following error:
No job functions found. Try making your job classes and methods public. If you're using binding extensions (e.g. ServiceBus, Timers, etc.) make sure you've called the registration method for the extension(s) in your startup code (e.g. config.UseServiceBus(), config.UseTimers(), etc.).
When we make all the dependencies public, it works fine, so I know there's nothing wrong with my triggers or my job host config.
Here's my Program class:
class Program
{
static void Main()
{
var config = new JobHostConfiguration
{
JobActivator = new Activator(new UnityContainer())
};
config.UseServiceBus();
var host = new JobHost(config);
host.RunAndBlock();
}
}
Here's a simplified version of my Functions class:
internal class Functions
{
private readonly IMyInternalDependency _dependency;
public Functions(IMyInternalDependency dependency)
{
_dependency = dependency;
}
public void DoSomething([ServiceBusTrigger("my-queue")] BrokeredMessage message)
{
// Do something with the message
}
}
You must make the Functions class public. That appears to be just how Azure WebJobs works. You don't need to expose your concrete internal classes publicly. Just the interfaces:
public interface IDoStuffPublically
{
void DoSomething();
}
interface IDoStuffInternally
{
void DoSomething();
void DoSomethingInternally();
}
class DoStuff : IDoStuffPublically, IDoStuffInternally
{
public void DoSomething()
{
// ...
}
public void DoSomethingInternally()
{
// ...
}
}
And then your Functions class:
public class Functions
{
public Functions(IDoStuffPublically stuff)
{
_stuff = stuff;
}
private IDoStuffPublically _stuff;
// ...
}
And Unity will do something like this:
var job = new Functions(new DoStuff());
Dave commented:
It's frustrating that I cannot simply set the internals visible to the WebJob SDK...
You might be able to accomplish this... miiiiiiiiiight be able to...
There is a way for an assembly or executable to grant another assembly the permission to access internal members. I've done this before on a class library to allow my unit tests to call internal methods on a class as part of setting up a unit test.
If you know which assembly in Azure WebJobs actually creates the instance of your Functions class, and the assembly that invokes the methods on that class, you could white list those assemblies.
Crack open AssemblyInfo.cs and add one or more lines:
[assembly: InternalsVisibleTo("Microsoft.Azure.Something.Something")]
Reference: InternalsVisibleToAttribute class
Related reading: .Net Tips – using InternalsVisibleTo attribute to help testing non-public methods
I'm not sure which assemblies you would need to add, though.
When using Triggers with the Webjob SDK, you never register the functions to be executed.
When the jobhost starts (new JobHost(config).RunAndBlock()), it discovers the functions to be executed based on parameter attributes.
Let's have a look at your code:
var config = new JobHostConfiguration
{
JobActivator = new Activator(new UnityContainer())
};
config.UseServiceBus();
Because you specify that you want to use servicebus, when the jobhost starts, it will discover and register (index) all the functions that have a parameter with the ServiceBusTrigger attribute.
I assume that the SDK uses something like MemberInfo.GetCustomAttributes to index the functions, so I don't know whether it is possible (or easy) to get attributes from an internal class.
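To illustrate why visibility matters here, a rough sketch (not the SDK's actual code) of attribute-based indexing: if discovery only enumerates public types and methods, an internal Functions class is never inspected, which matches the "No job functions found" error.
using System;
using System.Linq;
using System.Reflection;

static class TriggerIndexer
{
    public static void ListCandidates(Assembly assembly)
    {
        // Only exported (public) types are considered, mirroring the behavior described above.
        var methods = assembly.GetExportedTypes()
            .SelectMany(t => t.GetMethods(BindingFlags.Public | BindingFlags.Instance | BindingFlags.Static))
            .Where(m => m.GetParameters().Any(p =>
                p.GetCustomAttributes().Any(a => a.GetType().Name.EndsWith("TriggerAttribute"))));

        foreach (var m in methods)
            Console.WriteLine($"{m.DeclaringType?.Name}.{m.Name}");
    }
}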