Azure Functions DbContext request delays - Connect to database at startup instead? - c#

I have an Azure Functions app where I'm trying to eliminate delays in requests as much as possible. To combat cold start times, we've upgraded our Azure Functions plan to ensure we generally have one or more pre-warmed instances ready to go.
However, even with a pre-warmed instance, the very first HttpTrigger call to a newly launched function is delayed because it needs to establish a connection to the database. It appears a database connection is not established until the DataContext is instantiated, which in turn doesn't happen until it is needed by an HttpTrigger. After that first request to the database, everything is quite performant.
I'm using Dependency Injection to create a DbContextPool in my FunctionsStartup class:
services.AddDbContextPool<DataContext>(options => {
    options.UseSqlServer(connectionString);
});
I understand that establishing a database connection is going to naturally take a little bit of time, but is there any way to get Azure Functions to get its connection pool going at startup rather than waiting until the first HttpTrigger to instantiate my DbContext and connect to the database?

I was able to figure out a solution thanks to this answer. The great thing about this solution is that it not only works for pre-warming database/DbContext connections, but can be used to pre-warm all sorts of connections (e.g., Storage Accounts, Key Vault access, etc.).
using System;
using System.Threading.Tasks;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Host.Config;
using Microsoft.Azure.WebJobs.Hosting;
using Microsoft.Extensions.DependencyInjection;

[assembly: WebJobsStartup(typeof(MyCompany.MyProduct.MyFunctionAppInitializer), "MyFunctionAppInitializer")]

namespace MyCompany.MyProduct;

public class MyFunctionAppInitializer : IWebJobsStartup
{
    public void Configure(IWebJobsBuilder builder)
    {
        builder.AddExtension<MyFunctionAppInitializerConfigProvider>();
    }
}

internal class MyFunctionAppInitializerConfigProvider : IExtensionConfigProvider
{
    private readonly IServiceScopeFactory scopeFactory;

    public MyFunctionAppInitializerConfigProvider(IServiceScopeFactory scopeFactory)
    {
        this.scopeFactory = scopeFactory;
    }

    public void Initialize(ExtensionConfigContext context)
    {
        using IServiceScope scope = scopeFactory.CreateScope();
        Task preWarmTask = PreWarmConnections(scope.ServiceProvider);
        preWarmTask.Wait();
    }

    private static async Task PreWarmConnections(IServiceProvider serviceProvider)
    {
        // Connect to Database
        var dbContext = serviceProvider.GetService<MyDbContext>();
        await dbContext.PingDatabase();

        // Connect to Storage Access
        await MyStorageAccess.PingStorageAccess();

        // Connect to Signing Key Vault
        MyAuthorization.InitializeCryptoClient();
    }
}
As a bit of background: if you're like me and first tried putting a DbContext instantiation/connection right in the Startup class, you'll quickly discover you get all sorts of crashing errors (usually logging-related) when you try to deploy this to your Function environment. This is because the Function App is not (yet) fully initialized in the Configure() method of that class.
Using a WebJobsStartup call, however, seems to occur after everything is neatly initialized, but still upon launch of the instance. In practice, this has allowed me to get all the connections going that I needed, so requests made to my Function app are decently snappy.
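For completeness, PingDatabase above is not an EF Core API; it is a small custom method on the context whose only job is to force a connection to be opened. A minimal sketch of what it might look like (assuming EF Core and a context registered via AddDbContextPool) could be:

using System.Threading.Tasks;
using Microsoft.EntityFrameworkCore;

public class MyDbContext : DbContext
{
    public MyDbContext(DbContextOptions<MyDbContext> options) : base(options) { }

    public async Task PingDatabase()
    {
        // Opening and closing a connection is enough to warm the underlying
        // ADO.NET connection pool; Database.CanConnectAsync() would also work
        // on recent EF Core versions.
        await Database.OpenConnectionAsync();
        Database.CloseConnection();
    }
}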

Related

Need to create service at runtime using ServiceProvider in an Azure Function

I have a tricky requirement where I need to create a copy of a service that has been created via constructor DI in my Azure Function.
public MyFunction(IMyService myService,
    IServiceProvider serviceProvider,
    ServiceCollectionContainer serviceCollectionContainer)
{
    _myService = myService;
    _serviceProvider = serviceProvider;
    _serviceCollectionContainer = serviceCollectionContainer;
}

[FunctionName("diagnostic-orchestration")]
public async Task DiagnosticOrchestrationAsync(
    [OrchestrationTrigger] IDurableOrchestrationContext context)
{
}
This service has a lot of dependencies, so I don't really want to go down the manual Activator.CreateInstance route.
I have tried 2 different approaches
Approach 1
I have a ServiceCollectionContainer. This is filled in Configure of the startup and simply holds the services.
public override void Configure(IFunctionsHostBuilder builder)
{
    base.Configure(builder);
    var services = builder.Services;
    services.AddSingleton(s => new ServiceCollectionContainer(services));
}
In my function I call
var provider = _serviceCollectionContainer.ServiceCollection.BuildServiceProvider();
if (provider.GetService<IMyService>() is IMyService myService)
{
    await myService.MyMathodAsync();
}
This throws the error
System.InvalidOperationException: 'Unable to resolve service for type
'Microsoft.Azure.WebJobs.Script.IEnvironment' while attempting to activate
'Microsoft.Azure.WebJobs.Script.Configuration.ScriptJobHostOptionsSetup'.'
I believe this could be because, although the service collection looks fine (276 registered services), I have seen references online saying that Configure may be unreliable.
Approach 2
The second approach is the more conventional one: I just tried to use the injected service provider without making any changes.
if (_serviceProvider.GetService<IMyService>() is IMyService myService)
{
await myService.MyMathodAsync();
}
But if I use this approach I get the error
'Scope disposed{no name} is disposed and scoped instances are disposed and no longer availab
How can I fix this?
I have a large date range of data that I am processing. I need to split my date range and use my service to process each segment. My service has repositories, and each repository has a DbContext. Having each segment of dates run in the context of its own service allows me to run the processing in parallel without DbContext queries being run in parallel, which causes issues with EF Core.
This processing is running inside a durable function
I don't know if this holds true for Azure Functions, and I am not experienced with durable ones, but since the main goal seems to be running parallel queries via EF Core through your IMyService, you could take a scope factory in the constructor:
public MyFunction(IServiceScopeFactory serviceScopeFactory)
{
_serviceScopeFactory = serviceScopeFactory;
}
And then in the function call, assuming you have an IEnumerable "yourSegments" of the things you want to process in parallel:
var tasks = yourSegments.Select(async segment =>
{
    using (var scope = _serviceScopeFactory.CreateScope())
    {
        var myService = scope.ServiceProvider.GetRequiredService<IMyService>();
        await myService.MyMathodAsync(segment);
    }
});
await Task.WhenAll(tasks);
I got this from a nice blog post that explains: "Since we project our parameters to multiple tasks, each will have its own scope which can resolve its own DbContext instance."
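For this pattern to give each task its own DbContext, the service, its repositories, and the context itself need scoped lifetimes in the FunctionsStartup registrations. A rough sketch, where MyDbContext, MyRepository and MyService are placeholders for your own types:

using System;
using Microsoft.Azure.Functions.Extensions.DependencyInjection;
using Microsoft.EntityFrameworkCore;
using Microsoft.Extensions.DependencyInjection;

public class Startup : FunctionsStartup
{
    public override void Configure(IFunctionsHostBuilder builder)
    {
        // AddDbContext registers the context with a scoped lifetime by default,
        // so each IServiceScopeFactory.CreateScope() yields a fresh DbContext.
        builder.Services.AddDbContext<MyDbContext>(options =>
            options.UseSqlServer(Environment.GetEnvironmentVariable("SqlConnectionString")));

        // Register the rest of the chain as scoped as well.
        builder.Services.AddScoped<IMyRepository, MyRepository>();
        builder.Services.AddScoped<IMyService, MyService>();
    }
}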
You can create a 1:1 copy by using this extension method.
It is a large function, too large for SO, so I've put it in a pastebin here.
https://pastebin.com/1dKu01w9
Just call _myService.DeepCopyByExpressionTree(); within your constructor.

How do you send out from SignalR Hub from another class in the same project?

Background: I'm on .Net 6 running the latest and greatest from the built-in SignalR package. I installed #microsoft/signalr and the JS side of things works fine for Client -> Server communications. The issue is my Server -> Hub communications.
I have a class that, after updating some information, needs to broadcast out to whoever is listening that "this object was updated". Below is what I'm talking about.
public class SignalRRunner
{
    public SignalRRunner(ICompanyDIContainer companyContainer)
        : base(companyContainer)
    {
    }

    public async Task<bool> RunItAsync(Signal signal)
    {
        if (signal.userId.HasValue)
        {
            // Do work on the thing, update the db, etc. here
            await ChatHub.Static_Send("debug", "users", "accounts", userObject);
            return true;
        }
        return false;
    }
}
In my hub:
public static async Task Static_Send(string group, string whoUpdate, string whatUpdate, object payload)
{
    if (string.IsNullOrWhiteSpace(group))
    {
        group = "debug";
    }
    await Clients.Group(group).SendAsync("OnDebug", payload, new CancellationToken());
}
Due to limitations imposed by my company, I cannot directly inject the IHubContext into the constructor of anything: they use their own version of DI in the project, and it always throws an exception when I've tried. I've tried making the function non-static, registering the ChatHub in startup.cs, and resolving it in the class that's doing the work, but the Clients are null and this throws an error. Every other solution I've read suggests using GlobalHost.ConnectionManager.GetHubContext to get the HubContext from inside the static method, but that's no longer part of SignalR, so that's out. How do I send messages to the ChatHub from another class inside the same project?
I figured out how to accomplish this. What I did was add the SignalR client package to my project and create a hub connection. I added it to the IoC container we use as a singleton and just resolve it wherever I need it.
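Since the answer is terse, here is a hedged sketch of that approach using the Microsoft.AspNetCore.SignalR.Client package; the hub URL and the "SendToGroup" hub method name are placeholders for whatever your ChatHub actually exposes:

using System.Threading.Tasks;
using Microsoft.AspNetCore.SignalR.Client;

public static class HubSender
{
    // Build the connection once at startup and register it as a singleton in
    // whatever IoC container the project uses.
    public static HubConnection CreateConnection() =>
        new HubConnectionBuilder()
            .WithUrl("https://localhost:5001/chathub")
            .WithAutomaticReconnect()
            .Build();

    // Resolve the singleton connection from any class and send through it.
    public static async Task SendAsync(HubConnection connection, object payload)
    {
        if (connection.State == HubConnectionState.Disconnected)
        {
            await connection.StartAsync();
        }

        await connection.SendAsync("SendToGroup", "debug", payload);
    }
}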

Entity Framework query throws 'async error' after many requests

In my project using .NET framework 4.6.1, EF 6.1.4 and IdentityServer3, I set the following DbContext:
public class ValueContext : DbContext
{
    public ValueContext(bool lazyLoadingEnabled = false) : base("MyConnectionString")
    {
        Database.SetInitializer<ValueContext>(null);
        Configuration.LazyLoadingEnabled = lazyLoadingEnabled;
    }

    public DbSet<NetworkUser> NetworkUser { get; set; }
    public DbSet<User> User { get; set; }
    [...]
And my entity model NetworkUser:
[Table("shared.tb_usuarios")]
public class NetworkUser
{
[Column("id")]
[Key()]
public int Id { get; set; }
[Required]
[StringLength(255)]
[Column("email")]
public string Email { get; set; }
[...]
public virtual Office Office { get; set; }
[...]
So far, I think it's all good.
Then I set up the following query in my UserRepository (using DI):
protected readonly ValueContext Db;

public RepositoryBase(ValueContext db)
{
    Db = db;
}

public async Task<User> GetUser(string email)
{
    //sometimes I get some error here
    return await Db.User.AsNoTracking()
        .Include(u => u.Office)
        .Include(u => u.Office.Agency)
        .Where(u => u.Email == email &&
                    u.Office.Agency.Active)
        .FirstOrDefaultAsync();
}
And everything runs well until it starts to get many sequential requests; then I begin to get this type of error, randomly in any function that uses my ValueContext as a data source:
System.NotSupportedException: 'A second operation started on this context before a previous asynchronous operation completed. Use 'await' to ensure that any asynchronous operations have completed before calling another method on this context. Any instance members are not guaranteed to be thread safe.'
This is my last hope, as I've tried a bunch of different things. Some of them work, and some don't, like:
Convert the dbContext to use DI: no difference.
Use a context lifetime to run the queries: works, but isn't the solution I want.
Remove asynchronous calls from the requests: works, but I also feel it is not the correct way to do it.
What am I doing wrong?
EDIT 1
This is how I set up DI in Startup.cs:
private void AddAuth()
{
    Builder.Map("/identity", app =>
    {
        var factory = new IdentityServerServiceFactory()
        {
            //here I implemented the IdentityServer services to work
            ClientStore = new Registration<IClientStore>(typeof(ClientStore)),
            [...]
        };
        AddDependencyInjector(factory);
    });
    [...]
}

private void AddDependencyInjector(IdentityServerServiceFactory factory)
{
    //here I inject all the services I need, as my DbContext
    factory.Register(new Registration<ValueContext>(typeof(ValueContext)));
    [...]
}
And this is how my UserService is working:
public class UserService : IUserService
{
    [Service injection goes here]

    //this is a identityServer method using my dbContext implementation on UserRepository
    public async Task AuthenticateLocalAsync(LocalAuthenticationContext context)
    {
        SystemType clientId;
        Enum.TryParse(context.SignInMessage.ClientId, true, out clientId);

        switch (clientId)
        {
            case 2:
                result = await _userService.GetUser(context.UserName);
                break;
            case 3:
                //also using async/await correctly
                result = await _userService.Authenticate(context.UserName, context.Password);
                break;
            default:
                result = false;
                break;
        }

        if (result)
            context.AuthenticateResult = new AuthenticateResult(context.UserName, context.UserName);
    }
Update - After code posted
When using ASP.NET DI and IdentityServer DI together, we have to be careful to make sure that both IdentityServer and the underlying DbContext are scoped to the OWIN request context; we do that by injecting the DbContext into the IdentityServer context. This answer has some useful background: https://stackoverflow.com/a/42586456/1690217
I suspect all you need to do is resolve the DbContext, instead of explicitly instantiating it:
private void AddDependencyInjector(IdentityServerServiceFactory factory)
{
    //here I inject all the services I need, as my DbContext
    factory.Register(new Registration<ValueContext>(resolver => new ValueContext()));
    [...]
}
Supporting discussion, largely irrelevant now...
With EF it is important to make sure that there are no concurrent queries against the same DbContext instance at the same time. Even though you have specified AsNoTracking() for this endpoint there is no indication that this endpoint is actually the culprit. The reason for synchronicity is so that the context can manage the original state, there are many internals that are simply not designed for multiple concurrent queries, including the way the database connection and transactions are managed.
(Under the hood, the DbContext will pool and re-use connections to the database if they are available, but ADO.NET does this for us; it happens at a lower level and so is NOT an argument for maintaining a singleton DbContext.)
As a safety precaution, the context will actively block any attempts to re-query while an existing query is still pending.
EF implements the Unit-of-Work pattern: you are only expected to maintain the same context for the current operation and should dispose of it when you are done. It can be perfectly acceptable to instantiate a DbContext scoped to a single method, and you can instantiate multiple contexts if you need them.
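For illustration only, a minimal sketch of that method-scoped approach reusing the question's ValueContext (EF6); whether it fits depends on how much of the surrounding code relies on the injected context:

using System.Data.Entity;   // EF6: Include(lambda), FirstOrDefaultAsync
using System.Linq;
using System.Threading.Tasks;

public class UserReader
{
    public async Task<User> GetUser(string email)
    {
        // A fresh context per operation: nothing else can issue a concurrent
        // query against it, and it is disposed as soon as this unit of work ends.
        using (var db = new ValueContext())
        {
            return await db.User.AsNoTracking()
                .Include(u => u.Office)
                .Include(u => u.Office.Agency)
                .Where(u => u.Email == email && u.Office.Agency.Active)
                .FirstOrDefaultAsync();
        }
    }
}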
There is some anecdotal advice floating around the web, based on previous versions of EF, suggesting there is a heavy initialization sequence when you create the context and therefore encouraging singleton use of the EF context. This advice worked in non-async environments like WinForms apps, but it was never good advice for Entity Framework.
When using EF in a HTTP based service architecture, the correct pattern is to create a new context for each HTTP request and not try to maintain the context or state between requests. You can manually do this in each method if you want to, however DI can help to minimise the plumbing code, just make sure that the HTTP request gets a new instance, and not a shared or recycled one.
Because most client-side programming can create multiple concurrent HTTP requests (think of a web site: how many concurrent requests might go to the same server for a single page load?), it is a frivolous exercise to synchronise the incoming requests or introduce a blocking pattern to ensure that the requests to the DbContext are synchronous or queued.
The overheads to creating a new context instance are expected to be minimal and the DbContext is expected to be used in this way especially for HTTP service implementations, so don't try to fight the EF runtime, work with it.
Repositories and EF
When you are using a repository pattern over the top of EF... (IMO an antipattern itself) it is important that each new instance of the repository gets its own unique instance of the DbContext. Your repo should function the same if you instead created the DbContext instance from scratch inside the Repo init logic. The only reason to pass in the context is to have DI or another common routine to pre-create the DbContext instance for you.
Once the DbContext instance is passed into the Repo, we lose the ability to maintain synchronicity of the queries against it, this is an easy pain point that should be avoided.
No amount of await or using synchronous methods on the DbContext will help you if multiple repos are trying to service requests at the same time against the same DbContext.

How to correctly and safely dispose of singletons instances registered in the container when an ASP.NET Core app shuts down

I am looking for guidance on how to correctly and safely dispose of registered singleton instances when my ASP.NET Core 2.0 app is shutting down.
According to the following document, if I register a singleton instance (via IServiceCollection) the container will never attempt to create an instance (nor will it dispose of the instance), thus I am left to dispose of these instances myself when the app shuts down.
https://learn.microsoft.com/en-us/aspnet/core/fundamentals/dependency-injection?view=aspnetcore-2.0 (2.1 has the same guidance)
I enclose some pseudo code that illustrates what I am trying to achieve.
Note I am having to maintain a reference to IServiceCollection since the IServiceProvider provided to the OnShutDown method is a simple service locator and doesn't give me the ability to execute complex queries.
When the app shuts down I want a generic way to ensure all singleton instances are disposed. I could maintain a reference to all these singleton instances directly but this doesn't scale well.
I originally used the factory method, which would ensure the DI managed the lifetime of my objects; however, the execution of the factory method happened at runtime in the pipeline of handling a request, which meant that if it threw an exception the response was 500 InternalServerError and an error was logged. By creating the object directly I am striving for faster feedback, so that errors on startup lead to an automatic rollback during the deployment. This doesn't seem unreasonable to me, but at the same time I don't want to misuse the DI.
Does anyone have any suggestions how I can achieve this more elegantly?
namespace MyApp
{
    public class Program
    {
        private static readonly CancellationTokenSource cts = new CancellationTokenSource();

        protected Program()
        {
        }

        public static int Main(string[] args)
        {
            Console.CancelKeyPress += OnExit;
            return RunHost().GetAwaiter().GetResult();
        }

        protected static void OnExit(object sender, ConsoleCancelEventArgs args)
        {
            cts.Cancel();
        }

        static async Task<int> RunHost()
        {
            await new WebHostBuilder()
                .UseStartup<Startup>()
                .Build()
                .RunAsync(cts.Token);
            return 0;
        }
    }

    public class Startup
    {
        private IServiceCollection serviceCollection;

        public Startup()
        {
        }

        public void ConfigureServices(IServiceCollection services)
        {
            // This has been massively simplified, the actual objects I construct on the commercial app I work on are
            // a lot more complicated to construct and span several lines of code.
            services.AddSingleton<IDisposableSingletonInstance>(new DisposableSingletonInstance());

            // See the OnShutdown method below
            this.serviceCollection = services;
        }

        public void Configure(IApplicationBuilder app)
        {
            var applicationLifetime = app.ApplicationServices.GetRequiredService<IApplicationLifetime>();
            applicationLifetime.ApplicationStopping.Register(this.OnShutdown, app.ApplicationServices);
            app.UseAuthentication();
            app.UseMvc();
        }

        private void OnShutdown(object state)
        {
            var serviceProvider = (IServiceProvider)state;
            var disposables = this.serviceCollection
                .Where(s => s.Lifetime == ServiceLifetime.Singleton &&
                            s.ImplementationInstance != null &&
                            s.ServiceType.GetInterfaces().Contains(typeof(IDisposable)))
                .Select(s => s.ImplementationInstance as IDisposable).ToList();

            foreach (var disposable in disposables)
            {
                disposable?.Dispose();
            }
        }
    }
}
It's the DI's job to dispose of any IDisposable objects it creates, whether transient, scoped or singleton. Don't register existing singletons unless you intend to clean them up afterwards.
In the question's code there's no reason to register an instance of DisposableSingletonInstance. It should be registered with:
services.AddSingleton<IDisposableSingletonInstance,DisposableSingletonInstance>();
When the service provider gets disposed, it will call Dispose() on all the disposable instances it created. For web applications, that happens when RunAsync() ends.
The same holds for scoped services. In this case though, the instances will be disposed when the scope exits, eg when a request ends.
ASP.NET creates a scope for each request. If you want your service to be disposed when that request ends, you should register it with:
services.AddScoped<IDisposableSingletonInstance,DisposableSingletonInstance>();
Validation
For the latest edit:
By creating the object directly I am striving for faster feedback, so that errors on startup lead to an automatic rollback during the deployment.
That's a different problem. Deployment errors are often caused by bad configuration values, unresponsive databases etc.
Validating Services
A very quick & dirty way to check would be to instantiate the singleton once all startup steps are complete with :
services.GetRequiredService<IDisposableSingletonInstance>();
Validating Configuration
Validating the configuration is more involved but not that tricky. One could use Data Annotation attributes on the configuration classes for simple rules and use the Validator class to validate them.
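As a rough sketch of the Data Annotations route (the MySettings class and its section name are hypothetical here):

using System.ComponentModel.DataAnnotations;
using Microsoft.Extensions.Configuration;

// Hypothetical options class with simple Data Annotation rules.
public class MySettings
{
    [Required, Url]
    public string ServiceEndpoint { get; set; }

    [Range(1, 300)]
    public int TimeoutSeconds { get; set; }
}

public static class SettingsValidation
{
    // Call this at startup after building configuration; throws a
    // ValidationException (failing fast) if any attribute rule is violated.
    public static MySettings BindAndValidate(IConfiguration configuration)
    {
        var settings = configuration.GetSection("MySettings").Get<MySettings>();
        Validator.ValidateObject(settings, new ValidationContext(settings), validateAllProperties: true);
        return settings;
    }
}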
Another option is to create an IValidatable interface with a Validate method that has to be implemented by each configuration class. This makes discovery easy using reflection.
This article shows how the IValidatable interface can be used in conjunction with an IStartupFilter to validate all configuration objects when an application starts for the first time.
From the article :
public class SettingValidationStartupFilter : IStartupFilter
{
    readonly IEnumerable<IValidatable> _validatableObjects;

    public SettingValidationStartupFilter(IEnumerable<IValidatable> validatableObjects)
    {
        _validatableObjects = validatableObjects;
    }

    public Action<IApplicationBuilder> Configure(Action<IApplicationBuilder> next)
    {
        foreach (var validatableObject in _validatableObjects)
        {
            validatableObject.Validate();
        }

        //don't alter the configuration
        return next;
    }
}
The constructor gets all instances that implement IValidatable from the DI provider and calls Validate() on them
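To complete the picture, here is a sketch of the IValidatable interface and the registrations the filter relies on; the SqlSettings class and its rule are made up for illustration:

using System;
using Microsoft.AspNetCore.Hosting;
using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Options;

public interface IValidatable
{
    void Validate();
}

// Example configuration class that throws on bad values so the host fails at startup.
public class SqlSettings : IValidatable
{
    public string ConnectionString { get; set; }

    public void Validate()
    {
        if (string.IsNullOrWhiteSpace(ConnectionString))
            throw new Exception("SqlSettings.ConnectionString is missing.");
    }
}

public static class SettingsValidationRegistration
{
    // Bind the section, expose the bound instance as IValidatable, and register
    // the startup filter so Validate() runs before the app serves traffic.
    public static void AddValidatedSettings(IServiceCollection services, IConfiguration configuration)
    {
        services.Configure<SqlSettings>(configuration.GetSection("Sql"));
        services.AddSingleton<IValidatable>(sp => sp.GetRequiredService<IOptions<SqlSettings>>().Value);
        services.AddTransient<IStartupFilter, SettingValidationStartupFilter>();
    }
}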
That's not accurate. Singletons are disposed at app shutdown, though it's kind of not actually all that relevant because when the process stops, everything goes with it anyways.
The general rule of thumb is that when using DI, you should use DI all the way down, which then means you'll almost never be disposing on your own, anywhere. It's all about ownership. When you new stuff up yourself, you're also then responsible for disposing of it. However, when using DI, the container is what's newing things up, and therefore, the container and only the container should then dispose of those things.
Thanks for the responses, Panagiotis Kanavos and Chris Pratt, and for helping to clarify how best to deal with this scenario. The two takeaway points are these:
Always strive to let the container manage the life cycle of your objects so when the app is shutdown the container will automatically dispose of all objects.
Validate all your configuration on app startup before it is consumed by objects registered in the container. This allows your app to fail fast and protects your DI from throwing exceptions when creating new objects.

Application Variables in ASP.NET Core 2.0

How would I go about setting and accessing application-wide variables in ASP.NET Core 2.0?
Details:
I have a variable, let's call it CompanyName, which resides in the database and is used on literally every page. I don't want to hit the database every time I need to display the CompanyName. 100 years ago, I would have set Application["CompanyName"] = CompanyName, but I understand that this is not the way to do things in .NET Core. What would be the alternative?
A lot has progressed in the last 100 years. Some time ago, I believe in ASP.NET 1.0, the Application object from ASP classic was superseded by caching (although the Application object was left in for backward compatibility with ASP classic).
ASP.NET Core has replaced the caching mechanism of ASP.NET and made it DI-friendly, but it is still very similar to how things worked in ASP.NET. The main difference is that you now need to inject it instead of using the static HttpContext.Current.Cache property.
Register the cache at startup...
using Microsoft.AspNetCore.Builder;
using Microsoft.Extensions.DependencyInjection;

public class Startup
{
    public void ConfigureServices(IServiceCollection services)
    {
        services.AddMemoryCache();
        services.AddMvc();
    }

    public void Configure(IApplicationBuilder app)
    {
        app.UseMvcWithDefaultRoute();
    }
}
And you can inject it like...
public class HomeController : Controller
{
    private IMemoryCache _cache;

    public HomeController(IMemoryCache memoryCache)
    {
        _cache = memoryCache;
    }

    public IActionResult Index()
    {
        string companyName = _cache.Get<string>(CacheKeys.CompanyName);
        return View();
    }
}
Then to make it work application wide, you can use a filter or middleware combined with some sort of cache refresh pattern:
Attempt to get the value from the cache
If the attempt fails
Lookup the data from the database
Repopulate the cache
Return the value
public string GetCompanyName()
{
    string result;

    // Look for cache key.
    if (!_cache.TryGetValue(CacheKeys.CompanyName, out result))
    {
        // Key not in cache, so get data.
        result = // Lookup data from db

        // Set cache options.
        var cacheEntryOptions = new MemoryCacheEntryOptions()
            // Keep in cache for this time, reset time if accessed.
            .SetSlidingExpiration(TimeSpan.FromMinutes(60));

        // Save data in cache.
        _cache.Set(CacheKeys.CompanyName, result, cacheEntryOptions);
    }
    return result;
}
Of course, you could clean that up and make a service with strongly typed properties as a wrapper around your cache that is injected into controllers, but that is the general idea.
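For example, such a wrapper might look roughly like this (the ICompanyInfoService name and the database lookup are placeholders, and CacheKeys is the same key class used above):

using System;
using Microsoft.Extensions.Caching.Memory;

public interface ICompanyInfoService
{
    string GetCompanyName();
}

public class CompanyInfoService : ICompanyInfoService
{
    private readonly IMemoryCache _cache;

    public CompanyInfoService(IMemoryCache cache)
    {
        _cache = cache;
    }

    public string GetCompanyName() =>
        // GetOrCreate runs the factory only on a cache miss.
        _cache.GetOrCreate(CacheKeys.CompanyName, entry =>
        {
            entry.SlidingExpiration = TimeSpan.FromMinutes(60);
            return LoadCompanyNameFromDatabase();
        });

    // Placeholder for the real database query.
    private static string LoadCompanyNameFromDatabase() => "ACME Corp.";
}

// Registered once in ConfigureServices:
// services.AddSingleton<ICompanyInfoService, CompanyInfoService>();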
Note also there is a distributed cache in case you want to share data between web servers.
You could alternatively use a static method or a statically registered class instance, but do note if hosting on IIS that the static will go out of scope every time the application pool recycles. So, to make that work, you would need to ensure your data is re-populated using a similar refresh pattern.
The primary difference is that with caching there are timeout settings which can be used to optimize how long the data should be stored in the cache (either a hard time limit or a sliding expiration).
You could create a singleton class called ApplicationWideSettings. Give that class public properties, initialize all the values you need once, and then use them by accessing the only instance of your class via:
ApplicationWideSettings.Instance.PropertyName;
Just make sure the namespace of the ApplicationWideSettings class is referenced wherever you want to access it.
I prefer this over global/static settings because you have one class to save all your globally available data.
If you are unsure what a singleton is, I suggest you look at this article from Jon Skeet:
C# In Depth: Implementing the Singleton Pattern in C#
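A minimal sketch of such a class using Lazy<T> (the CompanyName property and the load step are placeholders for whatever values you need application-wide):

using System;

public sealed class ApplicationWideSettings
{
    private static readonly Lazy<ApplicationWideSettings> lazy =
        new Lazy<ApplicationWideSettings>(() => new ApplicationWideSettings());

    public static ApplicationWideSettings Instance => lazy.Value;

    // Populated once, on first access to Instance.
    public string CompanyName { get; }

    private ApplicationWideSettings()
    {
        // Placeholder: load the values from the database or configuration here.
        CompanyName = "ACME Corp.";
    }
}

// Usage anywhere in the app:
// string name = ApplicationWideSettings.Instance.CompanyName;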
