How to run startup code in an Azure Function - C#

I have an Azure Function like this:
[FunctionName("Function1")]
public static void Run([ServiceBusTrigger("myqueue", AccessRights.Manage, Connection = "AzureWebJobsServiceBus")]string myQueueItem, TraceWriter log)
{
    log.Info($"C# ServiceBus queue trigger function processed message: {myQueueItem}");
}
I want to dynamically bind myqueue and the AzureWebJobsServiceBus connection string at startup (or in some OnInit of the app) instead of hard-coding them in the method's parameters above. In other words, I want a method that runs before everything else, like Program.cs in a WebJob, where I can do the binding or set up global variables. Can I do that in an Azure Function, and how?
Many thanks

The attributes here are compiled into a function.json file before deployment; that file has the info on what the binding talks to. Often things like the connection string reference app settings. Neither of these can be modified within the code itself (so a Program.cs couldn't modify the function.json binding).
Can you share any more about your scenario? If you have multiple queues you want to listen to, could you deploy a function per queue? Given the serverless nature of Functions there isn't a downside to having extra functions deployed. Let me know - happy to see if we can help with what you need.

Edit
The suggestion below doesn't work for a Trigger, only for a Binding.
We have to wait for the team to support Key Vault endpoints in Azure Functions, see this GitHub issue.
I think what you are looking for is something called Imperative Bindings.
I've discovered them myself just yesterday and had a question about them as well. With this type of binding you can dynamically set up the bindings you want, so you can retrieve data from somewhere else (like a global variable, or some initialization code) and use it in the binding.
The thing I have used it for is retrieving some values from Azure Key Vault, but you can of course also retrieve data from somewhere else. Some sample code:
// Retrieving the secret from Azure Key Vault via a helper class
var connectionString = await secret.Get("CosmosConnectionStringSecret");
// Setting the AppSetting run-time with the secret value, because the Binder needs it
ConfigurationManager.AppSettings["CosmosConnectionString"] = connectionString;

// Creating an output binding
var output = await binder.BindAsync<IAsyncCollector<MinifiedUrl>>(new DocumentDBAttribute("TablesDB", "minified-urls")
{
    CreateIfNotExists = true,
    // Specify the AppSetting key which contains the actual connection string information
    ConnectionStringSetting = "CosmosConnectionString",
});

// Create the MinifiedUrl object
var create = new CreateUrlHandler();
var minifiedUrl = create.Execute(data);

// Adding the newly created object to Cosmos DB
await output.AddAsync(minifiedUrl);
There are also some other attributes you can use with imperative binding; I'm sure you'll see them in the docs (first link).
Instead of using Imperative Bindings, you can also use your application settings.
As a best practice, secrets and connection strings should be managed using app settings, rather than configuration files. This limits access to these secrets and makes it safe to store function.json in a public source control repository.
App settings are also useful whenever you want to change configuration based on the environment. For example, in a test environment, you may want to monitor a different queue or blob storage container.
App settings are resolved whenever a value is enclosed in percent signs, such as %MyAppSetting%. Note that the connection property of triggers and bindings is a special case and automatically resolves values as app settings.
The following example is an Azure Queue Storage trigger that uses an app setting %input-queue-name% to define the queue to trigger on.
{
    "bindings": [
        {
            "name": "order",
            "type": "queueTrigger",
            "direction": "in",
            "queueName": "%input-queue-name%",
            "connection": "MY_STORAGE_ACCT_APP_SETTING"
        }
    ]
}
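The same %appsetting% expression works in the C# attribute form, so the original Service Bus function could resolve its queue name from configuration as well. A minimal sketch, assuming a hypothetical app setting named ServiceBusQueueName (the Connection value is always treated as an app setting name):

[FunctionName("Function1")]
public static void Run(
    // %ServiceBusQueueName% is resolved from app settings when the host starts.
    [ServiceBusTrigger("%ServiceBusQueueName%", AccessRights.Manage, Connection = "AzureWebJobsServiceBus")] string myQueueItem,
    TraceWriter log)
{
    log.Info($"C# ServiceBus queue trigger function processed message: {myQueueItem}");
}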

Related

Azure functions output path returning UTC time even after changing Website time zone?

I have configured an Azure Function to output a file to a Blob container with the current date, using the datetime expression in the binding, but it is creating a UTC date folder.
I even changed WEBSITE_TIME_ZONE to my local time zone, referring to the list here, but it is still creating a UTC date folder in the blob when I want local time.
My binding code is:
{
    "connection": "AzureWebJobsStorage",
    "name": "Blobstr3",
    "path": "outcontainer/{datetime:ddMMyyyy}/{rand-guid}.txt",
    "direction": "out",
    "type": "blob"
}
It would be great if someone could please help me out here?
There is no binding expression for local time.
The official Azure Functions binding expression patterns documentation states:
The binding expression DateTime resolves to DateTime.UtcNow.
To use local time you would have to save the blob yourself or use binding at runtime (again from Azure Functions binding expression patterns):
Binding at runtime
In C# and other .NET languages, you can use an imperative binding pattern, as opposed to the declarative bindings in function.json and attributes. Imperative binding is useful when binding parameters need to be computed at runtime rather than design time. To learn more, see the C# developer reference or the C# script developer reference.
Here's an example of how to do it, from Develop C# class library functions using Azure Functions:
Single attribute example
The following example code creates a Storage blob output binding with blob path that's defined at run time, then writes a string to the blob.
public static class IBinderExample
{
    [FunctionName("CreateBlobUsingBinder")]
    public static void Run(
        [QueueTrigger("myqueue-items-source-4")] string myQueueItem,
        IBinder binder,
        ILogger log)
    {
        log.LogInformation($"CreateBlobUsingBinder function processed: {myQueueItem}");
        using (var writer = binder.Bind<TextWriter>(new BlobAttribute(
            $"samples-output/{myQueueItem}", FileAccess.Write)))
        {
            writer.Write("Hello World!");
        };
    }
}
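Applied to the original question, the same runtime-binding pattern lets you compute the folder name from local time yourself before binding. A minimal sketch, assuming Functions v2+ with ILogger; the time zone ID and container name here are placeholders to adjust:

using System;
using System.IO;
using System.Threading.Tasks;
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

public static class LocalTimeBlobExample
{
    [FunctionName("WriteBlobWithLocalDate")]
    public static async Task Run(
        [QueueTrigger("myqueue-items")] string myQueueItem,
        IBinder binder,
        ILogger log)
    {
        // Convert UTC to the desired local time zone yourself ("W. Europe Standard Time" is just an example).
        var zone = TimeZoneInfo.FindSystemTimeZoneById("W. Europe Standard Time");
        var localNow = TimeZoneInfo.ConvertTimeFromUtc(DateTime.UtcNow, zone);

        // Build the blob path at runtime with the local date instead of the {datetime} expression.
        var path = $"outcontainer/{localNow:ddMMyyyy}/{Guid.NewGuid()}.txt";

        using (var writer = await binder.BindAsync<TextWriter>(new BlobAttribute(path, FileAccess.Write)))
        {
            await writer.WriteAsync(myQueueItem);
            log.LogInformation($"Wrote blob {path}");
        }
    }
}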

How can I dynamically re-configure a Service during runtime in ASP.NET Core 5?

I am using Google authentication in my web app, and the OAuth keys are currently hard-coded in ConfigureServices:
services.AddAuthentication()
    .AddGoogle(options =>
    {
        options.ClientId = "my-client-id";
        options.ClientSecret = "my-client-secret";
    });
However, I would like to give the site administrator the opportunity to change the ClientId and ClientSecret from the web app's settings page, preferably without having to restart the server.
To do this, I'd have to somehow trigger a re-configuration of the Google service and the GoogleOptions object when the user hits 'Save' on the settings page. This is what I'm having trouble with. Also, I would like to store these settings in an EF Core DbContext, and not in a physical config file.
So far I've tried to move the settings to a separate class that implements IPostConfigureOptions. This should allow me to inject my database context, because based on the documentation, PostConfigure is supposed to run after all other configurations have occurred. The settings are loaded correctly from this new class, but the injection of the DB context fails with the following exception:
System.InvalidOperationException: Cannot consume scoped service 'AppDatabase' from singleton 'IOptionsMonitor`1[GoogleOptions]'
This is weird, because the ConfigureGoogleOptions is registered as Scoped, and not as a Singleton.
Here's my options class:
public class ConfigureGoogleOptions : IPostConfigureOptions<GoogleOptions>
{
    private readonly AppDatabase database;

    public ConfigureGoogleOptions(AppDatabase database)
    {
        this.database = database;
    }

    public void PostConfigure(string name, GoogleOptions options)
    {
        options.ClientId = "my-client-id.apps.googleusercontent.com";
        options.ClientSecret = "my-client-secret";
    }
}
And registering it in ConfigureServices:
services.AddScoped<IPostConfigureOptions<GoogleOptions>, ConfigureGoogleOptions>();
Even if the database injection worked, there's still a second problem. The PostConfigure function in my class only gets called once after the application starts, and never again. I assume it caches the settings somewhere, and I don't know how to invalidate or disable this cache so I can provide values dynamically.
Short Summary / tl;dr:
I want to load the ClientId and ClientSecret settings of the Google OAuth service from my own database, and I want to be able to change them dynamically while the server is running.
Internally, the Google handler uses IOptionsMonitor<GoogleOptions> to get the GoogleOptions once and keeps that instance until it is reloaded (for example, when the options are bound from a configuration file and saving the file triggers the reload). IOptionsMonitor internally uses IOptionsMonitorCache, and that cache is registered as a singleton. So the options instance you get from IOptionsMonitor<GoogleOptions> is the same instance (by reference) as AuthenticationHandler<GoogleOptions>.Options, which is used for various operations inside the handler. Any other code using those options should also get them from IOptionsMonitor<GoogleOptions>.
So changing the options at runtime is as simple as this:
// inject the IOptionsMonitor<GoogleOptions> into _googleOptionsMonitor
var runtimeOptions = _googleOptionsMonitor.Get(GoogleDefaults.AuthenticationScheme);
// you change properties of runtimeOptions here
// ...
The important point here is that we need to use GoogleDefaults.AuthenticationScheme as the key to get the correct options instance. IOptionsMonitor.CurrentValue uses the default key Options.DefaultName (which is an empty string).
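As an illustration only (not code from the original answer), a settings service invoked by the admin's 'Save' action could persist the values and then mutate the cached options instance in one go; the SiteSettings entity and its property names are hypothetical:

using System.Threading.Tasks;
using Microsoft.AspNetCore.Authentication.Google;
using Microsoft.EntityFrameworkCore;
using Microsoft.Extensions.Options;

public class GoogleAuthSettingsService
{
    private readonly IOptionsMonitor<GoogleOptions> _googleOptionsMonitor;
    private readonly AppDatabase _database;

    public GoogleAuthSettingsService(IOptionsMonitor<GoogleOptions> googleOptionsMonitor, AppDatabase database)
    {
        _googleOptionsMonitor = googleOptionsMonitor;
        _database = database;
    }

    public async Task SaveAsync(string clientId, string clientSecret)
    {
        // Persist the new values (hypothetical SiteSettings entity on the existing AppDatabase context).
        var settings = await _database.SiteSettings.FirstAsync();
        settings.GoogleClientId = clientId;
        settings.GoogleClientSecret = clientSecret;
        await _database.SaveChangesAsync();

        // Mutate the cached options instance the Google handler is already using.
        var runtimeOptions = _googleOptionsMonitor.Get(GoogleDefaults.AuthenticationScheme);
        runtimeOptions.ClientId = clientId;
        runtimeOptions.ClientSecret = clientSecret;
    }
}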

Bot Framework: How to change the table storage connection string in Startup based on the incoming request

I'm using Bot Framework (v4) integrated with LUIS. In the ConfigureServices(IServiceCollection services) method in the Startup.cs file I'm assigning storage and LUIS in the middleware. Below is the sample code.
public void ConfigureServices(IServiceCollection services)
{
    services.AddSingleton(configuration);
    services.AddBot<ChoiceBot>(options =>
    {
        options.CredentialProvider = new ConfigurationCredentialProvider(configuration);
        var (luisModelId, luisSubscriptionKey, luisUri) = GetLuisConfiguration(configuration, "TestBot_Dispatch");
        var luisModel = new LuisModel(luisModelId, luisSubscriptionKey, luisUri);
        var luisOptions = new LuisRequest { Verbose = true };
        options.Middleware.Add(new LuisRecognizerMiddleware(luisModel, luisOptions: luisOptions));

        // Azure storage emulator
        //options.Middleware.Add(new ConversationState<Dictionary<string, object>>(new AzureTableStorage("UseDevelopmentStorage=true", "conversationstatetable")));
        IStorage dataStore = new AzureTableStorage("DefaultEndpointsProtocol=https;AccountName=chxxxxxx;AccountKey=xxxxxxxxx;EndpointSuffix=core.windows.net", "TableName");
        options.Middleware.Add(new ConversationState<Dictionary<string, object>>(new MemoryStorage()));
        options.Middleware.Add(new UserState<UserStateStorage>(dataStore));
    });
}
My bot will be getting requests from users with different roles (admin, sales, etc.). I want to change the table storage connection string passed to the middleware based on the role extracted from the incoming request. I get the user role by querying the DB with the user name extracted from the current TurnContext of the incoming request. I'm able to do this in the OnTurn method, but as these are already declared in the middleware, I wanted to change them while initializing the middleware itself.
In .NET Core, Startup logic is only executed once at, er, startup.😊
If I understand you correctly, what you need to be able to do is: at runtime, switch between multiple storage providers that, in your case, are differentiated by their underlying connection string.
There is nothing "in the box" that enables this scenario for you, but it is possible if you use the correct extension points and write the correct plumbing yourself. Specifically, you can provide a customized abstraction at the IStatePropertyAccessor<T> layer, and your upstream code would continue to work at that level of abstraction and be none the wiser.
Here's an implementation I've started that includes something I'm calling the ConditionalStatePropertyAccessor. It lets you create a sort of composite IStatePropertyAccessor<T> that is configured with a default/fallback instance plus N other instances, each supplied with a selector function that inspects the incoming ITurnContext and, based on any detail of the turn, indicates that it is the instance to use for the scope of that turn. Take a look at the tests and you can see how I configure a sample that chooses an implementation based on the ChannelId, for example.
I am a little busy at the moment and can't ship this right now, but I intend to package it up and ship it eventually. However, if you think it would be helpful, please feel free to just copy the code for your own use. 👍
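The linked code isn't reproduced here, but the selector-based composite idea can be sketched roughly like this (an illustration of the pattern described above, not the author's actual ConditionalStatePropertyAccessor):

using System;
using System.Collections.Generic;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Bot.Builder;

// Picks one of several underlying accessors per turn, falling back to a default.
public class RoleBasedStatePropertyAccessor<T> : IStatePropertyAccessor<T>
{
    private readonly IStatePropertyAccessor<T> _fallback;
    private readonly IReadOnlyList<(Func<ITurnContext, bool> Selector, IStatePropertyAccessor<T> Accessor)> _candidates;

    public RoleBasedStatePropertyAccessor(
        IStatePropertyAccessor<T> fallback,
        IReadOnlyList<(Func<ITurnContext, bool>, IStatePropertyAccessor<T>)> candidates)
    {
        _fallback = fallback;
        _candidates = candidates;
    }

    public string Name => _fallback.Name;

    public Task<T> GetAsync(ITurnContext turnContext, Func<T> defaultValueFactory = null, CancellationToken cancellationToken = default)
        => Resolve(turnContext).GetAsync(turnContext, defaultValueFactory, cancellationToken);

    public Task SetAsync(ITurnContext turnContext, T value, CancellationToken cancellationToken = default)
        => Resolve(turnContext).SetAsync(turnContext, value, cancellationToken);

    public Task DeleteAsync(ITurnContext turnContext, CancellationToken cancellationToken = default)
        => Resolve(turnContext).DeleteAsync(turnContext, cancellationToken);

    private IStatePropertyAccessor<T> Resolve(ITurnContext turnContext)
    {
        // First selector that matches the current turn wins; otherwise use the fallback.
        foreach (var (selector, accessor) in _candidates)
        {
            if (selector(turnContext))
            {
                return accessor;
            }
        }
        return _fallback;
    }
}

Each inner accessor would come from a state object backed by a different AzureTableStorage connection string, and the selectors would check the role you resolve from the turn.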

Invalid cache key parameter specified when enabling caching for a path parameter in AWS API Gateway

I have a serverless web API (API Gateway + Lambda) that I have built in C# and deployed via Visual Studio. This is achieved via a serverless.yml file that auto-creates a CloudFormation template, then that template is applied to create the API stack.
Once my stack is deployed, I have gone into the AWS Console to enable caching on one of the path parameters, but get this error:
(screenshot: https://ibb.co/B4wmRRj)
I'm aware of this post https://forums.aws.amazon.com/thread.jspa?messageID=711315&#711315 which details a similar but different issue where the user can't uncheck caching. My issue is I can't enable it to begin with. I also don't understand the steps provided to resolve the issue within that post. There is mention of using the AWS CLI, but not what commands to use, or what to do exactly. I have also done some reading on how to enable caching through the serverless.yml template itself, or cloud formation, but the examples I find online don't seem to match up in any way to the structure of my serverless file or resulting CF template. (I can provide examples if required). I just want to be able to enable caching on path parameters. I have been able to enable caching globally on the API stage, but that won't help me unless I can get the caching to be sensitive to different path parameters.
serverless.yml
"GetTableResponse" : {
"Type" : "AWS::Serverless::Function",
"Properties": {
"Handler": "AWSServerlessInSiteDataGw::AWSServerlessInSiteDataGw.Functions::GetTableResponse",
"Runtime": "dotnetcore2.0",
"CodeUri": "",
"MemorySize": 256,
"Timeout": 30,
"Role": null,
"Policies": [ "AWSLambdaBasicExecutionRole","AWSLambdaVPCAccessExecutionRole","AmazonSSMFullAccess"],
"Events": {
"PutResource": {
"Type": "Api",
"Properties": {
"Path": "kata/table/get/{tableid}",
"Method": "GET"
}
}
}
}
}
},
"Outputs" : {
"ApiURL" : {
"Description" : "API endpoint URL for Prod environment",
"Value" : { "Fn::Sub" : "https://${ServerlessRestApi}.execute-api.${AWS::Region}.amazonaws.com/Prod/" }
}
}
--Update Start--
The reason you are getting the Invalid cache key parameter specified error is that you did not explicitly call out the path parameter section.
This is because, although the UI somehow extrapolated that there is a path parameter, it has not been explicitly called out in the API Gateway configuration.
I tested with the configuration below and was able to replicate the behavior on the console. To resolve this, see the Point 1 section of my full answer.
functions:
  katatable:
    handler: handler.katatable
    events:
      - http:
          method: get
          path: kata/table/get/{tableid}
--Update End--
Here you go. I still don't have your exact serverless.yml so I created a sample of mine similar to yours and tested it.
serverless.yml
functions:
  katatable:
    handler: handler.katatable
    events:
      - http:
          method: get
          path: kata/table/get/{tableid}
          request:
            parameters:
              paths:
                tableid: true

resources:
  Resources:
    ApiGatewayMethodKataTableGetTableidVarGet:
      Properties:
        Integration:
          CacheKeyParameters:
            - method.request.path.tableid
The above should make the tableid path parameter cached.
Explanation:
Point 1. You have to make sure that in your events, after your method and path, the section below is created; otherwise the resources section with CacheKeyParameters that follows will fail. Note: the boolean true means the path parameter is required. Once you explicitly call out the path parameter, you should also be able to enable caching via the console, without the resources section.
request:
  parameters:
    paths:
      tableid: true
Point 2. The resources section tells API Gateway to enable caching on the tableid path parameter. This is nothing but the serverless interpretation of CloudFormation template syntax. How did I know I had to use ApiGatewayMethodKataTableGetTableidVarGet to make it work? Just read the guidelines and tip below to get the name.
https://serverless.com/framework/docs/providers/aws/guide/resources/
Tip: If you are unsure how a resource is named that you want to reference from your custom resources, you can issue a serverless package. This will create the CloudFormation template for your service in the .serverless folder (it is named cloudformation-template-update-stack.json). Just open the file and check for the generated resource name.
What does the above mean? First run serverless package without the resources section, find the .serverless folder in your directory, and open the JSON file mentioned above. Look for AWS::ApiGateway::Method; you will get the exact normalized name (ApiGatewayMethodKataTableGetTableidVarGet) that you can use in the resources section.
Here are some references I used.
https://medium.com/@dougmoscrop/i-set-up-api-gateway-caching-here-are-some-things-that-surprised-me-7526d954fbe6
https://serverless.com/framework/docs/providers/aws/events/apigateway#request-parameters
PS - If you still need CLI steps to enable it, let me know.

How do I start a previously stopped Azure Container instance using C#?

I have the following code to stop an Azure container instance and would like to start it using similar.
using Microsoft.Azure.Management.Compute.Fluent.Models;
using Microsoft.Azure.Management.Fluent;
using Microsoft.Azure.Management.ResourceManager.Fluent;
using Microsoft.Azure.Management.ResourceManager.Fluent.Core;

var credentials = SdkContext.AzureCredentialsFactory.FromServicePrincipal("XXXX", "XXXX", "XXXX", AzureEnvironment.AzureGlobalCloud);

var azure = Azure
    .Configure()
    .WithLogLevel(HttpLoggingDelegatingHandler.Level.Basic)
    .Authenticate(credentials)
    .WithSubscription("XXXXX");

var containerName = "mycontainer";
var containerGroup = azure.ContainerGroups.GetByResourceGroup("myResourceGroup", containerName);

if (containerGroup.State == "Running")
{
    containerGroup.Stop();
}
I would like to do the same to start my Azure container instance. So where is containerGroup.Start();? It does not appear to exist in the interface. I have tried containerGroup.Restart(); but that does not work from a stopped state. I need to be able to do this from within C# code and would like to avoid PowerShell if possible.
There is a way to do this but it is not exposed in the fluent API:
using Microsoft.Azure.Management.ContainerInstance.Fluent;

// azure is an instance of IAzure; the fluent Azure API
var resources = await azure.ContainerGroups.ListAsync();

foreach (var containerGroup in resources.Where(aci => aci.State != "Running"))
{
    await ContainerGroupsOperationsExtensions.StartAsync(
        containerGroup.Manager.Inner.ContainerGroups,
        containerGroup.ResourceGroupName,
        containerGroup.Name);
}
As mentioned by other people, you do need to realize that this is effectively starting a fresh container. No state will be maintained from the previous run unless you persisted that somewhere else like in a mounted volume.
You'll also need to grant the appropriate rights to whoever is executing this code. I'm using a Function, so I had to set up a service account and a role; this blog post has all the details.
Update
The code I'm using is in on GitHub: https://github.com/alanta/azure_scheduler/blob/master/src/StartACIs.cs
Unfortunately, when you stop the container instances, they would be in the Terminated state and you cannot start them again.
Terminated or deleted container groups can't be updated. Once a container group has stopped (is in the Terminated state) or has been deleted, the group is deployed as new.
Even if you update the ACI, that also means the ACI gets redeployed. You can take a look at Update containers in Azure Container Instances. In addition, the Restart action only works while the container instances are in the running state.
So there is no start function in the C# SDK for you, at least for now. Hope this helps.
Update
Take a look at the events: each time you start the container group after stopping it, the container group always goes through these steps: pull the image -> create the container group -> start the container instances. So it's clear that the container group is recreated when you start it after a stop.
